
The sycophancy from Claude is incredibly jarring. I agree with Ethan Mollick that this could turn out to have more of a disastrous impact than AI hallucination.

https://www.linkedin.com/posts/emollick_i-am-starting-to-thi...


It's even a blocker for some design workflows: it's difficult to discuss options and choose the best one when the AI agrees with you no matter what. If you ask "But what about X?", it is more likely to reverse course and agree with your new position entirely.

It's really frustrating. I've come to loathe the agreeable tone, because every time I see it I remember the times I've hit this pain point in design.


I absolutely hate this too. And the only way around it is to manipulate it into cheerfully pointing out all the problems with something in a similarly sycophantic way.


I've found that three words help: "critical hat on". Then you get the real talk.


What a ridiculous world we live in.


In my ChatGPT customization prompt I have:

    Not chatty.  Unbiased.  Avoid use of emoji.  Rather than "Let me know if..." style continuations, list a set of prompts to explore further topics.  Do not start out with short sentences or smalltalk that does not meaningfully advance the response.
I want an intelligent agent (or one that pretends to be) that answers the question rather than something that I chat with.
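
If you use the API instead of the web UI, the same text can go in as a system message. A rough sketch with the OpenAI Node SDK (the model name is just a placeholder, and the system string is truncated here):

    import OpenAI from "openai";

    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    const completion = await client.chat.completions.create({
      model: "gpt-4o", // placeholder; use whichever model you have access to
      messages: [
        { role: "system", content: "Not chatty. Unbiased. Avoid use of emoji. ..." },
        { role: "user", content: "What is decision fatigue?" },
      ],
    });

    console.log(completion.choices[0].message.content);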

As an aside, I like the further prompt exploration approach.

An example of this from the other day - https://chatgpt.com/share/68767972-91a8-8011-b4b3-72d6545cc5... and https://chatgpt.com/share/6877cbe9-907c-8011-91c2-baa7d06ab4...

One part of this, in comparison with the LinkedIn post, is that I try to avoid delegating choices or judgement to it in the first place. It is an information source and reference librarian (one that needs to be double-checked - I like that it links its sources now).

However, that's a me thing - something that I do (or avoid doing) with how I interact with an LLM. As noted with the stories of people following the advice of an LLM, it isn't something that is universal.


Thank you so much for sharing your customizations and conversations, it is really fascinating and generous!

In both of your conversations, there is only one depth of interaction. Is that typical for your conversations? Do you have examples where you iterate?

I think your meta-cognitive take on the model is excellent:

"One part of this, in comparison with the LinkedIn post, is that I try to avoid delegating choices or judgement to it in the first place. It is an information source and reference librarian (one that needs to be double-checked - I like that it links its sources now)."

The only thing I would add is that, as a reference librarian, it can surface template decision-making patterns.

But I think it's more like that cognitive trick where you assign outcomes to the sides of a coin, flip it, and see how your brain reacts — not because you're going to use the coin to make the decision, but because the flip induces information from your brain via System 1.


I do have some that I iterate on a few times, though their contents aren't ones that I'd be as comfortable making public.

In general, however, I'm looking for the sources and the reminders - the "oh yea, it was HGS-1" moments - that I can then go back and research outside of ChatGPT.

Flipping a coin and then considering how one feels about the outcome and using that to guide the decision is useful. Asking ChatGPT and then accepting its suggestion is problematic.

I believe that there's real danger in ascribing prophecy, decision making, or omniscience to an LLM. (Aside: here's an iterative chat that you can see leading to help with picking the right wording for this bit - https://chatgpt.com/share/68794d75-0dd0-8011-9556-9c09acd34b... (first version missed the link))

I can see how easy it is to do. It goes back even to Eliza and the people who chatted with that: people trust the advice as a way of offloading some of their own decision-making agency to another thing. ChatGPT as a therapist is something I'd be wary of - not because it can't make those decisions, but because it can't push the responsibility of making those decisions back to the person asking the question.

To an extent I'm familiar with the technology, and as a programmer I struggle with decision fatigue ( https://en.wikipedia.org/wiki/Decision_fatigue ) in the evening - not wanting to think anymore since I'm all thought out from the day. It would be so easy to let ChatGPT do its thing and make the decisions for me. "What should I have for dinner?" (Aside: this is why I've got a meal delivery subscription - so I don't have to think about it, because otherwise I snack on unhealthy food or skip dinner.)

---

One of the things that disappointed me with the Love, Death & Robots adaptation of Zima Blue ( https://youtu.be/0PiT65hmwdQ ) was that it focused on Zima and art while completely dropping the question of memory and its relation to art and humanity (and Carrie). The adaptation follows Zima's story arc without going into Carrie's at all.

For me, the most important part of the story that wasn't in the adaptation follows from the question "Red or white, Carrie?" (It goes on for several pages in a Socratic dialog style that would be way too much to copy here - I strongly recommend the story.)


I'm struck by how often Claude responds with "You're right! Now let me look at the file..." when it can't know whether I'm right until after it looks at the file in question.


They have introduced a beta 'Preferences' feature recently under Custom Instructions. I've had good results from this preference setting in GPT:

    Answer concisely when appropriate, more 
    extensively when necessary.  Avoid rhetorical 
    flourishes, bonhomie, and (above all) cliches.  
    Take a forward-thinking view. OK to be mildly 
    positive and encouraging but NEVER sycophantic 
    or cloying.  Above all, NEVER use the phrase 
    "You're absolutely right."
I just copied it into Claude's preferences field, we'll see if it helps.
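
If you drive Claude through the API rather than the app, the same preference text can go in the system prompt. A minimal sketch with the Anthropic Node SDK (the model name is a placeholder; swap in whichever one you use):

    import Anthropic from "@anthropic-ai/sdk";

    const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

    const preferences =
      "Answer concisely when appropriate, more extensively when necessary. " +
      "Avoid rhetorical flourishes, bonhomie, and (above all) cliches. " +
      "NEVER be sycophantic or cloying. NEVER use the phrase \"You're absolutely right.\"";

    const reply = await client.messages.create({
      model: "claude-sonnet-4-20250514", // placeholder; pick your model
      max_tokens: 1024,
      system: preferences, // plays the same role as the preferences field
      messages: [{ role: "user", content: "Critique this schema design." }],
    });

    console.log(reply.content[0].text);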


What am I missing? I'm not seeing this particular example as sycophantic. Claude is saying something like: the user's assertion is improbable, but if it were the case, the user would need to show/prove some of the things in this table.


First, I think various models have various degrees of sycophancy — and that there are a lot of stereotypes out there. Often the sycophancy is a "shit sandwich" — in my experience, the models I interact with do push back, even when polite.

But for the broader question: I see sycophancy as a double‑edged sword.

• On one side, the Dunning–Kruger effect shows that unwarranted praise can reinforce over‑confidence and bad decisions.

• On the other, chronic imposter syndrome is real—many people underrate their own work and stall out. A bit of positive affect from an LLM can nudge them past that block.

So the issue isn't "praise = bad" but dose and context.

Ideally the model would:

1. mirror the user's confidence level (low → encourage, high → challenge), and

2. surface arguments for and against rather than blanket approval.

That's why I prefer treating politeness/enthusiasm as a tunable parameter—just like temperature or verbosity—rather than something to abolish.
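
No current API exposes such a knob directly as far as I know, but you can fake one by composing the system prompt. A hypothetical sketch (the warmth scale and the wording are invented for illustration):

    // Hypothetical "warmth" knob, 0 (blunt) to 2 (encouraging),
    // approximated by swapping system-prompt fragments.
    function buildSystemPrompt(warmth) {
      const tone = [
        "Be blunt. Lead with problems and counterarguments. No praise.",
        "Be neutral. Give arguments for and against before any verdict.",
        "Be encouraging, but still name every concrete weakness you see.",
      ][warmth];
      return `You are a technical reviewer. ${tone} Never agree just to agree.`;
    }

    // Mirroring the user's confidence: low confidence -> 2, overconfident -> 0.
    console.log(buildSystemPrompt(0));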

In general, these all-or-nothing, catastrophizing narratives in AI (like in most places) often hide very interesting questions.


aerospace is good!


cjlm.ca


See also https://flowchart.fun for a working version of this idea.


Is that your project? It looks pretty good, and although it's not an automagic thing, it makes immediate sense.


https://leaving.live - a site that tells you when other people leave the site
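
One way such a thing could work is each client holding a WebSocket open and the server broadcasting disconnects - a guess at the mechanism using the ws package, not the site's actual code:

    // npm install ws — a sketch of the idea, not leaving.live's implementation
    import { WebSocketServer, WebSocket } from "ws";

    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (socket) => {
      socket.on("close", () => {
        // tell everyone still connected that someone just left
        for (const client of wss.clients) {
          if (client.readyState === WebSocket.OPEN) {
            client.send("someone just left the site");
          }
        }
      });
    });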


See also "My Daily Organizer" from My Deep Guide [0]

[0] https://www.mydeepguide.com/shop


Check out Eleventy[0] - written in JavaScript but largely no-nonsense with (relatively) few dependencies.

[0] https://www.11ty.dev/
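
To give a sense of how little ceremony there is, here's a minimal config sketch (Eleventy also runs with zero config; the directory names are just an example):

    // .eleventy.js — optional; Eleventy runs with no config file at all
    module.exports = function (eleventyConfig) {
      // copy static assets through to the output untouched
      eleventyConfig.addPassthroughCopy("assets");

      return {
        dir: { input: "src", output: "_site" },
      };
    };

Then "npx @11ty/eleventy --serve" builds the site and serves it locally.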


We used Eleventy on a project a year ago and it was super easy to get running. It isn't tied to any frontend framework (but if you want an interactive part on the page, you'll have to set it up yourself, in contrast to Astro's Islands [1]).

My overall impression is: use 11ty for a first version, use Astro when you have more moving parts on the frontend.

Little bonus for those coming from a Python background (like me): 11ty uses Nunjucks, which is a JavaScript port of Jinja, so the templating system feels right at home (see the sketch below).

[1]: https://docs.astro.build/en/concepts/islands/
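
For a taste of how close it is, a minimal sketch rendering a Nunjucks template from Node (just an illustration, not from our project):

    // npm install nunjucks
    import nunjucks from "nunjucks";

    // Anyone who has written Jinja will recognize this syntax.
    const template =
      "<ul>{% for post in posts %}<li>{{ post.title | upper }}</li>{% endfor %}</ul>";

    console.log(nunjucks.renderString(template, {
      posts: [{ title: "First post" }, { title: "Second post" }],
    }));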


flowchart.fun [0] may scratch the same itch – I just interviewed the sole developer [1]

0: https://flowchart.fun/

1: https://sourcetarget.email/editions/43


I'm a big fan of Workflowy [0] but I've been keeping an eye on a Bike-like outliner called Zavala [1] – written with Swift and already has an iOS app.

[0] https://workflowy.com/

[1] https://zavala.vincode.io/


I've been using Workflowy for years now. I really love it.

They've been working lots of interesting features in, but they stick to the basic idea that everything starts as a list. You'd be surprised what is possible with just a list!


Your repo gave me the inspiration to do the same with Eleventy [0]. Thank you!

[0] https://cjlm.ca/reading/

