
Thank you so much for sharing your customizations and conversations, it is really fascinating and generous!

In both of your conversations, there is only a single round of back-and-forth. Is that typical for your conversations? Do you have examples where you iterate?

I think your meta-cognitive take on the model is excellent:

"One part of this in comparison with the linked in post is that I try to avoid delegating choices or judgement to it in the first place. It is an information source and reference librarian (that needs to be double checked - I like that it links its sources now)."

The only thing I would add is that, as a reference librarian, it can surface templates for decision-making patterns.

But I think it's more like that cognitive trick where you assign outcomes to the sides of a coin, flip it, and watch how your brain reacts. It's not that you're going to use the coin to make the decision; you're using the coin to elicit information from your brain via System 1.



I do have some that I iterate on a few times, though their contents aren't something I'd be as comfortable making public.

In general, however, I'm looking for the sources and other things that jog the "oh yeah, it was HGS-1" memory, which I can then go back and research outside of ChatGPT.

Flipping a coin and then considering how one feels about the outcome and using that to guide the decision is useful. Asking ChatGPT and then accepting its suggestion is problematic.

I believe that there's real danger in ascribing prophecy, decision making, or omniscience to an LLM. (Aside: here's an iterative chat that you can see leading to the right wording for this bit - https://chatgpt.com/share/68794d75-0dd0-8011-9556-9c09acd34b... (first version missed the link))

I can see how easy that is to do. It goes all the way back to ELIZA and the people who chatted with it; I see people trusting the advice as a way of offloading some of their own decision-making agency onto another thing. ChatGPT as a therapist is something I'd be wary of: not because it can't make those decisions, but because it can't push the responsibility for making those decisions back onto the person asking the question.

To an extent, being familiar with the technology and struggling, as a programmer, with decision fatigue ( https://en.wikipedia.org/wiki/Decision_fatigue ) in the evenings (not wanting to think anymore since I'm all thought out from the day)... it would be so easy to let ChatGPT do its thing and make the decisions for me. "What should I have for dinner?" (Aside: this is why I've got a meal delivery subscription, so that I don't have to think about that; otherwise I snack on unhealthy food or skip dinner.)

---

One of the things that disappointed me with the Love, Death & Robots adaptation of Zima Blue ( https://youtu.be/0PiT65hmwdQ ) was that it focused on Zima and art while completely dropping the question of memory and its relation to art and humanity (and Carrie). The adaptation follows Zima's story arc without going into Carrie's.

For me, the most important part of the story that wasn't in the adaptation follows from the question "Red or white, Carrie?" (It goes on for several pages in a Socratic dialogue style that would be way too much to copy here - I strongly recommend the story.)



