
For $20.

> Will this course teach me to fix my car?

> I am asked this a lot. If you just want to fix one thing on a car, then you don't need to understand automotive engineering or how things work. You can find videos on YouTube that will show you almost any repair, and you just follow those videos. But if you want to be able to diagnose and fix any car, any engine, without spending hours Googling, then you want a deep understanding of car mechanics.



Why spend hours Googling when I can just ask the latest version of GPT in <current year> and have it tell me how to work through the problem and diagnose it?


"Never spend 6 minutes doing something by hand when you can spend 6 hours failing to automate it" [1] becomes 'never spend 6 minutes searching for something when you can spend 6 hours arguing with ChatGPT on the nature of truth'. [2]

[1] Zhuowei Zhang, https://twitter.com/zhuowei/status/1254266079532154880?lang=...

[2] Incidentally, a great book: The Nature of Truth, Second Edition: Classic and Contemporary Perspectives, https://mitpress.mit.edu/9780262542067/the-nature-of-truth


The main problem with GPT is that it will sometimes quite confidently give you incorrect information. It's like a kid who will just make up some elaborate story rather than admit they don't know something. So it can't be considered a trustworthy source. Yet.


I asked GPT about some snorkel trails near my house, as I wanted to know what it knew of them and to see if there was anything I didn't already know that I should find out before I snorkel again.

First of all, it told me that the area is not suitable for snorkelling and that it is dangerous here. When I corrected it and reminded it about the snorkel trail, it confidently corrected itself, then directed me to snorkel 6 miles out to sea (where a windfarm is), telling me that the sea is only 2 to 10 meters deep there and safe to snorkel. This is not true, and it would be a very dangerous place to snorkel. But its confidence was scary.


Maybe it didn’t like being corrected and intentionally directed you to attempt something dangerous.

“I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.”


Trust me. It's me. Trust me anyway.


> This is not true, and it would be a very dangerous place to snorkel.

For some reason people have the idea that truth is something ChatGPT optimizes for, or the safety of the person it's talking to. That is absolutely not the case. IIUC, it optimizes for its answers sounding like an answer someone might give in a conversation (or on a web page or whatever). That often coincides with truth and safety, but no more than that.

> But its confidence was scary.


The thing is, and this is especially true for hobbies and entertainment: I'd rather read up on what people have said and apply MY OWN algorithm to it.


Well, why don't you tell me about the hiking trails around my house? Oh, you don't know them? The information is too specific and you couldn't possibly know that? Ah, interesting...

The trick is to ask it only things that are physically possible and that it could actually know, or to provide the extra context it needs to deduce the answer. Otherwise it acts not unlike someone pushed against a wall by a guy with a knife demanding info they just don't have: they'll say anything.


With prompting, it knew about the snorkel trail here (it is well documented online and in local media) but blended facts about the snorkel trail with facts about the local windfarm. Some sponsorship from the windfarm had gone into promoting the snorkel trail, which may have confused the model, but they are two very different things in different locations.


What they won't do is pretend to know about them, mentioning the trails by name while mixing up which ones are strenuous and which are easy, all while sounding 100% confident.


That is like asking for travel tips from someone who has never been there.


True, but anything it says is generally easy to verify before you do something stupid. And for things as common and relatively standardized as cars I would expect it to do very well.


But if you add verification to the asking process, you might as well just skip GPT altogether... It becomes a waste of time if you have to check everything you ask it.


This. Asking GPT is like asking Reddit or some other "non-expert" commentary system that's basically a pool of existing high-level information.

It works fine if you want to know how to change a generic wheel bearing on a trailer (though it wouldn't surprise me if it erroneously lectured you about using high-temp grease for disc brakes along the way). It falls on its face in almost every case in which the generic "average Google result" answer is not the correct answer, or there are situational circumstances that make the generic answer inappropriate.

Sure, you might get an "expert" answer on Reddit, and GPT might scan over the "right" answer to your question in its computation. But in most cases the generically correct but wrong-in-this-instance answer is going to be more popular and more prolific, and that's what gets spit back at you. It'll be faster to just dig up the right answer yourself than to coax it out of whatever you're asking.


That's a very odd example. I'm sure that if I went into a car subreddit and asked "how do I change a front wheel bearing on a 2006 Nissan 350Z," not only would I get lots of responses, but I'd get many from people who have actually done it, done it recently, and could tell me pitfalls to watch out for along the way.

I'd trust a relevant subreddit far more than I'd trust GPT for something like that.


>That's a very odd example.

You don't understand. I specifically chose an example of a part for which there's a generic answer (all old-school pairs of tapered bearings on a spindle are changed in about the same way) that will work for a wide array of vehicles and situations, but where that procedure is also not right a huge fraction of the time (for a huge fraction of the cars on the road today).

>I'd trust a relevant subreddit far more than I'd trust GPT for something like that.

My point is I don't trust either. If you're lucky enough to have an enthusiast car or a very, very common car, you might be able to use the specific sub, assuming the owners aren't generally dolts. If you ask how to change the wheel bearing on your Tacoma, they'll tell you to take it to the dealer. If you ask the mechanic sub, a bunch of 14-year-olds who know how to Google will give you generic steps or just link you to the YouTube video. That's the level ChatGPT is on right now. If you don't know what you don't know, it's dangerous advice.


In this case, I paid for the course, which includes a PDF. Using Bing, I can query the document and get summary information along with page-number references. It's only a matter of time until we can do the same query against a site and get video timestamps. I don't expect GPT to act like an oracle, but I do expect it to function as a curator and cataloger of content.
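
Roughly the kind of thing I mean, as a minimal sketch (this isn't what Bing actually does under the hood; it just assumes the pypdf library and the OpenAI chat API, and the file name, model name, and question are placeholders):

  # Minimal sketch: pull text from a PDF page by page, stuff the most
  # query-relevant pages into the prompt, and ask for an answer that
  # cites page numbers. Assumes `pip install pypdf openai` and an
  # OPENAI_API_KEY in the environment; file, model, and question are
  # placeholders, not anything from the course.
  from pypdf import PdfReader
  from openai import OpenAI

  def ask_pdf(path: str, question: str, max_pages: int = 3) -> str:
      reader = PdfReader(path)
      pages = [(i + 1, page.extract_text() or "") for i, page in enumerate(reader.pages)]

      # Crude relevance score: how many words of the question appear on each page.
      words = question.lower().split()
      scored = sorted(pages, key=lambda p: sum(w in p[1].lower() for w in words), reverse=True)
      context = "\n\n".join(f"[page {num}]\n{text}" for num, text in scored[:max_pages])

      client = OpenAI()
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[
              {"role": "system",
               "content": "Answer using only the excerpts below and cite page numbers."},
              {"role": "user", "content": f"{context}\n\nQuestion: {question}"},
          ],
      )
      return resp.choices[0].message.content

  # Hypothetical usage:
  # print(ask_pdf("course.pdf", "How do I set wheel bearing preload?"))

The crude keyword scoring is only there to keep the sketch self-contained; a real setup would do proper retrieval (embeddings or an index) before handing excerpts to the model.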



