I think you could make similar arguments about mapping technology like Google and Apple Maps -- that using them decreases people's skills in navigating the physical world, atrophying our sense of direction and geography.
And actually, that's not wrong. People really do often struggle to navigate these days if they don't have the crutch of something like Google Maps. It really has changed our relationship to the physical world in many ways.
But also, a lot of people weren't especially good at navigation before? The overall average ability of people to get from Point A to Point B safely and reliably, especially in areas they're unfamiliar with, has certainly increased dramatically. And a small subset of people who are naturally skilled at geography and navigation have seen their abilities complemented, not replaced, by things like Google Maps.
I think AI will end up being similar, on a larger scale. Yes, there are definitely some trade-offs, and some skills and abilities will decrease, but also many more people will be able to do work they previously couldn't, and a small number of people will get even better at what they do.
> I think you could make similar arguments about mapping technology like Google and Apple Maps
The problem is that mapping software is reliable and doesn't spit out what is essentially the output of a random number generator. You can rely on its output, the same way you can rely on a calculator. Not always, mind you, because mapping the entire globe is a massively complex task with countless caveats and edge cases, but compared to LLM output? Even with a temperature setting of 0 and the same prompt regenerated multiple times, you'll get vastly different output (a quick way to check this for yourself is sketched below).
Also, since LLMs cover a much broader swathe of concepts, people are going to be using these instead of their brains in a lot of situations where they really shouldn't. Even with maps, there are people out there who will drive into a lake because Google Maps told them that's where the street was; I can't even fathom the type of shit that's going to happen from people blindly trusting LLM output and supplanting all their thinking with LLM usage.
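For anyone who wants to check the temperature-0 claim rather than take it on faith, here's a minimal sketch, assuming the `openai` Python client, an OpenAI-compatible endpoint, and a placeholder model name and prompt -- swap in whatever service you actually use:

```python
# Minimal sketch: probe how stable a hosted model's output is at temperature 0.
# Assumes the `openai` Python client and an API key in the environment;
# the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "In two sentences, what are the trade-offs of relying on GPS navigation?"

completions = set()
for _ in range(5):
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder -- use whatever model you're testing
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,
    )
    completions.add(resp.choices[0].message.content)

# A fully deterministic service would leave exactly one element in the set.
print(f"{len(completions)} distinct completions across 5 identical requests")
```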
> The problem is that mapping software is reliable and doesn't spit out a result of what is essentially a random number generator.
Not really.
I am not good at navigation yet love to walk around, so I use a handful of map apps a lot.
Google Maps is not reliable if you expect optimal routes, and its accuracy sharply falls if you're not traveling by car. Even then, bus lanes, priority lanes, time-limited areas, etc. will be a bloodbath if you expect Maps to understand them.
The mapping itself will often be inaccurate in any town that hasn't been frozen in time for decades, place names are often wrong, and it has no concept of verticality/3D space, short of switching to limited experimental views.
Paid dedicated map apps will in general work a lot better (I'm thinking of hiking maps, etc.).
All to say, I'd mostly agree with the parent on how fuzzy Maps really is.
As someone who has been sent onto barely usable mountain roads, into military compounds, or down dried river beds multiple times on a couple of Mediterranean islands, I beg to differ with the assertion that mapping software is reliable.
>The problem is that mapping software is reliable and doesn't spit out a result of what is essentially a random number generator.
Actually, route optimization with multiple stops is TSP, which is NP-hard in the general case (i.e., short of checking every permutation, you never really know whether you've been given the optimal route; a brute-force sketch below shows why), and Google Maps might even give suboptimal routes intentionally sometimes, we don't know.
The problems you're describing are problems with people, and they apply to every technology ever. E.g., people crash cars, blow up their houses by leaving the stove on, etc.
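Rough sketch of the brute-force version, just to illustrate the scaling: being certain a multi-stop route is optimal means comparing it against every permutation of the stops, which grows factorially. The coordinates here are made up and purely illustrative.

```python
# Brute-force TSP over a handful of stops: exact, but O(n!) in the number
# of stops, which is why real routing engines rely on heuristics instead.
from itertools import permutations
from math import dist, factorial

# Hypothetical stop coordinates (x, y) -- purely illustrative.
stops = {"A": (0, 0), "B": (3, 4), "C": (6, 1), "D": (2, 7), "E": (5, 5)}

def tour_length(order):
    """Length of a round trip visiting the stops in the given order."""
    pts = [stops[name] for name in order]
    legs = [dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    return sum(legs) + dist(pts[-1], pts[0])  # close the loop

best = min(permutations(stops), key=tour_length)
print("optimal tour:", " -> ".join(best), f"({tour_length(best):.2f} units)")
print("tours examined:", factorial(len(stops)))  # 5! = 120; 20 stops ~ 2.4e18
```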
> I think you could make similar arguments about mapping technology like Google and Apple Maps -- that using them decreases people's skills in navigating the physical world, atrophying our sense of direction and geography.
> And actually, that's not wrong. People really do often struggle to navigate these days if they don't have the crutch of something like Google Maps. It really has changed our relationship to the physical world in many ways.
Entirely anecdotal, but I have found the opposite. With this mapping software I can walk off in a random direction and confidently course-correct as and when I need to, and once I’ve walked somewhere the path sticks in my memory very well.
Also worth mentioning that tools have stable output. An LLM is not a tool in that sense – it’s not reproducible. Changing the model, retraining, the input phrasing, etc. can dramatically change the output.
The best tools are transparent. They are efficient, fast and reliable, yes, but they’re also honest about what they do! You can do everything manually if you want, no magic, no hidden internal state, and with internal parts that can be broken up and tested in isolation.
With LLMs, even the simple act of comparing them side by side (to decide which to use) is probabilistic and ultimately based partly on feelings. Perhaps it comes with the territory, but this makes me extremely reluctant to integrate them into engineering workflows. Even if they had amazing abilities, they lower the bar significantly from a process perspective.
> An LLM is not a tool in that sense – it’s not reproducible.
LLMs are perfectly reproducible. Almost all public services providing them are not. The fact that changing the model changes the output doesn't make them non-reproducible, in the same way that reproducible software builds depend on a pinned compiler version. But you can run a local model with zero temperature and fixed starting conditions and you'll get the same response every time.
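A minimal sketch of that setup, assuming the Hugging Face `transformers` library and a small local model (gpt2 here is just a stand-in): greedy decoding with sampling disabled produces the same completion on every run, at least on the same hardware and library versions.

```python
# Sketch: deterministic local generation via greedy decoding (no sampling).
# gpt2 is only a small stand-in; any local causal LM works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("A reliable tool is one that", return_tensors="pt")
completions = set()
for _ in range(3):
    out = model.generate(**inputs, do_sample=False, max_new_tokens=20)
    completions.add(tokenizer.decode(out[0], skip_special_tokens=True))

print(len(completions))  # 1 -- identical output on every run
```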
I know it’s technically reproducible under the right conditions, and sure, that might help in some cases. But it matters little in practice – the issue is that the output is unstable relative to unrelated parameters you often have good reason to change, often unintentionally. For instance, you can and will get vastly different output from ordinary non-semantic variations in phrasing (a quick illustration follows after this comment). I’m not a logician, but this is probably even a necessity given the ginormous output space LLMs operate on.
My point is that it’s not a tool, because good tools reliably work the same way. If, for instance, a gun clicks when it’s supposed to fire, we would say that it malfunctioned. Or it fires when the safety is on. We can define what should happen, and if something else happens, there is a fault.
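To make the phrasing-sensitivity point above concrete, here's a rough sketch reusing the same assumptions as the earlier snippet (`transformers`, gpt2 as a stand-in): two prompts with the same meaning, decoded greedily so sampling noise isn't a factor, which means any divergence in the completions comes from wording alone.

```python
# Sketch: same intent, different wording, deterministic greedy decoding.
# With sampling off, any difference between the completions is down to phrasing.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompts = [
    "List three risks of relying on GPS navigation:",
    "Name three risks of depending on GPS navigation:",
]
for prompt in prompts:
    ids = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**ids, do_sample=False, max_new_tokens=30)
    print(repr(tokenizer.decode(out[0], skip_special_tokens=True)))
```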
I think you’re both right, but the key is walking vs driving. Walking gives you time to look around, GPS reduces stress, and typically you’re walking in an urban location with landmarks.
Driving still requires careful attention to other drivers, the world goes by rapidly, and most roads look like other roads.
Me too. It's not like I ever used to have a map with me when I was in a city I thought I knew.
With a map in my pocket I started to use it and memorized the city much better. My model of the city is much stronger. For example, I know the approximate directions of neighborhoods I've never even visited.