I think AGI implies it: I can learn something through audio and apply it visually, and the other way around. I don’t think that’s some abstract human quirk. Isn’t that what enabled literacy? It seems kind of obvious that intelligence sits beneath the modality, agnostic about it.
Maybe we can learn some lessons from digital artists, who naturally fret over their skills and how they will be replaced by Stable Diffusion and friends.
In one way, yes, this massively shifts power into the hands of the less skilled. On the other hand, if you need some proper, and I mean proper, marketing materials, who are you going to hire? A professional artist using AI or some dipshit with AI?
There will be slop, of course, but after a while everyone has slop, and the only differentiating factor will be quality, or at least some gate-kept, arbitrary level of complexity. Like how rich people want fancy handmade stuff.
Edit: my point is mainly that the bar will rise to a point where you’d need to be a scientist to create a then-fancy app again. You see this with the web. It was easy, and we made it ridiculously, and I mean ridiculously, complicated, to where you need to study computer science to debug React rendering for your marketing pamphlet.
Yes, that’s true and very cool, but you’re an expert. Where do the next-generation yous come from? The ones who did not do the weeks of dead-end research that built the resilience, the skill, and the experience to tell that Claude now saves them time? You cannot skip that admittedly tedious part of life for free.
I think pro-AI people sometimes forget or ignore the second-order effects on society. I worry about that.
On the other hand, I remember lots of stupid beginner questions I had when learning to program. My peers did not know the answers either, and I sometimes had to wait days for the opportunity to ask someone advanced who did. Blocking my progress.
(Asking online was a possibility, but instead of helpful answers, insults for being a newb were the standard response.)
With an LLM I would have had a likely correct answer immediately.
And yes, yes, what if it is wrong?
Well, I was also taught plenty of wrong stuff by human teachers. I learned to think for myself. I doubt anyone decently smart who now grows up with these tools thinks they are flawless.
In the end, you are responsible for the product. If it works, if it passes the tests, you succeeded. That did not change.
When I was a beginner programmer, I was 13 years old. I remember noticing that one kid in our class managed to do and use things that no one else in our class did. I asked him how, and he said "it's built-in, I read about it right here" and pointed to the Java API docs.
Assuming you're literate, there's no age or skill level at which it's necessary to get stuck churning on beginner-level questions. The option to RTFM is always available, right from the start.
To this day, readiness to RTFM (along with RTDS: read the damn source) is the biggest factor I can identify in the technical competency of my peers.
Yes, definitely. But I think reaching for an LLM can mean failing to build that reading muscle, in the same way that leaning on teachers can. And I also think that many people never learn to read documentation not because they can't, but because they lack the willingness to learn to read specialized genres (of which technical documentation is just one).
A teacher can be a unique resource, but asking the teacher is often more of a reflexive shortcut than the thoughtful use of a unique resource.
I think LLMs (like StackOverflow before them) are more likely to discourage people from seriously or patiently reading documentation than to act as a stepping stone to a habit of more serious inquiry for most people.
To be completely honest, either you have been really, really unlucky with your teachers, or you should improve the way you ask questions.
I know I had mostly bad teachers and am largely an autodidact myself. But the few good teachers/instructors I had were really helpful for my learning progress.
> My peers did not know the answers either, and I sometimes had to wait days for the opportunity to ask someone advanced who did. Blocking my progress.
Hypothetically, a solution to a problem that preoccupied you for days would translate into a more stable and long-lasting neuron configuration in your brain (i.e. be remembered) than a solution to a problem that preoccupied you only for the time it took to type in the prompt.
That is somewhat true; figuring things out on my own makes me really understand something.
But I don't have the time and energy to figure everything out on my own, and I stopped learning many things where some useful hints in time would likely have kept alive the joy of mastering the topic.
There's definitely a balance. Someone told me years ago that when they'd look for one bug to try and fix it, they'd realize a bunch of other stuff about their code along the way. You learn a lot by struggling with a problem exactly when it feels unproductive. On the other hand, there are cases when maybe it's better to get an answer today than spend a week really learning something. For example if you don't care about how a library itself works, AI helps abstract the details away and maybe there really is no cost to that as long as you can see it works.
> Yes, that’s true and very cool, but you’re an expert. Where do the next-generation yous come from?
I agree that this is a concern, and I even worry about it for myself. Did I miss the opportunity to add another brick to the foundation of my expertise because Claude helped me out? Would I be marginally better at solving the next problem if I'd worked through the week I saved?
Even if the concern isn't some specific knowledge I'd have gained - did I lose out on a few "reps" to build grit, determination? Am I training myself to only like easy solutions that come out of Claude? Are there problems I won't want to solve because they're too difficult for this new "augmented" workflow?
I don't know the answers - I can only say that I do care, and at the very least I'm aware that there are new dynamics affecting my work and expertise that are worthy of consideration.
I think there will always be people who want to look under the layers of abstraction, and people who don't care. All the abstractions we've created for computing have lowered the barrier to entry for people who want to create useful applications and otherwise don't care. If anything, LLMs make the process of learning much easier for those in the former group, something that only search really did previously.
I do think it's entirely plausible that a lot of people who otherwise would have wanted to learn more will grow up used to instant results and will simply not do anything the LLM can't do or tell them. Kind of similar to how my social-media-addicted brain gets antsy if it goes more than an hour without a fast dopamine hit (hence me being on HN right now...).
I didn't know other economic systems beat the fundamental nature of physics and reality, where the infinite is simply a concept. Are you sure you're aiming the "charade" in the right direction?
That’s strange, because capitalism is the one that thinks infinity is real. It’s also trying to break nature, quite literally, given the state of our climate.
There are other ways to cooperate that don’t depend on sociopathy and infighting.
> That’s strange, because capitalism is the one that thinks infinity is real.
Your rhetoric doesn't hold up. You contradict yourself in a single turn. You can't cite both "scarcity" and "infinity" as what powers this fictional economic system you've dreamed up and called "capitalism".
You miss the point. It’s not that I don’t believe in scarcity or the second law of thermodynamics; it’s that I critique capitalism’s handling of it, which is by its very nature exploitative, short-sighted and unsustainable. It needs various and extensive guardrails to be functional at all, otherwise it would have destroyed us already.
It’s the classic “capitalism is built on scarcity but behaves as if infinite growth is possible” critique. There are interesting responses to that, but “it’s contradictory” ain’t one of them.
If you earn some dough and are treated somewhat nicely, you have already hit the jackpot.
Don’t give up a perfectly good job just because you have power-fantasy issues. You will always be a worker bee; that’s just how the world is set up. Someone or something will own your ass regardless of your compensation structure.
Then there is the beautiful issue of memory: maybe you are X consciousnesses but only one leaves a memory trace?
Consciousness and memory are two very different things. Don’t think too much about this when you have to undergo surgery. Maybe you are aware during the process but only memory-formation is blocked.
Or perhaps they all leave traces, but all write to the same log? And when reconstructing memory from the log, each constructed consciousness experiences itself as singular?
Which one controls the body? There is a problem there. You can’t just have a bunch of disembodied consciousnesses. Well, maybe... but that sounds kind of strange.
It’s a single narrative that controls the body, is what I mean. If one consciousness says “I am Peter,” then the other consciousnesses would know that and be conflicted about it if they don’t call themselves that.
What I mean is that a single narrative “wins”, not a multitude. This has to be explained somehow.
How do you know there aren't several different consciousnesses that all think they are Peter?
How do you know they aren't just constructing whatever narrative they prefer to construct from the common pool of memory, ending up with what looks like a single narrative because the parts of the narrative come from the same pool and get written back to the same pool?
Perhaps each consciousness is just a process, like other bodily processes.
Perhaps a human being is less like a machine with a master control and more like an ecosystem of cooperating processes.
Of course, the consciousnesses like to claim to be in charge, but I don't see why I should take their word for it.
No matter how you twist it, at some point two consciousnesses differentiate on some contradictory issue, maybe not a name, but surely they differ on something, otherwise they wouldn’t be... different consciousnesses. Life as a human moves and is narrated as a single story, not the story of a thousand processes.
If that were true I could call my heart a process, my liver, etc. They are in a way part of me, but they do not just ex nihilo cohere into a single narrative. That is an active process, and whatever does it is the only really interesting one (IMO). So I think there might be a bunch of processes, sub-personalities maybe, but there remains the problem of integration. Whatever integrates is the one that really fascinates me.
Anyway, thanks for indulging me. It is hard to go into any depth in this medium. I think you have really interesting ideas. Have a nice weekend.
At some moments there has to be a singular decision taken, such as which of two possible options to take. In such a moment some particular consciousness makes the decision, if it’s a decision made by a consciousness (though consciousness takes credit for more decisions than it actually makes, I think).
But granting that point does not grant that there is a single consciousness that is always (or ever) in charge, and it does not grant that any specific consciousness is associated with any specific singular narrative.
We know, scientifically speaking, some things that call the idea of a single consciousness with a single narrative into question. We know, for example from the psychology of testimony, that the same person’s memory of the same events differs at different times, and that the act of remembering rewrites memories. We have reason to suspect that the brain attributes to conscious choice decisions that are made too quickly for sensory data to reach the brain (and which may therefore be made elsewhere in the nervous system, even though the brain claims to have made the choice after the fact).
And I know from personal experience that some phenomena that normally appear to be singular conscious experiences can devolve into something else under some circumstances. For example, I have experienced blindsight, in which I cannot see something but can nevertheless collect accurate information from it by pointing my eyes at it. I have also experienced being asleep and awake at the same time.
Experiences like these are hard to account for if I assume that my consciousness is singular and continuous and in charge, but not so hard to account for if I assume that it’s a useful illusion cobbled together by a network of cooperating processes that usually (but not always) work well together. For example, many people might claim that it’s nonsense to say that a person can be asleep and awake at the same time, but it’s nonsense only if asleep and awake are mutually exclusive states of a singular consciousness. If, on the other hand, they are two neurological processes that are normally coordinated so that they don’t occur at the same time (because it’s less than useful for them to do so), then it’s not nonsense to observe that under unusual circumstances that coordination might be disrupted. Similarly, if seeing something is one process and consciously experiencing seeing it is a different process—normally, but not necessarily coordinated—then blindsight is not so hard to account for.
Not to mention that it’s trivially easy to find examples of consciousness not being in charge of our behavior, although it likes to think that it is.
I suggest that the supposed singular consciousness, supposedly in charge, may be an illusion constructed by a system of mostly, but not perfectly, coordinated cooperating processes.
Possible. It reminds me of playing the piano with both hands, and of other things like walking up stairs, talking, carrying things, planning your day, and thinking about some abstract philosophical question at the same time. It’s not easy or natural, but I am not at all convinced it is impossible.