The problem is that intelligence isn't the result, or at the very least, that the ideas the word evokes in people don't match the actual capabilities of the machine.
Washing is a useful word to describe what that machine does. Our current setup is like if washing machines were called "badness removers," and there was a widespread belief that we were only a few years out from a new model of washing machine being able to cure diseases.
Arguably there isn't even a widely shared, coherent definition of intelligence: to some people it might mean pure problem solving without in-task learning; others equate it with encyclopedic knowledge, and so on.
Given that, I consider it quite possible that we'll reach a point where ever more people consider LLMs to have reached or surpassed AGI, while others still consider them nothing more than "sufficiently advanced autocomplete".
I'd believe this more if companies weren't continuing to use words like "reason", "understand", "learn", and "genius" when talking about these systems.
I buy that there's disagreement on what intelligence means in the enthusiast space, but "thinks like people" is pretty clearly the general understanding of the word, and the one that tech companies are hoping to leverage.
The defining feature of true AGI, in my opinion, is that the software itself would decide what to do and do it without external prompts, responding to nothing more than environmental input.
Doubly so if the AGI writes software for itself to accomplish a task it decided to do.
Once someone has software like that, not a dog that is sicced on a task but a bloodhound that seeks out novelty and accomplishment out of its own curiosity or to test its capabilities, then you have a good chance of convincing me that AGI has been achieved.
What about letting customers actually try the products and figure out for themselves what it does and whether that's useful to them?
I don't understand this mindset that because someone stuck the label "AI" on it, consumers are suddenly unable to think for themselves. AI has been used as a marketing label for decades, yet only now is it taking off like crazy. The word hasn't changed - what it's actually capable of doing has.
If I came up with something novel while watching a sunrise, which I wouldn't have come up with had I not been looking at it, where did the novelty really come from?
The backend has plenty of complexities, but frontend developers have to deal with something just as complex - the user.
Given ramp-up time, most backend engineers could build a bad frontend, or a good one if they have a really good UX team that has thought through everything and they're just implementing its work.
In the real world, though, where UX is understaffed and often focused on the wrong problems, I've had to rescue too many frontends built by backend-focused teams to share your confidence.
The problem is less that those high level engineers are only good at deterministic work and more that they're only rewarded for deterministic work.
There is no system to pitch an idea as opening new frontiers - all ideas must be able to optimize some number that leadership has already been tricked into believing is important.
Every day I am more convinced that LLM hype is the equivalent of someone seeing a stage magician levitate a table across the stage and assuming this means hovercars must only be a few years away.
I believe there's a widespread confusion between a fictional character that is described as an AI assistant and the actual algorithm building the play-story from which humans imagine that character. It's an illusion actively promoted by companies seeking investment and hype.
AcmeAssistant is "helpful" and "clever" in the same way that Vampire Count Dracula is "brooding" and "immortal".
Could the recipe limitations be because of the danger an incorrect recipe could put the user in? It's unlikely you'd make something toxic, but a made-up recipe could easily be a fire hazard.
The nuclear war scenario, according to all known information about nuclear war policies, is not “chunks of the world becoming unlivable.”
The principals in a nuclear conflict do not appear to even have a method to launch just a few nukes in response to a nuclear attack: they will launch thousands of warheads at hundreds of cities.
I'm not arguing whatsoever against action on climate change; I'm just articulating how bad a nuclear exchange would actually be. It's far, far, far worse than most people imagine, because they (understandably) can't fathom how monstrous the actual war plans were (and are, as far as anyone knows).
Daniel Ellsberg, the top nuclear war planner at RAND during the Cold War, claims that the Joint Chiefs gave an estimate to Kennedy in 1961 that they'd expect 600,000,000 deaths from the US's war plan alone.
That's 600 million people:
1. At 1960s levels of urban populations (especially in China, things have changed quite a lot -- and yes the plan was to attack China... every moderately large city in China, in fact!)
2. Using 1960s nuclear weapons
3. Not including deaths from the Russian response (and now Chinese), at the time estimated to be 50 - 90 million Americans
That's not extinction, that's not even close. Maybe, just maybe, it'd be the end of both western and eastern civilization, but it's nowhere near wiping out all life on Earth.
I was unclear, I meant chunks of the world becoming unlivable from climate change could easily be the catalyst for a nuclear war. So the worst case for both is the same in my mind.
My guess is that while it may not be too much effort to get a mostly accurate emulator that works well enough for hobbyist use, it'd be a lot of effort to get something up to the compatibility and usability standards of an official product.
Many older apps may use undocumented functionality or non-obvious quirks of the system that an emulator may miss, which means you'd need a QA team testing individual apps for compatibility.
Part of Apple's brand is usability and a lack of rough edges. The downside of that is that building a tool like this up to their standards would be prohibitively costly.
It's weird to think that for people without clear memories of it, the 2000's might be imagined in the style of early digital photos the same way the 70's look like Super 8 footage or the early 20th century is sepia toned in my head.
Something that occurred to me the other day regarding my own "memories" of times before I was born:
1965–Present: Color. The tone shifts a bit, but it's all in color.
1900–1964: Black and white. The world was actually black and white. Sometimes people moved around at a weird frame-rate :)
1800–1899: Color, but very much sepia toned. The whole world was slightly dirty and brownish, but the sky was still blue.
Big Bang–1799: Color, no sepia or nothing.
Of course this is all due to photography and movies. The general lack of both of those before the 20th Century (yeah, I know photography goes back further, but just barely. It wasn't widespread) means whatever images I get from that time period come from paintings and illustrations and written accounts. The sepia I perceive from the 1800s is basically thinking of them in terms of the railroads, westward expansion, etc.
So the result is that when I think about the Roman Empire—when was the last time you thought of that?—I see a richly-colored world like the one we have today. But when I think about WWII, it's all black and white. And the Civil War occurred in a vast, dusty, brownish field.
I have pretty much the same mental image, though with the 40's and 50's having some color probably mostly from period movies about that time.
What I really wonder is whether the photos and videos we're taking in the present will have some kind of subjective effect on how people remember our current time. It feels like we're finally at a point where most media has the fidelity to capture an objective image of the world, but that could just be because we have the reference point of living through these times, so our minds can fill in the blanks.
Maybe the grandkid of the commenter I originally replied to will want to borrow their old iPhone 12 to take retro pictures with someday.
I'm not sure a modern iPhone photo (say) is any more "objective" than a Kodachrome 64 slide, but there are sure going to be a lot more of them available!