Does that mean you don't think you learned anything valuable through the experience of working through this complexity yourself?
I'm not advocating for everyone to do all of their math on paper or something, but when I look back on the times I learned the most, they involved a level of focus and dedication that LLMs simply do not require. In fact, I think their default settings may unfortunately lead you toward shallow patterns of thought.
I wouldn't say there is no value to it, but I do feel like I learned more using LLMs as a companion than I would have trying to figure everything out myself. And note, using an LLM doesn't mean that I don't think. It provides context and information that would often be time-consuming to figure out on my own, and I'm not sure the time spent would be proportional to the learning I'd get from it. Seeing how these memory locations map to sprites, which in turn map onto the video display, is an example of something that might take a while to explore and learn, but that the LLM can tell me instantly.
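To make that concrete: the memory-to-sprite-to-display chain described here sounds like the Commodore 64's VIC-II (my assumption; the addresses below are the standard documented ones). A minimal cc65-style C sketch, untested and purely illustrative:

    /* Standard documented VIC-II locations; a sketch, not tested code. */
    #include <stdint.h>

    #define SPRITE0_POINTER   (*(volatile uint8_t *)0x07F8) /* last 8 bytes of default screen RAM */
    #define VIC_SPRITE0_X     (*(volatile uint8_t *)0xD000)
    #define VIC_SPRITE0_Y     (*(volatile uint8_t *)0xD001)
    #define VIC_SPRITE_ENABLE (*(volatile uint8_t *)0xD015) /* one bit per sprite */

    void show_sprite(void) {
        SPRITE0_POINTER = 13;      /* sprite data lives at 13 * 64 = 0x0340 in the VIC bank */
        VIC_SPRITE0_X   = 100;     /* position, in VIC screen coordinates */
        VIC_SPRITE0_Y   = 80;
        VIC_SPRITE_ENABLE |= 0x01; /* this write is what makes the chip draw it */
    }

Each line is just a store to an address; the chain from pointer byte to sprite data to pixels is exactly the kind of mapping an LLM can explain in seconds and that otherwise takes real digging through a memory map.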
I think the difficulty I have is that I don't think it's all that straightforward to assess how it is exactly that I came not just to _learn_, but to _understand_ things. As a result, I have low confidence in knowing which parts of my understanding were the result of different kinds of learning.
Learning things the hardest way possible isn't always the best way to learn.
In a language context: immersion learning, where you "live" the language. All the media you consume is in that language, and at some point you just "get" it; you develop a feel for how the language flows and can interact using it.
vs. sitting in a class, going through all the weird ways French verbs conjugate and the language's completely bonkers number system, then getting tested on whether you know the specific rule for how future tenses work.
Both will end up in the same place, but which one is better depends a lot on the end goal. Do you want to be able to manage day-to-day things in French or know the rules of the language and maybe speak it a bit?
I'd say this is similar to working in assembly vs. C++ vs. Python. Programming in Python, you learn less about low-level architecture trivia than in assembly, but you learn way more in terms of a high-level understanding of the issues.
When I had to deal with or patch complex C/C++ code, I rarely ever got a deep understanding of what the code did exactly - just barely enough to patch what was needed and move on. With the help of LLMs it's easier to understand what the whole codebase is about.
> Every promising engineer, designer, or operator is being courted by three, five, ten different AI startups, often chasing the same vertical, whether it’s coding copilots, novel datasets, customer service, legal tech, or marketing automation.
I still don't understand what this "wildfire" is supposed to burn. My perspective is very limited, but where are the pets.coms of AI today? Where are all the small companies with improbable business cases that are getting absurd valuations/investments just because they're in AI? The space seems mostly dominated by huge players that, while burning tons of cash, are still making real progress on something that will have more economic impact than society can actually bear. Who should be wiped out by the wildfire? Anthropic?
Judging by how stock valuations look, there was an insane rally from the release of ChatGPT to around mid-2024, from which point they stayed mostly on a trajectory consistent with the rest of the economy.
I think a huge breakthrough for AI was priced in, and we are still waiting to find out if it will come and what it'll be.
Personally, as this article seems investment-focused, I see no downside to diversifying away into more varied kinds of investments, but then again, I'm not a pro, so take it with a grain of salt.
Grammarly certainly comes to mind, for being essentially a free feature of most chat AIs now.
Interestingly, this time around I could see the "fire" affecting mid-to-large corporations (or at least some divisions of them) if they don't adapt. Adobe, being heavily focused on graphic design, seems like it could be under pressure. Low-end consulting/outsourcing is largely doing the same work AI is good at. Similarly with technical gig work (like Upwork).
To be fair, if you look at the language-learning subreddits, there are about 10 ads a day for shovelware AI-powered apps that no one ever needed. Those would be the pets.coms.
Maybe I just didn't notice. Fair. But are these ads from companies that are raising large amounts of capital, or from small shops that just use the APIs provided by the few big players?
Not parent - but I've noticed those same 'start ups' and they just seem to be today's hustle-bro crypto/drop-ship/mobile-app/ceo-with-no-employees/self-help-book/low-effort grift (bullshit).
I'm sure some of them have managed to shake some change out of the VCs but these wanna-be shovel sellers are just gonna let their domains expire and move on to the next scheme with little overall damage to the economy.
I am pretty sure they use the APIs and don't have millions sunk into training.
I have no idea about their financials. They just annoy me, because they mask their ads as posts/comments. And they use ChatGPT to generate those; they're like two pages of drivel.
Don't know about OpenAI, but Claude wrote almost all of my code in the past few days, multiplying my productivity by a factor of at least two. My feeling is that for some use cases Anthropic could already charge enterprises a significant fraction of each developer's salary and it would still be a net gain for customers.
I'm more pessimistic. It costs too much to go back to college and retrain. The result is going to be a generation of ambitious people stuck doing a craft they hate, and the results are going to be dismal.
The current state does not feel malicious in this way to me at all. It feels bumbling and amateurish. It gives the feeling that the people who kept the product cohesive have left or retired, and that a new generation of overly ambitious careerists have entered positions of leadership.
I’m convinced leadership at Apple are not power users. It seems they’ve never put macOS through its paces or done any development themselves. If they had, they would have found all of the bugs, irregularities, and huge performance problems themselves.
What do you mean by "largely the mindset they have"? I think the comment you're replying to is right, most Apple execs probably have jobs that can be done entirely on iPads, so none of the complaints by power users about macOS resonate at all (and this group is sadly far too small of a minority to have any financial impact).
I think any organization at Apple's scale has no shortage of skilled workers and ambitious careerists. But at the product level, I do believe that the result you see is generally an honest reflection of the organization's priorities.
If Apple wanted to ship a rock-solid OS, they could. They're just choosing to put those resources elsewhere.
The current environment is in some ways indistinguishable from the COVID years. The uncertainty of AI, forced RTO, and successive rounds of layoffs have produced a terrible environment for retaining people who have the means to do literally anything else.
I feel like it says a lot, when intelligent amorality seems genuinely preferable to blundering incompetence. Many such cases. One wonders how much "enshittification" is intrinsic to networked software and our late-stage-whatever political economy, versus how much is a farcical byproduct of office politics and org chart turf wars.
If you did not study these topics, the chances are good you do not know what questions to even ask, let alone how to ask them. Add to that the fact that you don't even know whether the original summary is accurate.
The original summary is the paper’s abstract, which I read. The questions I ask are what I don’t understand or am curious about. Chances are 100% that I know what these are!
I’m not trying to master these subjects for any practical purpose. It’s curiosity and learning.
It’s not the same as taking a class; not worse either. It’s a different type of learning for specific situations.
Asking the right questions (in the right language) was important before and it's even more important with LLMs, if you want to get any real leverage out of them.
How does the wand know what I'm flicking it at? What if I miss? Maybe the wand thinks I'm targeting some tiny organism that lives on the organism that I'm actually targeting. Can I target the wand with itself?