And two years prior IBM acquired Ahana (PrestoDB SaaS). Totally agree that businesses need to much more carefully assess the risks of moving to these hosted open source platforms. Reminds me of when over a decade ago companies moved to Snowflake for their DWs because "our Teradata costs are out of control".
A 25% reduction is huge, even accounting for the fact that people who get vaccines tend to be more health-conscious to begin with, especially when you consider that outside of the very sick and very old, Covid has a mortality rate under 1%.
I like to ask people who talk about a 1% mortality rate if they'd go to a football game in a stadium with 100k seats if 1k of those seats randomly had a small bomb attached.
I hate it when blanket statements like this creep in.
Which Covid? The initial version was definitely more deadly than later versions.
What about future covids? Are you willing to guarantee every version of covid from here on out will be less deadly? The general trend has been toward lower lethality, but that's a tendency, not some sort of law.
>So how to explain the current AI mania being widely promoted?
CEOs have been sold on the ludicrous idea that "AI" will replace 60-80% of their total employee headcount over the next 2-3 years. This is also priced into current equity valuations.
Likewise with experienced devs who find themselves out of work due to the neverending mass layoffs.
There's a huge difference between the perspective of someone currently employed versus that of someone in the market for a role, regardless of experience level. The job market of today is nothing like the job market of 3 years ago. More and more people are finding that out every day.
Honestly, just learn it like anything else. Understand the basic components of an internal combustion engine (block, crankshaft, rods, pistons, camshafts, cylinder heads, valves, intake and exhaust manifolds), the 4 cycles the engine goes through (intake, compression, power, and exhaust), how fuel delivery and ignition systems work. And then there are tons of resources on tuning and you can get the software for a laptop.
Then there is the building of the engine and understanding clearances for specific applications and RPMs, valve train harmonics when things start getting to crazy high revs like 9500.
Still very learnable, but outside the scope of standard engine rebuild stuff.
It isn't that simple. I've been learning to work on my own car over the last few years. I'm not even doing anything crazy, just fixing up an older vehicle and modernising some parts of it (mainly the interior).
I had to fix the wiper system. With a wiper system, you would think it wouldn't matter much whether the parts are aftermarket or not. I was very wrong: parts that look almost identical may not work properly, due to differences in tolerances.
There are also different revisions of particular parts, and parts go obsolete. You can lose an afternoon on the internet just sorting that out.
Then there are the tools. I've spent a small fortune on tools. I have 3 torque wrenches, 3 sets of sockets, 3 sets of spanners and loads of weird specialist tools like special pliers. There are many jobs I can't do myself because they need specialist knowledge to do properly, e.g. gearboxes.
You have to be prepared to spend potentially years on it and a huge amount of money, even on relatively simple projects.
There is a reason that a lot of guys get into old 4x4 pickups and do those up, because they are a known quantity and parts are readily available.
As someone building a particularly stupid car in a genre almost but not entirely unlike the OP (a turbo LS1-swapped Rover P5), I am not totally making stuff up when I say that this:
> You have to be prepared to spend potentially years on it and huge amount of money, even on relatively simple projects.
is not at all mutually exclusive to this:
> Honestly, just learn it like anything else.
I didn't really know what I was doing when I started my project. I had an idea and the desire to make it happen. I barely knew how to use a MIG to do the fab work, so I got good (enough) at it. I knew nothing about LS engines, so I learned enough about them at each point I needed to know something about them. I only have a vague idea of how I'm going to do the next phase of it; I know that I can figure it out with enough thinking and by making all the mistakes I need to make. I don't know how to TIG, and it'll be really useful if I do, so I am learning how to TIG.
Start somewhere, and the more you do, the more you can do.
> As someone building a particularly stupid car in a genre almost but not entirely unlike the OP (a turbo LS1-swapped Rover P5),
I have no idea why people do this stuff to a nice car like a Rover P5. It isn't my car though.
> Start somewhere, and the more you do, the more you can do.
Obviously. But I had to do a lot of stuff that I wasn't prepared for, far quicker than planned, because the previous person working on it took shortcuts. I almost had the dash catch fire because someone did a bodge job on the electrics instead of paying £15 for the correct part (a plastic plug).
The point I was making is that you are making it sound far simpler than it actually is. There have been a good few sunny weekends where I honestly felt like I was wasting my time and couldn't face working on it.
I had to fit a new turbo and it took me about 3-4 weeks. Not because it was difficult (actually it was one of the easier and nicer jobs IMO); the time went on sourcing parts around the turbo such as gaskets, copper washer kits and other dumb stuff like that.
There were constant trips to tool shops because I was always missing a tool, trying to find a fitting/grommet in Halfords (they never have it) or at a parts supplier 40 miles away in the sticks. It all adds up in both time and cost.
Now I know roughly who to order from, what I should order, etc. But that is going to be different for almost every manufacturer, and worse if the stuff is more niche/custom.
The number of projects that get given up suggests it's not that easy.
(I don't know why your comment got flagged. I vouched for it; whatever we might argue about here, I don't think you're out of line in any way.)
I actually feel everything you have said apart from this P5 being "nice" (it was fucked). Like turbo delays - I had that on my other project, and going from "I need a new turbo" to "I have a new turbo and things adjacent to the turbo" took damn near a year by itself. I know how this goes!
So I hope I did not appear to say that it's EASY. I've put in enough hours to know that it's not, and if it was everyone would be doing it anyway. It does in fact take a lot of time, and willingness to learn, and plain old determination, and money. I will say it's something that IS possible, and that I still agree with this:
> Honestly, just learn it like anything else.
But...I suppose we'll know that for sure once I have an actual working car, right? :)
If you're starting from 0 that's probably a decade long commitment before you're able to start to execute a project like this.
There's a YouTube series, 'Project Binky', where a pair of professional car builders rebuild a Mini Cooper and stuff a Celica engine in it. They already have all the skills, own a shop and all the tools, and it still took them years.
Similarly, there's a YouTube channel called Mighty Car Mods that does builds too, and even the ones they "rush" can take months and thousands of work hours from people across multiple disciplines (body repair, paint, electrical work, tuning, etc.). Not cheap at all.
A decade would be very quick. The amount of specialist knowledge that went into every part of this project is crazy. After a decade's worth of projects I doubt I'd be confident to tackle the steering and suspension design on something like this, let alone all the aero.
I've been working on cars for 20 years; I weld, I have done CAD/CAM/CAE work, rebuilt and modified engines, done custom suspension work... there are so many aspects of a project like this that are just completely unknown to me; I wouldn't even know where to start. Many aspects of this build are not things you can really learn or research on your own.
More like financial engineering. When Wall Street demands ever increasing EPS growth, but revenues are flat or declining and you don't have cash to buy back shares, cost cutting is the only option. Unfortunately, labor costs are the bulk of most companies' costs, so that means layoffs. It would be nice if companies had a financial horizon that extended beyond the current quarter but that seems more and more like a pipe dream.
That's been true for the last year or two, but it feels like we're at an inflection point. All of the announcements from OpenAI for the last couple of months have been product focused - Instant Checkout, AgentKit, etc. Anthropic seems 100% focused on Claude Code. We're not hearing as much about AGI/Superintelligence (thank goodness) as we were earlier this year, in fact the big labs aren't even talking much about their next model releases. The focus has pivoted to building products from existing models (and building massive data centers to support anticipated consumption).
A lot of them left within their first days on the job. I guess they saw what they were going to work on and peaced out. No one wants to work on AI slop and the mental abuse of children on social media.
I don't understand how an intelligent person could accept a job offer from Facebook in 2025 and not understand what company they just agreed to work for.
With the amount of money Facebook was offering I could see them having a hard time refusing. If someone offered me 100 million dollars to work on AI I know I would have a hard time refusing.
Stated with no more evidence than the figure of $100M of compensation, which was claimed by Sam Altman on his brother's podcast. But surprisingly, everyone seems to be entirely fine with this wild claim and isn't asking for proof.
Anthropic, frankly, needs to in ways the other big names don't.
It gets lost on people in techcentric fields because Claude's at the forefront of things we care about, but Anthropic is basically unknown among the wider populace.
Last I'd looked a few months ago, Anthropic's brand awareness was in the middle single digits; OpenAI/ChatGPT was somewhere around 80% for comparison. MS/Copilot and Gemini were somewhere between the two, but closer to OpenAI than Anthropic.
tl;dr - Anthropic has a lot more to gain from awareness campaigns than the other major model providers do.
Claude is ChatGPT done right. It's just better by any metric.
Of course OpenAI has tons of money and can branch off in all kinds of directions (image, video, an n8n clone, now RAG as a service).
In the end I think they will all be good enough, and both Anthropic's and OpenAI's leads will evaporate.
Google will be left to win because they already have all the customers with GSuite, and OpenAI will be absorbed at a massive loss into Microsoft, which is already selling to all the Azure customers.
>Anthropic feels like a one trick pony as most users dont need or want anthropic products.
I don't see what the basis for this is that wouldn't be equally true for OpenAI.
Anthropic's edge is that they very arguably have some of the best technology available right now, despite operating at a fraction of the scale of their direct competitors. They have to start building mind and marketshare if they're going to hold that position, though, which is the point of advertising.
If Claude Code is Anthropic’s main focus why are they not responding to some of the most commented issues on their GitHub? https://github.com/anthropics/claude-code/issues/3648 has people begging for feedback and saying they’re moving to OpenAI, has been open since July and there are similar issues with 100+ comments.
Hey, Boris from the Claude Code team here. We try hard to read through every issue, and respond to as many issues as possible. The challenge is we have hundreds of new issues each day, and even after Claude dedupes and triages them, practically we can’t get to all of them immediately.
The specific issue you linked is related to the way Ink works, and the way terminals use ANSI escape codes to control rendering. When building a terminal app there is a tradeoff between (1) visual consistency between what is rendered in the viewport and scrollback, and (2) scrolling and flickering which are sometimes negligible and sometimes a really bad experience. We are actively working on rewriting our rendering code to pick a better point along this tradeoff curve, which will mean better rendering soon. In the meantime, a simple workaround that tends to help is to make the terminal taller.
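For anyone curious about the mechanics, the in-place redraw at the heart of this tradeoff can be sketched in a few lines of Python. This is a toy illustration, not our actual Ink-based renderer: the function names and the fixed-height viewport are assumptions for the example.

```python
# Toy illustration of in-place viewport redrawing with ANSI escape codes.
# Rewriting the viewport every frame keeps it consistent with scrollback,
# but a full repaint of every frame is exactly what can appear as flicker
# on some terminals.
import sys
import time

def redraw(lines):
    """Move the cursor up and rewrite each viewport line in place."""
    sys.stdout.write(f"\x1b[{len(lines)}A")        # CUU: cursor up N rows
    for line in lines:
        sys.stdout.write("\x1b[2K" + line + "\n")  # EL: erase line, rewrite
    sys.stdout.flush()

if __name__ == "__main__":
    sys.stdout.write("\n" * 3)  # reserve a 3-line viewport
    for i in range(5):
        redraw([f"frame {i}", "status: rendering", "-" * 20])
        time.sleep(0.2)
```

The alternative (appending new output and never repainting what has scrolled away) avoids the repaint flicker, but anything already in scrollback can end up garbled if the viewport's layout changes.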
It’s surprising to hear this get chalked up to “it’s the way our TUI library works”, while e.g. opencode is going to the lowest level and writing their own TUI backend. I get that we can’t expect everyone to reinvent the wheel, but it feels symptomatic of something that folks are willing to chalk up their issues as just being an unfortunate and unavoidable symptom of a library they use, rather than deeming that unacceptable and going to the lowest level.
CC is one of the best and most innovative pieces of software of the last decade. Anthropic has so much money. No judgment, just curious, do you have someone who’s an expert on terminal rendering on the team? If not, why? If so, why choose a buggy / poorly designed TUI library — or why not fix it upstream?
We started by using Ink, and at this point it’s our own framework due to the number of changes we’ve made to it over the months. Terminal rendering is hard, and it’s less that we haven’t modified the renderer, and more that there is this pretty fundamental tradeoff with terminal rendering that we have been navigating.
Other terminal apps make different tradeoffs: for example Vim virtualizes scrolling, which has tradeoffs like the scroll physics feeling non-native and lines getting fully clipped. Other apps do what Claude Code does but don’t re-render scrollback, which avoids flickering but means the UI is often garbled if you scroll up.
As someone who's used Claude Code daily since the day it was released, the sentiment back then (sooo many months ago) was that the agentic CLI coding TUIs were kind of experimental proof-of-concepts. We have seen them be incredibly effective, and the CC team has continued to add features.
Tech debt isn't something that even experienced large teams are immune to. I'm not a huge TypeScript fan, so their choice to run the app on Node felt to me like a trade-off: development speed, given the experience the team had, at the expense of long-term growth and performance. I regularly experience pretty intense flickering and rendering issues, high CPU usage, and even crashes, but that doesn't stop me from finding the product incredibly useful.
Developing good software especially in a format that is relatively revolutionary takes time to get right and I'm sure whatever efforts they have internally to push forward a refactor will be worth it. But, just like in any software development, refactors are prone to timeline slips and scope creep. A company having tons of money doesn't change the nature of problem-solving in software development.
That issue is the fourth most-reacted issue and the third most-commented open issue, and the two things above it are feature requests. It seems like you should at the very least have someone pop in to say "working on it" if that's what you're doing, instead of letting it sit there for 4 months?
Thanks for the reply (and for Claude Code!). I've seen improvement on this particular issue already with the last major release, to the extent that it's not a day to day issue for me. I realise Github issues are not the easiest comms channel especially with 100s coming in a day, but occasional updates on some of the top 10 commented issues could perhaps be manageable and beneficial.
How about giving us the basic UX stuff that all other AI products have? I've been posting this ever since I first tried Claude: Let us:
* Sign in with Apple on the website
* Buy subscriptions from iOS In App Purchases
* Remove our payment info from our account before the inevitable data breach
* Give paying subscribers an easy way to get actual support
As a frequent traveller, I'm not sure if some of those features are gated by region, because some people said they can do some of those things. But if that is true, it still makes the UX worse than the competitors'.
It's entirely possible they don't have the ability in house to resolve it. Based on the report this is a user interface issue. It could just be some strange setting they enabled somewhere. But it's also possible it's the result of some dependency 3 or 4 levels removed from their product. Even worse, it could be the result of interactions between multiple dependencies that are only apparent at runtime.
>It's entirely possible they don't have the ability in house to resolve it.
I've started breathing a little easier about the possibility of AI taking all our software engineering jobs after using Anthropic's dev tools.
If the people making the models and tools that are supposed to take all our jobs can't even fix their own issues in a dependable and expedient manner, then we're probably going to be ok for a bit.
This isn't a slight against Anthropic, I love their products and use them extensively. It's more a recognition of the fact that the more difficult aspects of engineering are still quite difficult, and in a way LLMs just don't seem well suited for.
Seems these users are getting it in VS Code, while I am getting the exact same thing using Claude Code on a Linux server over SSH from Windows Terminal. At this point their app has to be the only thing in common?
That's certainly an interesting observation. I wonder if they produce one client that has some kind of abstraction layer for the user interface & that abstraction layer has hidden or obscured this detail?
The novelty of LLMs is wearing off, people are beginning to understand them for what they are and what they are capable of, and performance has been plateauing. I think that's why people are starting to worry that the AI bubble is a repeat of the dotcom bubble, which accompanied a similar technological revolution.
Did they even fail? Llama2 was groundbreaking for open source LLMs, it defined the entire space. Llama3 was a major improvement over Llama2. Just because Llama4 was underwhelming, it's silly to say they failed.
> Generative AI in final outputs or productive work undermines the foundation of their future success vis a vis discounting or dismissing IP Law and Rights
It goes beyond just IP law compliance. Creativity is their core competency and competitive differentiator. If you replace that with AI slop, then your product becomes almost indistinguishable from that of everyone else producing AI slop.
IMO, they're striking exactly the right balance: use AI as a creative aid and productivity booster, not something that produces the critical aspects of the final product.