
> at least for now.

I know I could be eating my words, but there is basically no evidence to suggest it ever becomes as exceptional as the kingmakers are hoping.

Yes, it advanced extremely quickly, but that is not a confirmation of anything. It could just be the technology quickly meeting us at either our limit of compute or its limit of capability.

My thinking here is that we already had the LLM technology and the compute, but we hadn't yet had the reason or the capital to deploy it at this scale.

So the surprising innovation of transformers did not give us the boost in capability by itself; it still needed scale. The marketing that attracted the capital, which in turn enabled that scale, is what caused the insane growth, and capital can't grow forever; it needs returns.

Scale has been exponential, and we are hitting an insane amount of capital deployment for this one technology, which has yet to prove commercially viable at the scale of a paradigm shift.

Are businesses that are not AI-based actually seeing ROI on AI spend? That is really the only question that matters, because if the answer is no, the money and drive for the technology vanish, and the scale that enables it disappears too.



> Yes, it advanced extremely quickly, but that is not a confirmation of anything. It could just be the technology quickly meeting us at either our limit of compute or its limit of capability.

To comment on this, because it's the most common counterargument: most technology has worked in steps. We take a step forward, then iterate on essentially the same thing. It's very rare that we see an order-of-magnitude improvement on the same fundamental "step".

Cars were quite a step forward from donkeys, but modern cars are not that far off from the first ones. Planes were an amazing invention, but the next model of plane is basically the same thing as the first one.


I agree, I think we are in the latter phase already. LLMs were a huge leap in machine learning, but everything after has been steps on top + scale.

I think we would need another leap to actually meet the market's expectations of AI. The market is expecting AGI, but I think we are probably just going to make incremental improvements to language and multimodal models from here, and not meet those expectations.

I think the market is relying on something that doesn't currently exist to become true, and that is a bit irrational.


Transformers aren't it, though. We need a new fundamental architecture and, just like every step forward in AI that came before, when that happens is a completely random event. Some researcher needs to wake up with a brilliant idea.

The explosion of compute and investment could mean that we have more researchers available for that event to happen, but at the same time transformers are sucking up all the air in the room.


Several people hinted at the limits this technology was about to face, including training data and compute. It was obvious it had serious limits.

Despite the warnings, companies insisted on marketing superintelligence nonsense and magic automatic developers. They convinced the market with disingenuous demonstrations, which, again, were called out as bullshit by many people. They are still doing it. It's the same thing.


> Yes, it advanced extremely quickly

The things that impress me about gpt-5 are basically the same ones that impressed me about gpt-3. For all the talk about exponential growth, I feel like we experienced one big technical leap forward and have spent the past 5 years fine-tuning the result—as if fiddling with it long enough will turn it into something it is not.


When building their LLMs, the model makers consumed the entire internet. This allowed the models to improve exponentially fast. But there's no more internet to consume. Yes, new data is being generated, but not at anywhere near the rate the models were growing in capability just a year ago. That's why we're seeing diminishing returns when comparing, say, GPT-5 to GPT-4.

The AI marketers, accelerationists and doomers may seem to be different from one another, but the one thing they have in common is their adherence to an extrapolationist fallacy. They've been treating the explosion of LLM capabilities as a promise of future growth and capability, when in fact it's all an illusion. Nothing achieves indefinite exponential growth. Everything hits a wall.


> Yes, it advanced extremely quickly,

It did, but it's kinda stagnated now, especially on the LLM front. The time when a groundbreaking model came out every week is over for now. Later revisions of existing models, like GPT-5 and Llama 4, have been underwhelming.


GPT5 may have been underwhelming to _you_. Understand that they're heavily RLing to raise the floor on these models, so while they might not be magically smarter across the board, there are a LOT of areas where they're a lot better that you've probably missed because they're not your use case.


every time i say "the tech seems to be stagnating" or "this model seems worse" based on my observations i get this response. "well, it's better for other use cases." i have even heard people say "this is worse for the things i use it for, but i know it's better for things i don't use it for."

i have yet to hear anyone seriously explain to me a single real-world thing that GPT5 is better at with any sort of evidence (or even anecdote!) i've seen benchmarks! but i cannot point to a single person who seems to think that they are accomplishing real-world tasks with GPT5 better than they were with GPT4.

the few cases i have heard that venture near that ask may be moderately intriguing, but don't seem to justify the overall cost of building and running the model, even if there have been marginal or perhaps even impressive leaps in very narrow use cases. one of the core features of LLMs is they are allegedly general-purpose. i don't know that i really believe a company is worth billions if they take their flagship product that can write sentences, generate a plan, follow instructions and do math and they are constantly making it moderately better at writing sentences, or following instructions, or coming up with a plan and it consequently forgets how to do math, or becomes belligerent, or sycophantic, or what have you.

to me, as a user with a broad range of use cases (internet search, text manipulation, deep research, writing code) i haven't seen many meaningful increases in quality of task execution in a very, very long time. this tracks with my understanding of transformer models, as they don't work in a way that suggests to me that they COULD be good at executing tasks. this is why i'm always so skeptical of people saying "the big breakthrough is coming." transformer models seem self-limiting by merit of how they are designed. there are features of thought they simply lack, and while i accept there's probably nobody who fully understands how they work, i also think at this point we can safely say there is no superintelligence in there to eke out and we're at the margins of their performance.

the entire pitch behind GPT and OpenAI in general is that these are broadly applicable, dare-i-say near-AGI models that can be used by every human as an assistant to solve all their problems and can be prompted with simple, natural language english. if they can only be good at a few things at a time and require extensive prompt engineering to bully into consistent behavior, we've just created a non-deterministic programming language, a thing precisely nobody wants.


The simple explanation for all this, along with the milquetoast replies kasey_junk gave you, is that to its acolytes, AI and LLMs cannot fail, only be failed.

If it doesn't seem to work very well, it's because you're obviously prompting it wrong.

If it doesn't boost your productivity, either you're the problem yourself, or, again, you're obviously using it wrong.

If progress in LLMs seems to be stagnating, you're obviously not part of the use cases where progress is booming.

When you have presupposed that LLMs and this particular AI boom is definitely the future, all comments to the contrary are by definition incorrect. If you treat it as a given that this AI boom will succeed (by some vague metric of "success") and conquer the world, skepticism is basically a moral failing and anti-progress.

The exciting part about this belief system is how little you actually have to point to hard numbers and, indeed, rely on faith. You can just entirely vibe it. It FEELS better and more powerful to you, your spins on the LLM slot machine FEEL smarter and more usable, it FEELS like you're getting more done. It doesn't matter if those things are actually true over the long run, it's about the feels. If someone isn't sharing your vibes about the LLM slot machine, that's entirely their fault and problem.


And on the other side, to detractors, AI and LLMs cannot ever succeed. There's always another goalpost to shift.

If it seems to work well, it's because it's copying training data. Or it sometimes gets something wrong, so it's unreliable.

If they say it boosts their productivity, they're obviously deluded as to where they're _really_ spending time, or what they were doing was trivial.

If they point to improvements in benchmarks, it's because model vendors are training to the tests, or the benchmarks don't really measure real-world performance.

If the improvements are in complex operations where there aren't benchmarks, their reports are too vague and anecdotal.

The exciting part about this belief system is how little you have to investigate the actual products, and indeed, you can simply rely on a small set of canned responses. You can just entirely dismiss reports of success and progress; that's completely due to the reporter's incompetence and self-delusion.


I work in a company that's "all in on AI" and there's so much BS being blown up, just because they can't let it fail without all the top dogs getting mud on their faces. They're literally just faking it: making up numbers, using biased surveys, making sure employees know it's "appreciated" if they choose option A, "Yes, AI makes me so much more productive", etc.

This is definitely something that biases me against AI, sure. Seeing how the sausage is made doesn't help. Because it's really a lot of offal right now especially where I work.

I'm a very anti-corporate non-teamplayer kinda person so I tend to be highly critical, I'll never just go along with PR if it's actually false. I won't support my 'team' if it's just wrong. Which often rubs people the wrong way at work. Like when I emphasised in a training that AI results must be double checked. Or when I answered in an "anonymous" survey that I'd rather have a free lunch than "copilot" and rated it a 2 out of 5 in terms of added value (I mean, at the time it didn't even work in some apps)

But I'm kinda done with soul-killing corporatism anyway. Just waiting for some good redundancy packages when the AI bubble collapses :)


> If they say it boosts their productivity, they're obviously deluded as to where they're _really_ spending time, or what they were doing was trivial.

A pretty substantial number of developers are doing trivial edits to business applications all over the globe, pretty much continuously. At least in the low-to-mid double-digit percentages.


wouldn't call myself a detractor. i wouldn't call it a belief system i hold (i am an engineer 20 years into my career and would love to automate away the tedious parts of my job i've done a thousand times) so much as a position i hold based on the evidence i've seen in front of me.

i constantly hear that companies are running with "50% of their code written by AI!" but i've yet to meet an engineer who says they've personally seen this. i've met a few who say they see it through internal reporting, though it's not the case on their team. this is me personally! i'm not saying these people don't exist. i've heard it much more from senior leadership types i've met in the field - directors, vps, c-suite, so on.

i constantly hear that AI can do x, y, or z, but no matter how many people i talk to or how much i or my team works towards those goals, it doesn't really materialize. i can accept that i may be too stupid (though i'd argue that if that's the problem, the AI isn't as good as claimed) but i work with some brilliant people and if they can't see results, that means something to me.

i see people deploying the tool at my workplace, and recently had to deal with a situation where leadership was wondering why one of our top performers had slowed down substantially and gotten worse, only to find that the timeline exactly aligned with them switching to cursor as their IDE.

i read papers - lots of papers - and articles about both positive and negative assertions about LLMs and their applicability in the field. i don't feel like i've seen compelling evidence in research not done by the foundation model companies that supports the theory this is working well. i've seen lots of very valid and concerning discoveries reported by the foundation model companies, themselves!

there are many places in the world i am a hardliner on no generative AI and i'll be open about that - i don't want it in entertainment, certainly not in music, and god help me if i pick up the phone and call a company and an agent picks up.

for my job? i'm very open to it. i know the value i provide above what the technology could theoretically provide, i've written enough boilerplate and the same algorithms and approaches for years to prove to myself i can do it. if i can be as productive with less work, or more productive with the same work? bring it on. i am not worried about it taking my job. i would love it to fulfill its promise.

i will say, however, that it is starting to feel telling that when i lay out any sort of reasoned thought on the issue that (hopefully) exposes my assumptions, biases, and experiences, i largely get vague, vibes-based answers, unsourced statistics, and responses that heavily carry the implication that i'm unwilling to be convinced or being dogmatic. i very rarely get thoughtful responses, or actual engagement with the issues, concerns, or patterns i write about. oftentimes refutations of my concerns or issues with the tech are framed as an attack on my willingness to use or accept it, rather than a discussion of the technology on its merits.

while that isn't everything, i think it says something about the current state of discussion around the technology.


You really thought you had a post with this one huh. I have second-hand embarrassment for you.


Claude Sonnet 4.5 is _way_ better than previous sonnets and as good as Opus for the coding and research tasks I do daily.

I rarely use Google search anymore, both because llms got that ability embedded and because the chatbots are good at looking through the swill that search results have become.


"it's better at coding" is not useful information, sorry. i'd love to hear tangible ways it's actually better. does it still succumb to coding itself in circles, taking multiple dependencies to accomplish the same task, applying inconsistent, outdated, or non-idiomatic patterns for your codebase? has compliance with claude.md files and the like actually improved? what is the round trip time like on these improvements - do you have to have a long conversation to arrive at a simple result? does it still talk itself into loops where it keeps solving and unsolving the same problems? when you ask it to work through a complex refactor, does it still just randomly give up somewhere in the middle and decide there's nothing left to do? does it still sometimes attempt to run processes that aren't self-terminating to monitor their output and hang for upwards of ten minutes?

my experience with claude and its ilk is that they are insanely impressive in greenfield projects and collapse quickly in legacy codebases. they can be a force multiplier in the hands of someone who actually knows what they're doing, i think, but even the evidence for that is pretty shaky: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

the pitch that "if i describe the task perfectly in absolute detail it will accomplish it correctly 80% of the time" doesn't appeal to me as a particularly compelling justification for the level of investment we're seeing. actually writing the code is the simplest part of my job. if i've done all the thinking already, i can just write the code. there's very little need for me to then filter that through a computer with an overly-verbose description of what i want.

as for your search results issue: i don't entirely disagree that google is unusable, but having switched to kagi... again, i'm not sure the order of magnitude of complexity of searching via an LLM is justified? maybe i'm just old, but i like a list of documents presented without much editorializing. google has been a user-hostile product for a long time, and its particularly recent quality collapse has been well-documented, but this seems a lot more a story of "a tool we relied on has gotten measurably worse" and not a story of "this tool is meaningfully better at accomplishing the same task." i'll hand it to chatgpt/claude that they are about as effective as google was at directing me to the right thing circa a decade ago, when it was still a functional product - but that brings me back to the point that "man, this is a lot of investment and expense to arrive at the same result way more indirectly."


You asked for a single anecdote of llms getting better at daily tasks. I provided two. You dismissed them as not valuable _to you_.

It’s fine that your preferences are such that you don’t value the models or the improvements that we’ve seen. It’s troubling that you use that to suggest there haven’t been improvements.


you didn't provide an anecdote. you just said "it's better." an anecdote would be "claude 4 failed in x way, and claude 4.5 succeeds consistently." "it is better" is a statement of fact with literally no support.

the entire thrust of my statement was "i only hear nonspecific, vague vibes that it's better with literally no information to support that concept" and you replied with two nonspecific, vague vibes. sorry i don't find that compelling.

"troubling" is a wild word to use in this scenario.


My one-shot rate for unattended prompts (triggered via GitHub Actions) has gone from about 2 in 3 to about 4 in 5 with my upgrade to 4.5 in the codebase I program in the most (one built largely pre-AI). These are highly biased toward tasks I expect AI to do well.

Since the upgrade I don’t use opus at all for planning and design tasks. Anecdotally, I get the same level of performance on those because I can choose the model and I don’t choose opus. Sonnet is dramatically cheaper.

What’s troubling is that you made a big deal about not hearing any stories of improvements as if your bar was very low for said stories, then immediately raised the bar when given them. It means that one doesn’t know what level of data you actually want.


Requesting concrete examples isn’t a high bar. "Autopilot got better" tells me effectively nothing. "Autopilot can now handle stoplights" does.


“ but i cannot point to a single person who seems to think that they are accomplishing real-world tasks with GPT5 better than they were with GPT4.”

I don’t use OpenAI stuff but I seem to think Claude is getting better for accomplishing the real world tasks I ask of it.


Specifics are worth talking about. I just felt it unfair to complain about raising the bar when you didn’t initially reach it.

In your own words: “You asked for a single anecdote of llms getting better at daily tasks.”

Which is already less specific than their request: “i'd love to hear tangible ways it's actually better.”

Saying “is getting better for accomplishing the real world tasks I ask of it” brings nothing to a discussion and was the kind of vague statement that they were initially complaining about. If LLMs are really improving, it’s not a major hurdle to say something meaningful about what specifically is getting better. /tilting at windmills


Here's one. I have a head to head "benchmark" involving generating a React web app to display a Gantt chart, add tasks, layer overlaps, read and write to files, etc. I compared implementing this application using both Claude Code with Opus 4.1 / Sonnet 4 (scenario 1) and Claude Code 2 with Sonnet 4.5 (scenario 2) head to head.

The scenario 1 setup could complete the application but it had about 3 major and 3 minor implementation problems. Four of those were easily fixed by pointing them out, but two required significant back and forth with the model to resolve.

The scenario 2 setup completed the application and there were four minor issues, all of which were resolved with one or two corrective prompts.

Toy program, single run through, common cases, stochastic parrot, yadda yadda, but the difference was noticeable in this direct comparison and in other work I've done with the model I see a similar improvement.

Take from that what you will.


so to clarify your case, you are having it generate a new application, from scratch, and then benchmarking the quality of the output and how fast it got to the solution you were seeking?

i will concede that in this arena, there does seem to be meaningful improvement.

i said this in one of my comments in this thread, but the place i routinely see the most improvement in output from LLMs (and find they perform best) for code generation is in green field projects, particularly ones whose development starts with an agent. some facts that make me side-eye this result (not yours in particular, just any benchmark that follows this model):

- the codebase, as long as a single agent and model are working on it, is probably suited to that model's biases and thus implicitly easier for it to work in and "understand."

- the codebase is likely relatively contained and simple.

- the codebase probably doesn't cross domains or require specialized knowledge of services or APIs that aren't already well-documented on the internet or weren't built by the tool.

these are definitely assumptions, but i'm fairly confident in their accuracy.

one of the key issues i've had approaching these agents is that all my "start with an LLM and continue" projects actually start incredibly impressively! i was pretty astounded even on the first version of claude code - i had claude building a service, web management interface AND react native app, in concert, to build an entire end to end application. it was great! early iteration was fast, particularly in the "mess around and find out what happens" phase of development.

where it collapsed, however, was when the codebase got really big, and when i started getting very opinionated about outcomes. my claude.md file grew and grew and seemed to enforce less and less behavior, and claude became less and less likely to successfully refactor or reuse code. this also tracks with my general understanding of what an LLM may be good or bad at - it can only hold so much context, and only as textual examples, not very effectively as concepts or mental models. this ultimately limits its ability to reason about complex architecture. it rapidly became faster for me to just make the changes i envisioned, and then claude became more of a refactoring tool that i very narrowly applied when i was too lazy to do the text wrangling myself.

i do believe that for rapid prototyping - particularly the case of "product manager trying to experiment and figure out some UX" - these tools will likely be invaluable, if they can remain cost effective.

the idea that i can use this, regularly, in the world of "things i do in my day-to-day job" seems a lot more far fetched, and i don't feel like the models have gotten meaningfully better at accomplishing those tasks. there's one notable exception of "explaining focused areas of the code", or as a turbo-charged grep that finds the area in the codebase where a given thing happens. i'd say that the roughly 60-70% success rate i see in those tasks is still a massive time savings to me because it focuses me on the right thing and my brain can fill in the rest of the gaps by reading the code. still, i wouldn't say its track record is phenomenal, nor do i feel like the progress has been particularly quick. it's been small, incremental improvements over a long period of time.

i don't doubt you've seen an improvement in this case (which is, as you admit, a benchmark) but it seems like LLMs keep performing better on benchmarks but that result isn't, as far as i can see, translating into improved performance on the day-to-day of building things or accomplishing real-world tasks. specifically in the case of GPT5, where this started, i have heard very little if any feedback on what it's better at that doesn't amount to "some things that i don't do." it is perfectly reasonable to respond to me that GPT5 is a unique flop, and other model iterations aren't as bad, in that case. i accept this is one specific product from one specific company - but i personally don't feel like i'm seeing meaningful evidence to support that assertion.


Thank you for the thoughtful response. I really appreciate the willingness to discuss what you've seen in your experience. I think your observations are pretty much exactly correct in terms of where agents do best. I'd qualify in just a couple areas:

1. In my experience, Claude Code (I've used several other models and tools, but CC performs the best for me so that's my go-to) can do well with APIs and services that are proprietary as long as there's some sort of documentation for them it can get to (internal, Swagger, etc.), and you ensure that the model has that documentation prominently in context.

2. CC can also do well with brownfield development, but the scope _has_ to be constrained, either to a small standalone program or a defined slice of a larger application where you can draw real boundaries.

The best illustration I've seen of this is in a project that is going through final testing prior to release. The original "application" (I use the term loosely) was a C# DLL used to generate data-driven prescription monitoring program reporting.

It's not ultra-complicated but there's a two step process where you retrieve the report configuration data, then use that data to drive retrieval and assembly of the data elements needed for the final report. Formatting can differ based on state, on data available (reports with no data need special formatting), and on whether you're outputting in the context of transmission or for user review.

The original DLL was written in a very simplistic way, with no testing and no way to exercise the program without invoking it from its link points embedded in our main application. Fixing bugs and testing those fixes were both very painful as for production release we had to test all 50 states on a range of different data conditions, and do so by automating the parent application.

I used Claude Code to refactor this module, add DI and testing, and add a CLI that could easily exercise the logic in all different supported configurations. It took probably $50 worth of tokens (this was before I had a Max account, so it was full price) over the course of a few hours, most of which time I was in other meetings.

The final result did exhibit some classic LLM problems -- some of the tests were overspecific, it restructured without always fully cleaning up the existing functions, and it messed up a couple of paths through the business logic that I needed to debug and fix. But it easily saved me a couple days of wrestling with it myself, as I'm not super strong with C#. Our development teams are fully committed, and if I hadn't used CC for this it wouldn't have gotten done at all. Being able to run this on the side and get a 90% result I could then take to the finish line has real value for us, as the improved testing alone will see an immediate payback with future releases.

This isn't a huge application by any means, but it's one example of where I've seen real value that is hitting production, and it seems representative of a decently large category of line-of-business modules. I don't think there's any reason this wouldn't replicate on similarly-scoped products.


The biggest issue with Sonnet 4.5 is that it's chatty as fuuuck. It just won't shut up, it keeps producing massive markdown "reports" and "summaries" of every single minor change, wasting precious context.

With Sonnet 4 I rarely ran out of quota unexpectedly, but 4.5 chews through whatever little Anthropic gives us weekly.


GPT-5 isn't an improvement to me, but Claude Sonnet 4.5 handles Terragrunt way, way better than the previous version did. It also goes and searches AWS documentation by itself, and parses external documents way better. That's not LLM improvement, to be clear (except the Terragrunt thing); I think it's improvement in data acquisition and a better inference engine. On React projects it also seems way, way less messy. I have to use it more, but the inference engine seems clearer, or at least less prone to circular code, where it gets stuck in a loop. It seems to exit the loop faster, even when the output isn't satisfactory (which isn't an issue for me; most of my prompts say, more or less, 'only write function templates, do not write the inside logic if it has to contain more than a loop', and I fill in the blanks myself).


I’m curious what you are expecting when you say progress has stagnated?


>> The marketing that attracted the capital, which in turn enabled that scale, is what caused the insane growth, and capital can't grow forever,

Striking parallels between AI and food delivery (uber eats, deliveroo, lieferando, etc.) ... burn capital for market share/penetration but only deliver someone else's product with no investment to understand the core market for the purpose of developing a better product.


> I know I could be eating my words, but there is basically no evidence to suggest it ever becomes as exceptional as the kingmakers are hoping.

??? It has already become exceptional. In 2.5 years (since chatgpt launched) we went from "oh, look how cute this is, it writes poems and the code almost looks like python" to "hey, this thing basically wrote a full programming language[1] with genz keywords, and it mostly works, still has some bugs".

I think goalpost moving is at play here, and we quickly forget how big a difference one year makes (last year you needed tons of glue and handwritten harnesses to do anything - see aider), while today you can give them a spec and get a mostly working project (albeit with some bugs), $50 later.

[1] - https://github.com/ghuntley/cursed


I don't disagree with you on the technology; my comment is mostly about what the market is expecting. With such huge capex, it is expecting huge returns. Given that AI has not generally proven consistent ROI for other enterprises (as far as I know), they are hoping for something better than what exists right now, and they are hoping for it to happen before the money runs out.

I am not saying it's impossible, but there is no evidence that the leap in technology needed to reach the wild profitability such investment desires (replacing general labour) is just around the corner either.


After 3 years, I would like to see pathways.

Let's say we found a company that has already realized 5-10% savings in a first step. Now, based on this, we might be able to map out the path to 25-30% savings, in 5% steps for example.

I personally haven’t seen this, but I might have missed it as well.


Three years? One year ago I tried using LLMs for coding and found them to be more trouble than they were worth: no benefit in time spent or effort expended. It's only within the past several months that this has changed, IMHO.


To phrase this another way, using old terms: We seem to be approaching the uncanny valley for LLMs, at which point the market overall will probably hit the trough of disillusionment.


It doesn't really matter what the market is expecting at this point, the president views AI supremacy as non-negotiable. AI is too big to fail.


It’s true, but not just the presidency. The whole political class is convinced that this is the path out of all their problems.


...Is it the whole political class?

Or is it the whole political party?


I am not from the US, but your administration could still fumble the AI bust even if it wants to avoid it. Who knows, maybe they are hoping to short it.


That there is a bubble is absolutely certain, if for no other reason than that investors don't understand the technology and don't know which companies are for real and which are essentially scams; they dump money into anything with the veneer of AI and hope some of it sticks. We're replaying the dotcom bubble: a lot of people are going to get burned, and a lot of companies will turn out to be crap. But at the end of the dotcom crash we had some survivors standing above the rest, and the whole internet thing turned out to have considerable staying power. I think the same will happen with AI, particularly agentic coding tools. The technology is real and will stick with us, even after the bubble and crash.


I feel like the invention of MCP was a lot more instrumental to that than model upgrades proper. But look at it as a good thing, if you will: it shows that even if models are plateauing, there's a lot of value to unlock through the tooling.
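To make "tooling" concrete: an MCP server is not much more than a process that advertises a few typed functions to a client. A minimal sketch, assuming the official Python mcp SDK's FastMCP helper (the server name and the read_file tool here are made up for illustration):

    # Minimal MCP server sketch (assumes: pip install mcp).
    # "demo-tools" and read_file are illustrative, not a published server.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-tools")

    @mcp.tool()
    def read_file(path: str) -> str:
        """Return a file's contents so the client model can inspect it."""
        with open(path, "r", encoding="utf-8") as f:
            return f.read()

    if __name__ == "__main__":
        mcp.run()  # speaks the protocol over stdio by default

A client (a chat app, an IDE, an agent harness) launches this process, discovers the tool, and lets the model call it mid-conversation; that kind of plumbing improves independently of the models themselves.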


> it shows that even if models are plateauing,

The models aren't plateauing (see below).

> invention of MCP was a lot more instrumental [...] than model upgrades proper

Not clear. The folks at hf showed that a minimal "agentic loop" in 100 LoC [1] that gives the agent "just bash access" still got very close to SotA setups with all the bells and whistles (and surpassed last year's models w/ handcrafted harnesses).

[1] - https://github.com/SWE-agent/mini-swe-agent
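For a sense of what such a loop looks like, here is a rough sketch of the shape: ask the model for one shell command at a time, run it, and feed the output back. This is a hedged illustration, not the mini-swe-agent code; the system prompt, the DONE convention, and the model name are assumptions, and it uses the OpenAI Python client for the model call.

    # Sketch of a minimal "LLM + bash" agent loop (not the actual mini-swe-agent).
    # Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
    import subprocess
    from openai import OpenAI

    client = OpenAI()

    SYSTEM = ("You are a coding agent. Reply with exactly one bash command per "
              "turn, no prose. Reply with the single word DONE when finished.")

    def run_agent(task: str, max_steps: int = 20) -> None:
        messages = [{"role": "system", "content": SYSTEM},
                    {"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative; any capable chat model works
                messages=messages,
            ).choices[0].message.content.strip()
            if reply == "DONE":
                break
            # Run the proposed command and feed the (truncated) output back.
            result = subprocess.run(reply, shell=True, capture_output=True,
                                    text=True, timeout=120)
            observation = (result.stdout + result.stderr)[-4000:]
            messages.append({"role": "assistant", "content": reply})
            messages.append({"role": "user", "content": "Output:\n" + observation})

    if __name__ == "__main__":
        run_agent("List the five largest files in the current directory.")

The point of that result is that almost all of the capability lives in the model; a harness this small does very little of the work.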


Small focused (local) model + tooling is the future, not online LLMs with monthly costs. Your coding model doesn't need all of the information in the world built in, it needs to know code and have tools available to get any information it needs to complete its tasks. We have treesitter, MCPs, LSPs, etc - use them.

The problem is that all the billions (trillions?) of VC money go to the online models because they're printing money at this point.

There's no money to be made in creating models people can run locally for free.
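As a rough illustration of the "local model + tooling" setup (a sketch, assuming Ollama's OpenAI-compatible endpoint on its default port and a small coding model already pulled locally; swap in whatever you actually run):

    # Point an OpenAI-compatible client at a local server instead of a hosted API.
    # Assumes Ollama is listening on its default port; the key is ignored locally.
    from openai import OpenAI

    local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    resp = local.chat.completions.create(
        model="qwen2.5-coder:7b",  # illustrative small coding model
        messages=[{"role": "user",
                   "content": "Write a Python function that reverses a string."}],
    )
    print(resp.choices[0].message.content)

The agent loops and MCP servers sketched upthread work unchanged against an endpoint like this; nothing about that tooling requires a hosted frontier model.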


I mean, that's still proving the point that tooling matters. I don't think his point was "MCP as a technology is extraordinary" because it's not.


MCP is a marketing ploy, not an “invention”.


It is an actual invention that has concrete function, whether or not it was part of a marketing push.


I didn't realize generating the gen-z programming language was a goalpost in the first place


The question in your last paragraph is not the only one that matters. Funding the technology at a material loss will not be off the table. Think about why.


Just tell us why you think funding at a loss at this scale is viable, don’t smugly assign homework


Apologies, not meant to be smug


...But you did fully intend to assign homework? Why are you even commenting, what are you adding?



