It works fine for webapps and other slop-adjacent projects.
If you try to do anything outside of typical n-tiered apps (e.g. implement a well documented wire protocol with several reference implementations on a microcontroller) it all falls apart very very quickly.
If the protocol is even slightly complex then the docs/reqs won't fit in the context with the code. Bootstrapping / initial bring-up of a protocol should be really easy but Claude struggles immensely.
> (e.g. implement a well documented wire protocol with several reference implementations on a microcontroller)
I have had an AI assistant reverse engineer a complex TCP protocol (three simultaneous connections, each with a different purpose, all binary stuff) from a bunch of PCAPs and then build a working Python server to speak that protocol to a 20-year-old Windows XP client. Granted, it took two tries: Claude Opus 4.1 (this was late September) was almost up to the task, but kept making small mistakes in its implementation that were getting annoying. So I started fresh with Codex CLI, and GPT-5.1-Codex had a working version in a couple of hours. Model and tool quality can have a huge impact on this stuff.
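To give a sense of the shape of the thing, here's a heavily simplified sketch of a multi-connection binary TCP server in Python. The port numbers, connection roles, and length-prefixed framing are placeholders for illustration, not the actual protocol:

    import asyncio
    import struct

    # Placeholder ports; the real protocol used three separate TCP
    # connections, each with its own job (these role names are made up).
    PORTS = {"control": 5000, "data": 5001, "status": 5002}

    async def read_frame(reader: asyncio.StreamReader) -> bytes:
        # Assume a simple length-prefixed binary framing: 2-byte big-endian
        # length followed by the payload. The real wire format differed.
        header = await reader.readexactly(2)
        (length,) = struct.unpack(">H", header)
        return await reader.readexactly(length)

    def make_handler(role: str):
        async def handle(reader, writer):
            print(f"{role} connection from {writer.get_extra_info('peername')}")
            try:
                while True:
                    payload = await read_frame(reader)
                    # Protocol-specific decode/respond logic would go here;
                    # this placeholder just echoes the frame back.
                    writer.write(struct.pack(">H", len(payload)) + payload)
                    await writer.drain()
            except (asyncio.IncompleteReadError, ConnectionResetError):
                pass
            finally:
                writer.close()
                await writer.wait_closed()
        return handle

    async def main():
        servers = [await asyncio.start_server(make_handler(role), "0.0.0.0", port)
                   for role, port in PORTS.items()]
        await asyncio.gather(*(s.serve_forever() for s in servers))

    asyncio.run(main())

The asyncio scaffolding is the easy part; the per-connection decode/respond logic (stubbed out here) is what the models actually had to get right.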
The sloppier a web app is, the more CSS frameworks are fighting for control of every pixel, and simply deleting 500,000 files to clear out your node_modules brings Windows to its knees.
On the other hand, anything you can fit in a small AVR-8 isn't very big.
Yep, but I don’t intend to let that happen to my web app! It’s not that big and I intend to keep it that way.
Dependencies are minimal. There’s no CSS framework yet and it’s a little messy, but I plan to do an audit of HTML tag usage, CSS class usage, and JSX component usage. We (the coding agent and I) will consider whether Tailwind or some other framework would help or not. I’ll ask it to write a design doc.
I’m also using Deno which helps.
Greenfield personal projects can be fun. It’s tough to talk about programming in the abstract when projects vary so much.
I've been working with an agent to make a web-based biofeedback "application" which is really a toolbox of components you can slap together to support
- heart rate via Polar H10
- respiration rate via strap-on device
- GSR and EMG via arduino + web serial
- radar-based respiration (SOTA says you can get R-R intervals as good as the H10 if you're not moving)
and even do things like a 2-player experience. The code is beautiful, pure CSS the way it was supposed to be, visualizations with D3.js. I do "npm install" and can't get over the 0 vulnerability count. It's coding with React that's 100% fun, with none of the complaints I usually have.
Given the amount of Arduino code that existed at the time LLMs were trained, I would have to agree that AVR-8 might be fine. For now it's on the Cortex-M struggle bus.
2026 is starting with half-baked Nvidia drivers and missing functionality on Linux? I am so surprised... did you try 17 different previous versions to get it running in true NV-Linux fashion?
This stuff has been flawless on AMD systems for a couple of years now, with the exception of the occasional archaic app that only runs on X11 (thus shoved in a container).
Flawless on AMD? Absolutely not. 2-3 years ago there was an amdgpu bug that froze my entire laptop randomly, with no recourse beyond the 4-second power-button press. After that was fixed, it sometimes got stuck on shutdown. Now it doesn't do that randomly anymore, but all it takes to break it is turning off the power to my external monitor (or the monitor powering off by itself to save energy) or unplugging it, after which it can no longer be used without rebooting, and then it sometimes gets stuck on shutdown.
Clarification: The AMD iGPU driver (or Chrome) on Ubuntu 24.04 has bugs on your hardware. You could try a newer and different distro (just using a live-USB) to see if that has been fixed.
> It's puzzling to me why some still don't understand the systemic incentives...
Then I guess you're the type who will be really surprised to learn that with diminishing rewards comes increasing consolidation.
> ... that make all this work as it has for 16 years and counting...
That's a convenient way to memory-hole the market flash crashes, the network forks, the blocks mined without consensus, and everything else bad that happened over that timeframe.
The OP is hardly anywhere near as sensational as the latest AI-generated GitHub something-or-another typically posted here. I found the article extremely useful and would not otherwise have been aware that it affected MORE THAN ONE product line. Please don't let @dang bury this IMO. If you have an alternative URL, please post it!
He's not intentionally sensationalist; he's just flat-out wrong. An uninformed piece like this does not belong here IMO without heavy context provided front and center.
Not only that, but the symptoms of hypoglycemia change over your life, so that what you feel today (e.g. excessive sweating, blurred vision) may be totally different in the future (e.g. confusion, tingling thighs). Or you lose that sense of feeling entirely and never notice a problem until it's way too late to easily remedy.
The existing online mass is what attracted the VC in the first place, same as it ever was. It was mostly privately funded and very much a confederacy (AOL vs Prodigy vs BBS) at the time, much like now.