Unfortunately, while the intentions around agile were noble, it's pretty much a direct path to burnout or worse. The human mind is not designed to "sprint" through a marathon, metaphorically speaking, forever.
I see older devs being active in the trade well into their 60s, but even as a much younger person I don't see how agile development is sustainable for a ~50-year career.
The thing is, the core agile points from the manifesto are pretty much universally fine and boil down to: "make changes fast, get feedback, gain more understanding faster".
Pretty much everything that's been layered on top, though, either has nothing to do with the manifesto or actively breaks it. E.g.: there's a burning issue, but I'll get to it after my sprint commitment, which was sold as a way to let me finish work but now only exists to stress me out and squeeze more widgets per unit of time, where the widgets almost never map back to anything tangible.
Scrum is like Spaghetti Carbonara in America. The ingredients are simple and there's a tiny bit of technique involved that anybody can figure out after a few tries. For some reason though almost everybody that makes it decides that they know better than the people that invented it and so adulterates it with peas and onions and garlic and cream and cream cheese and Italian seasoning and parsley and chives until it ends up being Olive Garden Alfredo. If they wanted Carbonara then they would have cooked the Carbonara, not the waterfall with a bunch of JIRA workflows and four-hour meetings layered on top. They just did what they would have done anyway while attempting to sound fancy via obfuscation.
It's not just Agile; the same applies to DevOps.
DevOps is a culture. It can also refer to the specific subset of highly skilled individuals who were part of, or an outcome of, that culture's cross-pollination. Today, though, "DevOps" most often means a fairly unskilled person hitting pipelines with a hammer.
In the end, the same old people with the same old commercial interests adopted the term in a way that benefited them, changing its meaning in the process, because actual change was not something anyone wanted.
That's not actually true. To give a silly example: does anyone (seriously) claim that 'true fascism' has never been tried?
Or: liberal democracy (I'm sure you can find a synonym that ends in *-ism) has been tried. It's been doing ok-ish. Obviously it has its warts. But more importantly: approximately no one ever seriously claims that 'real liberal democracy' hasn't been tried.
Similar for constitutional monarchy, or 'social market economy', or dictatorships, etc. People can mostly agree that the real deal has been tried.
Any principle or practice falls apart when tied to an economy. See also: religion, politics, society at large. I don't foresee that changing in our lifetimes, so make money doing it the wrong way, and do it the right way in your free time.
I don’t agree with that, though; plenty of places practice agile well. Maybe big corporations don’t, but startups often do agile correctly and understand the philosophy.
Inasmuch as Agile was adopted at companies, it's because it was sold to them as a way to bring greater transparency, accountability, and control to a chaotic software development process. The vice president behind the company's "Agile Transformation" probably can't even name point one of the manifesto; "we're doing Scrum with JIRA, therefore we're agile" is the extent of his concern.
You say that, but it was way more stressful pre-agile: fixed date, fuck around planning for a few months, then dev, run out of time, then death march to finish, etc.
A lot of young guys like that D-Day style of work followed by goofing off for a while, but not me. I much prefer continuous, sustainable work.
There's a difference between "bad PMs using the tool wrong" and "ordinary, human PMs using the tool in the way that common business incentives would lead you to predict they'll use it".
If the tool is used wrong most of the time, it's at least partially to blame.
"If the tool is used wrong most of the time, it's at least partially to blame."
Only if there were other tools that didn't fall victim to the same business incentives, but they all do.
I've had PMs who find a balance with the business incentives and make it work. If you're human and make the wrong choices, then most people, including me, will likely call you bad in that context. If a PM can't stand up and find that balance, they're a bad PM. That doesn't make them a bad person.
I have seen Scaled Agile Framework (SAFe) work in multiple real program teams. Doing it successfully requires total commitment at multiple levels, and many organizations are culturally incapable of making the transition.
To be clear I am not claiming that SAFe is necessarily the best possible methodology. There is certainly room for improvement. But empirically it can work in real life.
The organizations I've seen do SAFe at the top level "coincidentally" have so many resources that if they do software at the group level at all, they do it the way a gentleman farms.
"I mean if something doesn't ever work in real life then its not good."
Are you talking about Agile, Waterfall, or project management in general?
I've seen Agile work just fine. I've also seen it fail miserably. I've seen both of these at the same company with the main difference being how aggressive/delusional the leadership is. The easy test is if your leadership is legitimately ok with your team going home early if you complete your sprint commitment early, and it actually happens on occasion.
It feels like over the past decade or two, every company that builds software has gradually adopted the same terrible practices commonly found in game development studios.
That is why more advanced agile methodologies such as SAFe use the neutral term "iteration" rather than "sprint". It doesn't imply anything about team velocity or individual workload.
The term "iteration" was in common use by many of the big 90s proto-agile methodologies (think RUP et al). And XP - which I guess is what most people would regard as the first "true" form of agile - used it too.
SAFe is just an attempt to mush together something that looks like agile to delivery teams with something that fits into more traditional program management, governance, and strategic-direction lifecycle models.
There's no particular magic to it, and it's probably better to think of it in terms of being an "enterprise variant of agile" rather than a "humane variant of agile".
Yes, that's accurate. In the real world sometimes software programs have to make compromises in order to align with actual business needs. Like if you're going to be manufacturing hardware to go with the software or scheduling training for users then you have to apply strict program management discipline to ensure everything comes together at the correct time.
Sure, that's fair. In the real world there are a lot of developers who want to be told what to do, or need to be told what to do because they lack business domain knowledge. A defined methodology like SAFe allows large enterprises to move forward at a steady pace and get some productive work out of those people.
The reality is that in some domains there just aren't many developers who are highly motivated, self directed, and thoroughly understand customer needs. Those people just aren't widely available in the labor market regardless of wages or working conditions. So if management doesn't impose a fairly strict methodology then the program will collapse.
I'd make a separate case that learned helplessness is a reversible thing, and more highly motivated and self directed devs can be grown.
But leadership has to incentivize not just being a ticket monkey, and needs to mindfully empower people. You can't just flip a switch in a feature factory and say "fly my pretties, be free!"
Sure, that's also fair. But it takes a long time to turn culture around. And in the meantime the company has to continue shipping releases to customers or else they run out of cash and everyone gets laid off.
In my experience of it, though (only two workplaces, but still), it's used for higher-level planning rather than as a 'more advanced agile'. I.e. a SAFe iteration spans some number of sprints greater than one: what are we going to deliver this quarter, versus how are we going to break down and monitor progress of the quarter's deliverables week by week. (Don't read that as me liking it.)
I think you're confused about the terminology. SAFe doesn't have sprints. Depending on the program planning horizon, several iterations can be grouped together into a larger program increment which typically lasts about a quarter.
Why is anything supposed to be sustainable for a ~50-year career? That’s a long time! Things change and people change.
It’s not like my great grandparents had a passion for farming in South Dakota and that’s why they did it until they dropped dead. It’s all they knew and what they did to survive.
If you gave them the option to tap on a keyboard in an air-conditioned room for 10 or 20 of those years they would’ve taken it.
Agile (or rather modern management) converts human capital into capital as fast as possible. Given the endless supply of developers and the lack of accountability, there is no downside to doing that; you are an externality.
Yup, I got similar quotes. I'm really not going to pay that for a day's work (2 people). The price difference over installing A/C is staggering, and I don't know where it comes from.
That is insane. I paid 1000 EUR for an install on two floors (two indoor units), plus a few hundred for extra copper pipe not included in the quote. It took two guys about 7 hours, and at least an hour of that was figuring out how to get power to the unit with a big enough fuse (my bad).
It's an interesting case. IMO LLMs are not a product in the classical sense, companies like Anthropic are basically doing "basic research" so others can build products on top of it. Perhaps Anthropic will charge a royalty on the API usage. I personally don't think you can earn billions selling $500 subscriptions. This has been shown by the SaaS industry. But it is yet to be seen whether the wider industry will accept such royalty model. It would be akin to Kodak charging filmmakers based on the success of the movie. Somehow AI companies will need to build a monetization pipeline that will earn them a small amount of money "with every gulp", if we are using a soft drink analogy.
Not commenting on the topic at hand, but my goodness, what a beautiful blog. That drop cap, the inline comments on the right hand side that appear on larger screens, the progress bar, chef's kiss. This is what a love project looks like.
AGI in 5/10 years is similar to "we won't have steering wheels in cars" or "we'll be asleep driving" in 5/10 years. Remember that? What happened to that? It looked so promising.
I mean, in certain US cities you can take a Waymo right now. The adage that we overestimate change in the short term and underestimate it in the long term fits right in here.
That's not us though. That's a third party worth trillions of dollars that manages a tiny fleet of robot cars with a huge back-end staff and infrastructure, and only in a few cities covering only about 2-3% of us (in this one country.) We don't have steering wheel-less cars and we can't/shouldn't sleep on our commute to and from work.
I don't think anyone was ever arguing "not only are we going to develop self driving technology but we're going to build out the factories to mass produce self driving cars, and convince all the regulatory bodies to permit these cars, and phase out all the non-self driving vehicles already on the road, and do this all at a price point equal or less than current vehicles" in 5 to 10 years. "We will have self driving cars in 10 years" was always said in the same way "We will go to the moon in 10 years" was said in the early 60s.
The original post (about the bet) is actually pretty reasonable, but some of the predictions listed include: passenger vehicles on American roads will drop from 247 million in 2020 to 44 million in 2030. People really did believe that self-driving was "basically solved" and "about to be ubiquitous." The predictions were specific and falsifiable and in retrospect absurd.
I meant serious predictions. A surprisingly large percentage of people claim the Earth is flat, of course there's going to be baseless claims that the very nature of transportation is about to completely change overnight. But the people actually familiar with the subject were making dramatically more conservative and I would say reasonable predictions.
What Waymo and others are doing is impressive, but it doesn't seem like it will globally generalize. Does it seem like that system can be deployed in chaotic Mumbai, old European cities, or unpaved roads? It requires clear, well maintained road infrastructure and seems closer to "riding on rails" than "drive yourself anywhere".
"Achieving that goal necessitates a production system supporting it" is very different from "If the control system is a full team in a remote location, this vehicle is not autonomous at all" which was what GP said.
I read GP as saying Waymo does indeed have self driving cars, but that doesn't count because such cars are not available for the average person to purchase and operate.
Waymo cars aren't being driven by people at a remote location, they legitimately are autonomous.
Of course. My point is that "AI is going to take dev jobs" is very much like saying "self-driving will take taxi driver jobs". It never happened and likely won't, or only on a very, very long time scale.
To me, it's weird to call it "PhD-level". That, to me, means being able to take in existing information on a very niche area and "push the boundary". I might be wrong, but to date I've never seen any LLM invent "new science", which is what makes a PhD really a PhD. It also seems very confusing that many sources mention "stone age" and "PhD-level" in the same article. Which one is it?
People seem to overcomplicate what LLMs are capable of, but at their core they are just really good word parsers.
Most of the PhDs I know are studying things that I guarantee GPT-5 doesn’t know about… because they’re researching novel stuff.
Also, LLMs don’t have much consistency with how well they’re able to apply the knowledge that they supposedly have. Hence the “lots of almost correct code” stereotype that’s been going around.
I was using the fancy new Claude model yesterday to debug some fast-check tests (a quickcheck-inspired TypeScript lib). Claude could absolutely not wrap its head around the shrinking behavior, which rendered it useless for debugging.
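For anyone who hasn't run into it: "shrinking" is the property-based-testing step where, once a failing input is found, the library repeatedly tries smaller variants of it that still fail, so it can report a minimal counterexample. A hand-rolled sketch of the idea (not fast-check's actual API; the property and candidate strategy here are made up for illustration):

```typescript
// Property under test: deliberately fails for any n >= 10.
const property = (n: number): boolean => n < 10;

// Candidate "smaller" values for an integer: halve toward zero, step one toward zero.
function shrinkCandidates(n: number): number[] {
  const out: number[] = [];
  if (n !== 0) out.push(Math.trunc(n / 2));         // jump halfway toward 0
  if (Math.abs(n) >= 1) out.push(n - Math.sign(n)); // step one toward 0
  return out.filter((c) => c !== n);
}

// Greedy shrink loop: keep replacing the failing value with any smaller
// candidate that still fails, until no candidate reproduces the failure.
function shrink(failing: number): number {
  let current = failing;
  let progress = true;
  while (progress) {
    progress = false;
    for (const candidate of shrinkCandidates(current)) {
      if (!property(candidate)) { // candidate still fails the property
        current = candidate;
        progress = true;
        break;
      }
    }
  }
  return current;
}

// A huge failing input shrinks down to the minimal counterexample, 10.
console.log(shrink(1_000_000)); // → 10
```

The subtlety that tends to confuse both humans and LLMs is that shrinking re-runs the property on inputs the test author never wrote, so a debug log shows many failures that are artifacts of the search, not distinct bugs.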
I would say, any company that doesn't have their own AI developed in-house. You always hear about companies "mandating" AI usage, but for the most part it's companies developing their own solutions/agents. No self-respecting company with tight opsec would allow a random always-online LLM that could rip your codebase either piece by piece or, if it's an IDE addon, the whole thing at once (or at least I hope that's the case). So yeah, I'd say locally deployed LLMs/agents are a game changer.