
From Dan Luu (https://danluu.com/wat/):

> When I joined this company, my team didn't use version control for months and it was a real fight to get everyone to use version control. Although I won that fight, I lost the fight to get people to run a build, let alone run tests, before checking in, so the build is broken multiple times per day. When I mentioned that I thought this was a problem for our productivity, I was told that it's fine because it affects everyone equally. Since the only thing that mattered was my stack-ranked productivity, I shouldn't care that it impacts the entire team; the fact that it's normal for everyone means that there's no cause for concern.

Do not underestimate the ability of developers to ignore good ideas. I am not going to argue that AI is as good as version control. Version control is a more important idea than AI. I sometimes want to argue it's the most important idea in software engineering.

All I'm saying is that you can't assume that good ideas will (quickly) win. Your argument that AI isn't valuable is invalid, whether or not your conclusion is true.

P.S. Dan Luu wrote that in 2015, and it may have been a company he had already left. Version control has mostly won. Still, post-2020, I talked to a friend whose organization did use git, but whose deployed software still didn't correspond to any version checked into git, because they were manually rebuilding components and copying them to production servers piecemeal.



All true, but the argument for AI is that it makes you far more productive as an individual, which if true should be an easy sell. In fact, some developers are quite committed to it, with a fervor I've not seen since the "I'm never going back to the office" movement a few years ago. Version control is more of a "short term pain for long term gain" kind of concept; it is not surprising some people were hard to convince. But "AI" promises increased productivity as an individual, in the here and now. It should not be a hard sell if people found it to work as advertised.


> it makes you far more productive as an individual, which if true should be an easy sell

Writing unit tests where needed makes you more productive in the long run. Writing in modern languages makes you more productive. Remember how people writing assembly thought compiled languages would rot your brain!

But people just resist change and new ways of doing things. They don't care about actual productivity, they care about feeling productive with the tools they already know.

It's a hard sell when an application moves a button! People don't like change. Change is always a hard sell to a lot of people, even when it benefits them.


To the contrary, people resist change for good reasons: changes to tools rob attention and focus from the work, often for completely arbitrary or decorative reasons. Sometimes changes remove or break important aspects of the tool and force someone to waste time developing a new workflow which is, on average, no better than the previous one. It is vanishingly rare that the software team making the changes did sufficiently rigorous testing to show that the new version is a net "benefit" for most users; they don't have time for that. All too often, no significant group of users was even consulted about the changes, which were made for reasons like advancing someone's career ("shipped X feature changes") or making something merely re-arranged look new for the sake of marketing ("the old style was so 2018").

The teams making changes to software are, on average, moderately worse than the teams that originally developed it, if only because they missed out on the early development experience. They often don't fully understand the context and reasons behind the original design, and rather than reasoning from first principles when making updates, they copy the aspects they notice superficially while undermining the principles the design was built on.

Even when the changes are independently advantageous, it is common for changes to one part of a system to gratuitously break a variety of other parts that are dependent on it. Trying to manage and fix a complex web of inter-dependent software which is constantly changing and breaking is an overwhelming challenge for individual humans, and unfortunately often not a sufficient priority for groups and organizations.


> "Remember how people writing assembly thought compiled languages would rot your brain!"

No, I don't remember that, and I've been around a while. (I'm sure one could find a handful of examples of people saying that, but one can also find examples of people sincerely saying that the earth is flat.) It was generally understood that the code emitted by early, simple compilers on early CISC processors wasn't nearly as good as hand-tuned assembly code, but that the trade-off could be worthwhile. Eventually, compilers did get good enough to reduce the cases where hand-tuned assembly could make a difference to essentially nothing, but this was identified through benchmarking by the people who used assembly the most themselves.

If you want to sell us on change, please stop lying right to our faces.


Note also that it took, more or less, a hardware revolution in the form of RISC to make compilers able to compete. A big piece of the RISC philosophy was to make it easier for compiler writers.

They eventually got there (and I expect AI will eventually get there too), but it took a lot of evolution.


Really? X86 isn’t RISC and it ruled the world during, not before, the time of compilers.


Starting with the 386, the ISA got a lot more compiler friendly. Up to the 286, each register had a specialised task (AX, CX, DX, BX meaning Accumulator, Count, Data, and Base register). Instructions worked with specific regs only (xlat, loop). When the 386 and 32 bits happened, the instructions became more generic and easily combinable with any register. I remember people raving over the power of the SIB byte, or the possibility of multiplying any pair of registers. While not RISC, the ISA clearly became easier for compilers to work with, and I remember reading in magazines that this was an explicit design intention.


Lots of x86 assembly out there from that time period. Beating the compiler in the eighties and nineties was a bit of a hobby and lots of people could do it.

Modern ISA designers (including those evolving the x86_64 ISA) absolutely take into account just how easy it is for a compiler to target their new instructions. x86 in modern times has a lot of RISC influence once you get past instruction decode.


> Writing unit tests where needed makes you more productive in the long run.

Debatable? It has positive effects for organizations and for society, but from a selfish point of view, you gain relatively little from writing tests. In your own code, a test might save you debugging time once in a blue moon, but the gains are almost certainly offset by the considerable effort of writing a comprehensive suite of tests in the first place.

Again, it's prudent to have tests for more altruistic reasons, but individual productivity probably ain't it.

> Writing in modern languages makes you more productive.

With two big caveats. First, for every successful modern language that actually makes you more productive, there are 20 that make waves on HN but turn out to be duds. So some reluctance is rational. Otherwise, you end up wasting time learning dead-end languages over and over again.

Second, it's perfectly reasonable to say that Rust or whatever makes an average programmer more productive, but it won't necessarily make a programmer with 30 years of C++ experience more productive. This is simply because it will take them a long time to unlearn old habits and reach the same level of mastery in the new thing.

My point is, you can view these through the prism of rational thinking, not stubbornness. In a corporate setting, the interests of the many might override the preferences of the few. But if you're an open-source developer and don't want to use $new_thing, I don't think we have the moral high ground to force you.


> In your own code, a test might save you debugging time once in a blue moon

It’s much more than this. You feel it when you make a change and you are super confident you don’t have to do a bunch of testing to make sure everything still behaves correctly. This is the main thing good automated tests get you.
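
To make that concrete, here's the sort of test I mean - a minimal sketch in Python/pytest, where apply_discount is a made-up stand-in for whatever function you're about to refactor:

    import pytest

    def apply_discount(total, rate):
        # Hypothetical function under test; imagine it lives somewhere in your codebase.
        if not 0 <= rate <= 1:
            raise ValueError("rate must be between 0 and 1")
        return total * (1 - rate)

    def test_discount_is_applied():
        # Pins down current behavior, so a later refactor is verified in seconds.
        assert apply_discount(total=100.0, rate=0.10) == pytest.approx(90.0)

    def test_out_of_range_rate_is_rejected():
        with pytest.raises(ValueError):
            apply_discount(total=100.0, rate=1.5)

Run pytest after the refactor and you know in seconds whether the behavior you cared about still holds, instead of re-checking everything by hand.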


What are 20 dud languages that have been hyped on HN? Not meaning to snark, serious question.


Early compilers really did suck. They were long term big wins for sure, but it wasn't unreasonable for someone who was really good at hand assembly, on tightly constrained systems, to think they could beat the compiler at metrics that mattered.

Compilers did get better, and continue to--just look at my username. But in the early days one could make very strong, very reasonable, cases for sticking with assembly.


> Remember how people writing assembly thought compiled languages would rot your brain!

Well, how would you describe the web apps of today if not precisely as brainrot?

> They don't care about actual productivity, they care about feeling productive

Funny you'd say that, because that describes a large portion of "AI coders". Sure they pump out a lot of lines of code, and it might even work initially, but in the long run it's hardly more productive.

> It's a hard sell when an application moves a button!

Because usually that is just change for the sake of change. How many updates are there every day that add nothing at all? More than updates that actually add something useful, at least.


> Change is always a hard sell to a lot of people, even when it benefits them.

You're assuming that the change is beneficial to people when you say this, but more often than not that just isn't true. Most of the time, change in software doesn't benefit people. Software companies love to move stuff around just to look busy, ruin features that were working just fine, add user hostile things (like forcing Copilot on people!), etc. It should be no surprise that users are sick of it.


As someone who started out a GenAI skeptic, I’ve found the truth is in the middle.

I write a TON of one off scripts now at work. For instance, if I fight with a Splunk query for more than five minutes, I’ll just export the entire time frame in question and have GHCP (work mandates we use only GHCP) spit out a Python script that gets me what I want.
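
To give a sense of scale, these scripts are tiny - something like this minimal sketch (the export filename, the "count 5xx errors per host" question, and the "host"/"status" column names are all made up for illustration; the real ones come from whatever I exported):

    import csv
    from collections import Counter

    def failures_per_host(path):
        # Tally rows whose HTTP status starts with "5", grouped by host.
        counts = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row.get("status", "").startswith("5"):
                    counts[row.get("host", "unknown")] += 1
        return counts

    if __name__ == "__main__":
        for host, n in failures_per_host("splunk_export.csv").most_common(10):
            print(host, n)

It's throwaway code, but it answers the question faster than another round with the query language.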

I use it with our internal MCP tools to review pull requests. It surfaces questions I didn’t think to ask about half the time.

I don’t know that it makes me more productive, but it definitely makes me more attentive. It works great for brainstorming design ideas.

The code generation isn’t entirely slop either. For the vast majority of corporate devs below Principal, it’s better than what they write, and it's basic CRUD code. So that’s where all the hyper-productive magical claims come from. I spend most of my days lately bailing these folks out of a dead-end foxhole GHCP led them into.

Unfortunately, it’s very much a huge time sink in another way. I’ve seen a pretty linear growth in M365 Copilot surfacing 5-year-old Word documents to managers, resulting in big emails of outdated GenAI slop that would be best summarized as “I have no clue what I’m talking about and I’m going to make a terrible technical decision that we already decided against.”


What is GHCP?


It appears to be GitHub Copilot


Ah! I was trying to fit 4 words into the acronym, like “GitHub Hosting Cloud Platform” or something.


It's an excellent point - but a lot of the pressure to use AI in orgs is top-down and I've never seen that with useful tech tools before; they always percolated outward from the more adventurous developers. This makes me wary of the AI enthusiasm, even though I acknowledge that there is some genuine value here.


I felt the same way. The analogy I use is management dictating the tech stack to use across the org. It does not make any sense! They need to stay in their lanes and let engineering teams decide what is best for their work.

Big tech's general strategy is get-big-fast - and then become too-big-to-fail. This was followed by Facebook, Uber, PayPal, etc. The idea is to embed AI into people's daily behaviors whether they like it or not, and hook them. Then, once hooked, developers will clamor for it whether it is useful or not.


> I've never seen that with useful tech tools before

I've seen it all the time. Version control, code review, unit testing, all of these are top-down.

Tech tools like git instead of CVS and Subversion, or Node instead of Java, may be bottom-up. But practices are very much top-down, so I see AI fitting the pattern very well here. It feels very similar to code review in terms of the degree to which it changes developer practices.


Nope, all of those things were dev driven until they'd diffused out as far as management and only then did they start getting enforced top-down. Often in awful enterprise software ways actually.


But that's what I'm saying.

Obviously developers invented these things and initially diffused the knowledge.

But you're agreeing with me that they then got enforced top-down. Just like AI. AI isn't new or different in this respect. Developers started using LLMs for coding, it "diffused" until management became aware, and then it got enforced.

There's a top-down mandate to use version control or unit testing or code review or LLMs. Despite plenty of developers initially hating unit tests. Initially hating code review. These things are all standard now, but weren't for a long time.

In contrast to things like "use git not Subversion" where management doesn't care, they just want you to use a version control.


Sigh, enforced is always top-down, sure, if you want to be pedantic. But normally the process starts with enthusiastic devs, propagates out through other devs until a consensus is reached (e.g. source control is the only sane way), and then management starts to enforce it - often with a crappy enterprise take on the basic idea (I'm looking at you, IBM TeamConnection and Microsoft Visual SourceSafe).

AI seems to have primarily been pushed top-down from management long before any consensus has been reached from the devs on what it's even good for.

This is unusual; I suspect the reason is that (for once) the tech is more suitable for management functions than the dev stuff. Judging from the amount of bulletpointese generation and condensation I've seen lately anyway.


It's not pedantic, it's the very issue being discussed.

And there have been plenty of enthusiastic devs regarding LLMs.

And the idea that these things wait "until a consensus is reached" is just not true. These practices are often adopted with 1/3 of devs on board and 2/3 against. The whole point of top-down directives is that they're necessary because there isn't broad consensus among employees.

It was the same thing with mobile-first. A lot of devs hated it while others evangelized it, but management would impose it and it made phones usable for a ton of things that had previously been difficult. On the balance, it was a helpful paradigm shift imposed top-down even if it sometimes went overboard.


Do you know a lot of devs who, having tried VCS, were against it?


I lived through the transition, so absolutely.

Early VCS was clunky and slow. If one dev checked out some files, another dev couldn't work on them. People wouldn't check them back in quickly, they'd "hoard" them. Then merges introduced all sorts of tooling difficulties.

People's contributions were now centrally tracked and could be easily turned into metrics, and people worried (sometimes correctly) management would weaponize this.

It was seen by many as a top-down bureaucratic Big Brother mandate that slowed things down for no good reason and interfered with developers' autonomy and productivity. Or even if it had some value, it wasn't worth the price devs paid in using it.

This attitude wasn't universal of course. Other devs thought it was a necessary and helpful tool. But the point is that tons of devs were against it.

It really wasn't until git that VCS became "cool", with a feeling of being developer-led rather than management-led. But even then there was significant resistance to its new complexity: how hard it was to reason about its distributed nature, and the difficulty of its interface.


No, not just like AI. The difference is that these things were pushed by people at the bottom for years and run successfully before management at the top caught up. Like, years and years.

AI does not have such a curve. It is top-down from the start.


None of them was top down in companies I worked in at the time. They were all stuff developers read about and then pressured management and peers to start using.

Management caught up and started to talk about them only years later.


Yeah, RCS was what, early 80s? Devs I knew were mostly on CVS by the mid-90s, and around the time Subversion became common (early 2000s) things like PVCS and Visual SourceSafe were starting to be required by management. Perhaps a bit earlier with super technical orgs. That's a much more typical flow.


I think it's coming from both places, it's just that the top-down exhortations are so loud and insistent.

I wasn't around to experience it, but my understanding is that this is what happened in the '90s with object-oriented programming - it was a legitimately useful idea that had some real grassroots traction and good uses, but it got sold to non-technical leadership as a silver bullet for productivity through reuse.

The problem then, as it is now, is that developer productivity is hard to measure, so if management gets sold on something that's "guaranteed" to boost it, it becomes a mandate and a proxy measure.


I think that's a good comparison. I was around in the 90s and I do remember OOP being pushed by all sorts of people who weren't coders. It was being pushed as the "proper" way to code regardless of the language, size, platform, or purpose of the program in question.


We might be in the rare case where the current smoke and mirrors fad in leadership happens to be something actually useful.

Let’s not let the smoke and mirrors dictate how we use the tool, but let us also not dismiss the tool just because it’s causing a fad.


I'm wary rather than skeptical I think. There's clearly value here. Whether we're paying the true costs or not, however, won't be clear until all the VC fumes have cleared.

Much like the internet era actually - obviously loads of value, but picking out the pets.coms from the amazon.coms ... well, it wasn't clear at the time which was which; probably both really (we buy our pet food online), except that only one of them had the cash reserves to make it past the dot-com crash.


AI is the first dev tool that makes a difference that is immediately noticeable even for higher layers, that's why they apply pressure.

The core problem, as OP called out, is change aversion. It's just that for many previous useful changes, management couldn't immediately see the usefulness, or there would've been pressure too.

Let's not forget that well-defined development processes with things like CI/CD, testing, etc. only became widespread after DORA made their positive impact clearly visible.

Let's face it: Most humans are perfectly fine with the status quo, whatever the status quo. The outward percolation of good ideas is limited unless a forcing function is applied.


Execs can suddenly reliably measure productivity? Or does AI just give them the easier-to-measure, short-term benefits?


They certainly realize when work is significantly accelerated. They might miss that it doesn't apply to all kinds of work equally, but they see some things go multiples faster, they see the folks behind those things using AI, and they draw conclusions.


Their view of work is roadmap chicanery, though.


Surprisingly enough, and pretty ironically given that this discussion is about GitHub, the company Dan Luu is talking about there is Microsoft (specifically the SmartNIC team), based on his LinkedIn description of his 2015-2016 job.


Version control has quickly won. It was so popular that people kept writing new systems all the time. CI was popular. Most major open source projects had their own CI systems before GitHub.

"AI" on the other hand is shoved down people's throats by management and by those who profit from in in some way. There is nothing organic about it.


Version control is almost 50 years old. It has very slowly won.

AI adoption is, for better or worse, voluntarily or not, very fast compared to other technologies.


Version control took a while because most early version control systems were brittle and had poor developer UX. Once we got mercurial and git and nice web UIs, the transition was actually pretty fast IMHO.

The same could be true for coding agents too, or maybe not. Time will tell.


.... which is the problem here. The internet took decades. The iPhone didn't change anything this quickly either. We're seeing massive brain rot in many studies, and no real-world data that actually shows productivity gains.

This adoption rate / shoving is insane. It is not based on anything but dollars.


The way I think of it is the difference between financial wealth and real wealth.

No new real wealth may be created, but financial wealth can still transfer from the firms buying this stuff to the large tech firms - thereby creating new financial wealth for big tech stockholders. In the long run the two should converge - but in the short run they can diverge. And I think that’s what we are seeing.


The fervor with which some feel the need to defend AI is what is incredible. Adoption, innovation, impact, not so much.

They attempt to compare it with version control, with sliced bread, with plumbing and sanitation practices. Think of any big innovation and compare AI with it until people give in and accept that this is the biggest, bestest thing ever to have happened and that it is spreading like wildfire.

Even AI wouldn't defend itself this passionately, but it has conquered some people's hearts and minds.


Sounds like a company full of seriously-terrible developers, from which no valid general conclusions can be drawn.

I use AI a lot myself, but being forced to incorporate it into my workflow is a nonstarter. I'd actively fight against that. It's not even remotely the same thing as fighting source control adoption in general, or refusing to test code before checking it in.



