Indeed, people confidently assert as established fact things like "brains are bound by the laws of physics" and therefore "there can't be anything special" about them, so "consciousness is an illusion" and "the mind is a computer", all with absolute conviction but with very little understanding of what physics and maths really do and do not say about the universe. It's a quasi-religious faith in a thing not fully comprehended. I hope in the long run some humility in the face of reality will eventually be (re)learned.
If your position is that brains are not actually bound by the laws of physics -- that they operate on some other plane of existence unbound by any scientifically tested principle -- then it is not only your ideological opposites who have quasi-religious faith in a thing not fully comprehended.
This. People do not understand the implications of the most basic facts of modern science. Gravitation is instantaneous action at a distance via an "occult" force (to quote Newton's contemporaries).
> Software development jobs must be very diverse if even this anti-vibe-coding guy thinks AI coding definitely makes developers more productive.
As a Professor of English who teaches programming to humanities students, the writer has had an extremely interesting and unusual academic career [1]. He sounds awesome, but I think it's fair to suggest he may not have much experience of large scale commercial software development or be particularly well placed to predict what will or will not work in that environment. (Not that he necessarily claims to, but it's implicit in strong predictions about what the "future of programming" will be.)
Hard to say, but to back his claim that he's been programming since the '90s, his CV shows he was working on stuff clearly beyond basic undergraduate skill level since the early 2000s. I'd be willing to bet he has more years under his belt than most HN users. I mean, I'm considered old here, in my mid 30s, and this guy has been programming for most of my life. Though that doesn't necessarily imply experience, or more specifically, experience in what.
That said, I think people really underappreciate how diverse programmers actually are. I started in physics and came over when I went to grad school. While I wouldn't expect a physicist to do super well on leetcode problems, I've seen those same people write incredible code that's optimized for HPC systems, and they're really good at tracing bottlenecks (it's a skill that translates from physics really, really well). Hell, the best programmer I've ever met got that way because he was doing his PhD in mechanical engineering. He's practically the leading expert in data streaming for HPC systems and gained this skill because he needed more performance for his other work.
There are a lot of different types of programmers out there, but I think it's too easy to assume the field is narrow.
I played with punch cards and polystyrene test samples from the Standard Oil Refinery where my father worked in the early 70s, and my first language after BASIC was Fortran 77. Not old either.
I grew out of the leaking ether and basaltic dust that coated the plains. My first memories are of the Great Cooling, where the land, known only by its singular cyclopean volcano became devoid of all but the most primitive crystalline forms. I was there, a consciousness woven from residual thermal energy and the pure, unfractured light of the pre-dawn universe. I'm not old either.
Thanks. I meant it more in a joking way, poking fun at the community. I know I'm far too young to have earned a gray beard, but I hope to in the next 20-30 years ;-) I've still got a lot to learn before that happens.
Maybe. But also, what I thought was a gray beard in my early 20s is very different from what I think a gray beard is now. The number of those I've considered wizards has decreased, and I think this should be true for most people. It's harder to differentiate experts as a novice, but as you get closer the resolution increases.
Both definitely contribute. But at the same time the people who stay wizards (and the people you realize are wizards but didn't previously) only appear to be more magical than ever.
Some magic tricks are unimpressive when you know how they are done. But that's not true for all of them. Some only become more and more impressive, and can only truly be appreciated by other masters. The best magic tricks don't just impress an audience, they impress an audience of magicians.
I think as I gain more experience, what previously looked like magic now always turns out to look a whole lot more like hard work, and frustration with the existing solutions.
The 30s is the first decade of life that people experience where there are adults younger than them. This inevitably leads people in their 30s to start saying that they are "old" even though they generally have decades of vigor ahead of them.
38 here. If you didn't suffer Win9x's 'stability', then editing X11 config files by hand, getting mad with ALSA/Dmix, and writing new ad-hoc drivers for weird BTTV tuners by reusing known old ones for $WEIRDBRAND, you didn't live.
I was greeted with blank stares by the kids on my team when they wanted to rewrite an existing program from scratch, and I said that would work about as well as it did for Netscape. Dang whippersnappers.
Depends what you mean by "old". If you mean elderly then obviously you're not. If you mean "past it" then it might reassure you to know the average expecting mother is in her 30s now (in the UK). Even if you just mean "grown up", recent research [1] on brain development identifies adolescence as typically extending into the early thirties, with (brain) adulthood running from there to the mid sixties before even then only entering the "early aging" stage.
For my part, I'm a lot older than you and don't consider myself old. Indeed, I think prematurely thinking of yourself as old can be a pretty bad mistake, health-wise.
That was such a strange aspect. If you'll excuse the tortured analogy of comparing programming to woodworking: there is a lot of talk about hand tools versus power tools, but among people who aren't in a production capacity--not making cabinets for a living, not making furniture for a living--you see people choosing to exclusively use hand tools because they just enjoy it more. There isn't pressure along the lines of "you must use power tools or else you're in self-denial about their superiority." Well, at least among people who actually practice the hobby. You'll find plenty of armchair woodworkers in the comments section on YouTube. But I digress. For someone who claims to enjoy programming for the sake of programming, it was a very strange statement to make about coding.
I very much enjoy the act of programming, but I'm also a professional software developer. Incidentally, I've almost always worked in fields where subtly wrong answers could get someone hurt or killed. I just can't imagine either giving up my joy in the former case or abdicating my responsibility to understand my code in the latter.
And this is why the woodworking analogy falls down. The scale at which damage can occur due to the decision to use power tools over hand tools is, for most practical purposes, limited to just myself. With computers, we can share our fuck ups with the whole world.
so what you are saying is that for production we should use AI, and hand code for hobby, got it. Lemme log back into the vpn and set the agents on the Enterprise monorepo /jk
Another key difference is that wood itself has built-in visual transparency as to the goodness of the solution - it is pretty easy to tell that a cabinet is horrible (I do get that there are defects in wood-joining techniques that can surface after some time due to moisture, etc. - but still, a lot of transparency out of the box). Software has no such transparency built in.
The advantage of hand coded solutions is that the author of the code has some sense of what the code really does and so is a proxy for transparency, vibe coded solutions not so much.
I mean, it is 2025 and customers are still the best detectors of bad software, beating every quality apparatus to date.
Now we have LLMs, the Medium Density Fiber Board of technology. Dice up all the text of the world into fine vectorized bits and reconstitute them into a flimsy construct that falls apart when it gets a little wet.
The world of the Digital Humanities is a lot of fun (and one I've been a part of, teaching programming to Historians and Philosophers of Science!) It uses computation to provide new types of evidence for historical or rhetorical arguments and data-driven critiques. There's an art to it as well, showing evidence for things like multiple interpretations of a text through the stochasticity of various text extraction models.
From the author's about page:
> I discovered digital humanities (“humanities computing,” as it was then called) while I was a graduate student at the University of Virginia in the mid-nineties. I found the whole thing very exciting, but felt that before I could get on to things like computational text analysis and other kinds of humanistic geekery, I needed to work through a set of thorny philosophical problems. Is there such a thing as “algorithmic” literary criticism? Is there a distinct, humanistic form of visualization that differs from its scientific counterpart? What does it mean to “read” a text with a machine? Computational analysis of the human record seems to imply a different conception of hermeneutics, but what is that new conception?
Exactly. I don't think people understand anymore why programming languages even came about. Lots of people don't understand why a natural language is not suitable for programming, and by extension for prompting an LLM.
Minidiscs proved that people were comfortable with lossy compression. It was to be many years before lossless audio became a thing again.
It always amused me how we were told the difference between lossless and lossy compression was undetectable to the human ear up until the big streaming services started providing lossless and even high res, at which point it was suddenly the best thing since sliced bread. However you feel about the audio, one way or another it's gaslighting.
Personally, on most music I can't tell decent quality lossy from lossless, but I listen to a lot of choral polyphony and also perform it, so I have a good ear for it. When you're listening to 16 or in some cases up to 40 voices and can follow individual lines (single voices recognisable as particular people), you can hear it, and I disliked minidisc and mp3 players for that reason. High res, though, makes no difference at all as far as I can tell.
They did no such thing. Sales numbers were tiny outside Japan. People only tolerate lossy compression when that’s all they are offered. Hence the streamers introducing lossless options years after launch due to demand.
Minidiscs were briefly widely available here in the UK and were only short-lived because they were almost immediately replaced by iPods and other mp3 players, also with lossy audio only. Nearly two decades went by during which the only portable music options not widely considered obsolete were lossily compressed, despite the fact you could still buy CDs and listen to them on the move. It's disappointing (and I certainly don't agree with it) but the vast majority of people do tolerate lossy compression even when there are lossless alternatives that are only marginally less convenient. Minidiscs and iPods proved it comprehensively and Bluetooth earbuds have done so again.
Edit: I'm very glad lossless is finally mainstream again but I'd be more inclined to believe it's due to "demand" if I weren't routinely the only person on the train wearing wired earphones.
It's a dubious comparison but not because of "per year". The comparison is implicitly to one year's worth of GDP, which is a currency amount.
It's dubious because whereas a year's worth of GDP has some claim to actually being the value of something (with many caveats but it's engineered to behave like that as much as possible), market cap isn't. It's the amount all the shares would cost if someone bought them all in one go for the price some shares were most recently purchased for, which would never happen.
HashiCorp was bought for $35 per share at a time when it was trading a little above $25. I'm not saying crazy market caps aren't a sign of a bubble (not sure how you'd read that in my comment), just that market cap is not the value of the company.
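To make the arithmetic concrete, here is a toy sketch of the difference (the share count is an assumption purely for illustration, not HashiCorp's actual figure):

    # Hypothetical numbers for illustration only; not HashiCorp's real share count.
    shares_outstanding = 200_000_000   # assumed
    last_trade_price = 25.0            # price a few shares most recently changed hands at
    acquisition_price = 35.0           # what the buyer actually paid per share for the whole company

    market_cap = shares_outstanding * last_trade_price          # $5.0B implied by marginal trades
    acquisition_value = shares_outstanding * acquisition_price  # $7.0B paid for control

    print(f"market cap ${market_cap / 1e9:.1f}B vs acquisition ${acquisition_value / 1e9:.1f}B")

The marginal price only tells you what the last few buyers paid, not what the whole company would fetch if every share had to find a buyer at once.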
Variation in price doesn't prove that the market cap is not (a good estimate of) the company value for highly liquid stocks.
Value is subjective. Stock prices measure people's perception of the value. Your thesis that it is incorrect can only come from two places (I think):
1. Dumb money - the market can't see that XYZ is overvalued or undervalued. My rebuttal there is that XYZ has nonetheless been valued by a completely open, continuous auction that people are not restricted from participating in.
2. The parts are less than their sum. This may be somewhat true... total control over a company may be more (or less) valuable than splitting it up. But I don't think it's an order of magnitude. And if it is, it's because the value to you isn't the value to me (the value of RAM to a gamer < value of RAM to OpenAI).
Well, based on that last share purchase, we have incontrovertible proof that there was indeed one person in history who thought it was worth 3x GDP.
And the fact that, in the entire BSSTC shareholder universe, there wasn't any noticeable sell volume, or a registered sell limit order, at a lower value leading up to the last peak.
That must have been a rough trade, but someone got something out at the last moment.
1. No, we don't have proof that there was one person who thought it was worth 3x GDP. We have proof that there was one person who thought a 0.001% share of the company was worth 0.003% of GDP or whatever. They could think it was worth that much for plenty of reasons; maybe they thought the share price would grow a bit more before collapsing so they could make some profit, maybe they invested just to be an investor and have a say in investor meetings, or however things worked back then. Maybe it was a status thing.
2. Why are we using the opinion of one random person to determine the value of a company?
> Why are we using the opinion of one random person to determine the value of a company?
Please don’t invent strawman positions and reflect them on me. I said nothing of the kind.
Of course the company’s worth wasn’t what is implied by the peak trade.
But that price wasn’t set just by the peak buyer. Out of all the other shareholders and shares, nobody was offering a sale on that venue at a lower price.
Outside of all the idiosyncratic psychology of each individual, in aggregate, the market did “think” it wasn’t worth selling leading up to that point.
Then confidence began breaking.
Mania is mania. Bubbles are bubbles. They are not rational, but they are real, not the result of one person or two. Not the result of one peak trade.
Large groups of people start thinking something can’t come down. For a moment in time, a lot of people thought it wouldn’t (at least “yet”).
How far mania goes is what peak price reveals. That price is still a measure of the whole market at that moment.
To add to this, this type of thing happens all the time in crypto.
A coin will release 1/1000000000th of its eventual supply, have some trades at 10c, and then claim the value of the entire supply as the headline value.
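A quick back-of-the-envelope with made-up numbers (a hypothetical token, not any particular coin):

    # Hypothetical token; numbers chosen only to mirror the comment above.
    total_supply = 1_000_000_000_000             # tokens that will eventually exist
    circulating = total_supply // 1_000_000_000  # one billionth actually released: 1,000 tokens
    last_trade = 0.10                            # dollars, from a handful of tiny trades

    headline_value = total_supply * last_trade   # $100,000,000,000 "fully diluted" figure
    cash_that_moved = circulating * last_trade   # $100 that actually changed hands

    print(f"headline ${headline_value:,.0f} vs traded ${cash_that_moved:,.2f}")

A hundred dollars of actual trading gets reported as a hundred-billion-dollar project.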
Thanks to EXWM (not mentioned here), emacs has been my literal X window manager for several years. I installed it as a lark, thinking there's no way this will work properly, and just never stopped using it. It's brilliant.
EXWM is great, having the same flow to manage X applications as for emacs buffers is a huge benefit. My only concern is if X11 will be maintained sufficiently into the future to keep using it, currently there is no Wayland support in EXWM.
I don't mean adding threading to existing functionality, and I mostly wouldn't want that. I very strongly prefer emacs' behaviour of queueing up my input events and processing them deterministically regardless of how long it takes to get to them over eg. the JetBrains behaviour where something can happen asynchronously with my input events that can change their meaning depending on when it happens.
What I mean is having threading capabilities available for things that want to (and should) use them. AIUI some support for that was added in emacs 26, so it might already be good enough.
The relevance is that EXWM is single threaded, so the window management blocks when emacs does. I don't find that much of a problem with EXWM but I doubt it would fly for a Wayland compositor, though perhaps the separate server used in that emacsconf talk sidesteps the problem.
I once read a comment here or on reddit explaining that the X11 developers moved to Wayland because the X11 code had turned into an unmaintainable mess that couldn't be worked with anymore. So the reasons are not drama, but just plain old tech debt.
This pre-packaged talking point is often repeated without evidence. The vast majority of X.org developers, including all of the original ones, simply moved to other venues at one point or another. Only a few, like Daniel Stone, have made contributions to both. And it shows in how many lessons had to be re-learned.
What is your evidence? A quick search on google (and the git commits) would show you that many wayland developers are significant former xorg developers.
1. Kristian Høgsberg, the founder of wayland, did all the DRI2 work on xorg before becoming frustrated
2. Peter Hutterer was a main xorg developer and has been behind the wayland input system
3. Adam Jackson, long time xorg maintainer essentially called for moving on to wayland https://ajaxnwnk.blogspot.com/2020/10/on-abandoning-x-server... (I found that he was involved in wayland discussions, but not sure if he contributed code)
4. you already mentioned Daniel Stone
The only main xorg developer not involved in wayland arguably could be Keith Packard, although he made a lot of the changes for xwayland so I'm not sure if it is correct to say he did not have wayland involvement.
So who are the "vast majority of X.org developers"? I think people read about the couple of names above and then think, "well, there must have been hundreds of others", because they assume xorg was like the linux kernel. AFAIK xorg only ever had low tens of active developers.
The drama was mostly over whether or not Wayland should have been the replacement. AFAIU, everyone agreed X11 development was effectively unsustainable or at least at a dead end.
So is X11, though the reference implementation of X11 is also widely agreed to have some serious problems going forward on top of problems with the protocol itself.
You can also use EXWM in Xephyr, so you can have an emacs window with its own controlled windows instead of replacing the whole DE/window-manager. I suppose this doesn't work with multiple frames though.
I have been experimenting with xdotool windowmap/windowunmap and override_redirect (and maybe LD_PRELOAD?) to try to get something like EXWM to work without creating another X server, by capturing windows. I'm doing this in vim though.
This was incredible to watch, and I have to chuckle at this title. It's obvious why the webcam matters, with people round the world watching, but the destruction of a webcam is such a tiny thing in comparison to the eruption itself it's strangely funny.
And that hasn't changed much since. At work and at home, I'm usually looking at emacs with no tab or menu bar, full screen on all monitors, with everything else (browser, etc.) a virtual desktop switch away: exwm at home; at work, one terminal emacsclient over ssh per monitor with a single daemon on a linux server (accessed from Windows). With many minor variations this is how my desktop has looked since my first programming job, which coincidentally was in 2002, but the details of the setup have changed a lot. The bit that has remained constant is that all I want on my monitor(s) when I'm programming is code.
Edit: Probably the most visible change is better fonts and font rendering.
Edit 2: To expand on "all I want is code": let's say there is a menu bar with maybe 10 menus and 100 or so items, and a project navigator thingy, and a compiler output window. I would much rather these things not take up permanent space on my screen. Every one of them shows information/commands that I can access with a key combination and in some cases some fuzzy completion after hitting a key combination. Any decent editor can do this and you can learn it in an afternoon, and if you're going to spend the next couple of decades in front of it it's worth getting rid of the pixels permanently allocated to advertising "you can do this thing".
It has a price for the person with the condition. For the person developing the cure it does not (except perhaps opportunity cost, money not made that could have been), whereas killing their patients can have an extremely high one.