I mean, fission is already a miracle. If you described the tech to people in the 1800s and told them we just keep using fossil fuels, they would laugh at you.
To me, one of the great tragedies of our time is that we could probably solve fusion if we just invested like 50 billion or something in it. Instead, we waste so much effort on things like quantum computers, which seems insane.
Solving fusion could usher in the golden age that atomic power failed to produce
The thing that worries me is that it's still not obvious that fusion wouldn't also be the extremely expensive, slow to build boondoggle that fission is.
Because it has the word nuclear nearby and we'll be surprised at how ignorant our regulators can be, or because it'll turn out to be less safe than we think and it'll get red-taped to death like fission did, or some non-regulatory reason?
There are fundamental reasons why fusion, at least the DT variety, will be more expensive than fission. It has to do with the inherently low volumetric power density of DT fusion reactors. ITER is 400x worse than a PWR; ARC is 40x worse. So the reactor itself becomes much larger and, because it's also more complex, much, much more expensive. The other putative advantages of fusion cannot make up for this. And fission itself is too expensive, so DT fusion loses to a loser. It's a double loser.
If you want to see where energy will come from in a deregulated environment, look at Texas. New grid capacity there is solar and batteries. Even gas isn't being installed much; the Texas state government put down $7.2B to fund more gas capacity yet this money has been mostly spurned, I think < $400M has been taken. New nuclear is completely out of the picture there.
Because most plans for it still involve attaching a giant steam boiler to turn the heat it produces into electricity and that bit alone will cost more than renewable alternatives.
Call me ignorant, but I’d rather we focus on stuff like increasing photovoltaic cell efficiency (and possibly cost-efficiency) by the 40%-60% we’re leaving on the table by keeping them fully loaded and cooking.
Simple physics upgrades, like rotating cones, or lines of panels to swap with each other in Arizona-parking-lot conditions, can take us further, faster, and cheaper.
Nuclear is only safe after and during spending a bunch of money to keep it that way.
That makes me uncomfortable, because we’ve never had more instability in my lifetime, as far as “wildly important things not being addressed” goes.
Fusion will be slow to commercialize. Proof of concept is going to be much harder than for fission reactors. But if and when POC is attained, building commercial fusion reactors will not have nearly as much project risk, much less waste management risk, no proliferation risk, and much less financial risk in decommissioning. If you screw it up you can expensively damage your reactor, but you don't spread fallout, and you don't have to guard your waste like it's plutonium.
Neither fission nor fusion are going to put any juice on the grid before the AI bubble resolves, and then the financial calculations will be totally different.
Fusion likely wouldn’t solve much. Fuel and disposal costs are a small part of nuclear costs. It’s amortized capex for the extremely expensive plant, then maintenance. Fusion would make both of those costs worse
Exactly right. Fusion's only hope (and IMO it's not a great one) is a system where entire parts of a fission plant can be deleted. Helion does this by not needing turbine + generator, but doing direct conversion of plasma energy to electrical energy. But even so, they have to struggle with capex and reliability. Their reactor is coupled with a huge bank of capacitors (Zap has a similar problem; it's startling how large the capacitor bank is compared to their small fusion cell.)
The great tragedy is that we already have a practically unlimited and environmentally safe source of energy, which is nuclear fission. And we simply don't use it at a significant scale because of irrational fears about meltdowns.
It is not a rational aversion. Nuclear is currently 10,000 times less dangerous per unit of energy produced than the largest sources of energy: coal, oil and natural gas. We could afford to let nuclear get 10x less safe, so that it becomes vastly less costly to deploy, and a very possible result would be that it would replace the largest sources of energy, and would still be three orders of magnitude less dangerous than the sources of energy it replaced.
Regulation is inescapable, because the maximum damage from a nuclear accident would exceed the value of the company operating the reactor. A rational business treats any liabilities larger than what it could pay as equivalent, regardless of how large they could become, and hence will underinvest in safety measures.
And I'm sure you will agree there is a great and sorry history of nuclear efforts failing to achieve their cost targets. At this point, it is clear that such targets are sales numbers, not something one should actually believe. One cannot make this history go away just by wishing, as nuclear advocates like yourself seem wont to do.
I agree fossil fuels should go, but that's not an argument they should be replaced by nuclear. It's the argument nuclear advocates used to be able to lie back and comfort themselves with, but then you all got blindsided by renewables and storage zooming past you. You have to address those now, not the old competition you wished you were still running against.
Of course regulation is necessary. My point is that current nuclear regulation is disproportionate to actual risk, and that this mismatch has made nuclear uncompetitive relative to energy sources that are demonstrably far more dangerous on a per-unit-of-energy basis.
Even compared to solar, nuclear has a stronger safety record when measured by deaths per TWh, and this is when taking into account the worst nuclear catastrophe, Chernobyl. I am not arguing that the future should be all nuclear, or even predominantly nuclear. I am arguing that the present regulatory regime reflects a mispricing of risk, particularly relative to hydrocarbons, and that this has pushed us into a suboptimal energy mix.
On cost overruns: the strongest correlation is with regulatory ratcheting, which also had harmful second-order consequences for cost control from the failure to reach larger-scale construction, such as bespoke designs and loss of construction continuity.
That’s not true. They are physically massive, incredibly complicated machines with all kinds of large scale pressure welding, forging, containment systems, 100s of miles of plumbing, and other serious large scale engineering. They will never be anywhere close to as cheap as something as dead simple & mass manufacturable as solar.
If they were intrinsically costly, they would have been costly in the U.S. in the 1960s, or in France in the 1970s-90s, or in South Korea today. It is because of regulatory ratcheting, and the effects of that (both direct and second order), that costs escalated.
NPPs are intrinsically Big Projects. The western world is almost universally suffering from Baumol’s cost disease - we cannot build Big Projects at a reasonable price anymore. Subways, bridges, NPPs, you name it - all cost many multiples of their inflation adjusted 1970 cost. And that’s before they inevitably blow their budget by 2-3x. Until you can somehow fix the labor / housing / management cost issues NPPs will not be affordable, even if you relax nuclear specific regs.
Mass manufactured things like solar and wind turbines do not suffer this.
Maybe it'll be a model running on a quantum computer that points us towards high temperature superconductivity, which would simplify the plasma confinement problem and unlock fusion for us.
- quick, in response to a clicked button -> why not just show feedback on the button?
- quick, in response to a keyboard shortcut -> ok
- seconds or more after an action, say, if your import/export is done -> fine, but have a more persistent notifications inbox or send the user an email too, because if you dismiss the toast, how do you get back to that result?
- when you've just navigated to a page, as a way to present an alert or advisory about the new page -> if it's important enough, why not show it as a persistent alert on the page itself?
Far too many toasts are used for the last use case. Part of the reason for this, I think, is that if you detect something weird in a React callback, you'd need to wire up a whole new state variable to show it, vs. just calling a global toast() function at the moment you learn about the weird thing. But is it really much more work than toasting to roll something like const {alertElement, addAlert} = useAlerts()? And have it speak your design language?
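For what it's worth, here is a minimal sketch of what such a hook could look like in React/TypeScript. The shape const {alertElement, addAlert} = useAlerts() comes from the comment above; the Alert type, the severity levels, and the markup are purely illustrative and not from any particular library:

    // Hypothetical sketch of a useAlerts() hook (not a real library API).
    import { useCallback, useState } from "react";

    type Alert = {
      id: number;
      severity: "info" | "warning" | "error";
      message: string;
    };

    let nextId = 0;

    export function useAlerts() {
      const [alerts, setAlerts] = useState<Alert[]>([]);

      // Call this at the point where you learn about the "weird thing",
      // instead of reaching for a global toast().
      const addAlert = useCallback(
        (message: string, severity: Alert["severity"] = "info") => {
          setAlerts((prev) => [...prev, { id: nextId++, severity, message }]);
        },
        []
      );

      const dismissAlert = useCallback((id: number) => {
        setAlerts((prev) => prev.filter((a) => a.id !== id));
      }, []);

      // Render this once, wherever the page's persistent alert area lives,
      // so it can speak your own design language.
      const alertElement = (
        <div role="alert" aria-live="polite">
          {alerts.map((a) => (
            <div key={a.id} className={"alert alert-" + a.severity}>
              <span>{a.message}</span>
              <button onClick={() => dismissAlert(a.id)}>Dismiss</button>
            </div>
          ))}
        </div>
      );

      return { alertElement, addAlert, dismissAlert };
    }

A page component renders alertElement somewhere persistent and calls addAlert(...) from whatever callback discovers the problem, so the message stays on the page until dismissed rather than expiring like a toast.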
Your 50-tabs-open multitasking users will appreciate you.
IMO a toast showing up shouldn't be a direct/immediate response to user action at all, ever. Toasts are purposely designed to steal attention. Don't try to steal attention from the user performing an action just to confirm that the action occurred; you already had their attention!
> if it's important enough, why not show it as a persistent alert on the page itself?
Because it's a crutch against bad design in an era when many companies don't employ designers, or arguably even as an exploit to be leveraged by marketing to try to brute-force the conversion of users to sales.
While there's nothing inherently wrong with them, I've become so accustomed to toasts being misused/abused that my instinctual response to that visual stimulus is that it's focus-friction and tantamount to being spammed, even when they're being used in benign or debatably useful ways.
I always just used it to confirm your last action on a POST -> GET sequence. E.g. confirming that your save went through or was rejected (the error itself embedded & persisted in the actual page). Or especially if saving doesn’t trigger a refresh, so success would otherwise be silent (and thus indistinguishable from failing to click).
You could have the button do some fancy transformation into a save button but I prefer the core page being relatively static (and I really don’t like buttons having state).
It’s the only reasonable scenario for toasts that I can think of though.
Maybe I'm just an aging cynic, but I'm waiting for the other shoe to drop when it comes to GLP-1s. There have been so many claims of positive benefits that it almost seems too good to be true. With them being so expensive, the producers have every incentive to upsell using any study they can get their hands or money on.
There have been some. I've heard about eyesight related issues. A quick google found this article [0] where results showed that people using GLP-1 drugs were 68.6 times more likely to develop certain types of vision problems.
This is also an extremely rare vision problem. So absolute numbers are very tiny. The absolute numbers for diabetes, weight related problems, etc far dwarf this.
Right. On the whole I think these things are incredible... looking to try them myself after reading here on HN the other day about them working for all sorts of distractions. Just wanted to point out it's not all sunshine and rainbows, which would certainly be suspicious.
Literally too much water or aspirin can kill you. Some people are allergic to avocados. Driving kills huge numbers of people daily. Everything is about risk/reward, and looking at the macro picture. And right now the comorbidities for obesity are terrible in huge absolute numbers… something that GLP-1’s can take down in significant magnitude. Unless we learn that the majority of users end up with something worse than obesity, they’re a huge win for public health.
A large drop in HbA1c does cause early worsening of diabetic retinopathy. Regardless of how it's achieved. So expect some noise in generalized data.
Personally, I went from mild background retinopathy to PDR and getting laser treatment in about 3 months. My ophthalmologist (who has an academic background) didn't really know if this diagnosis had the same "quality" of someone who "naturally" progresses to PDR, but some studies say it's transient.
A lot of the issues are hydration-related, and I wouldn’t be surprised if the eye ones are, too. Some water intake is from food, so if you eat less, you need to drink more. If you also tend to drink with food, and you’re eating less, you may drink less instead of the more that you need to be. Add in a generally dulled “I crave something” sense and you’ve got a recipe for not just going all day without eating, but also without drinking.
I’m not a doctor, but iirc water consumed along with a meal is absorbed more slowly and therefore results in longer-lasting hydration than just a bare glass of water on an empty stomach. Of course, eating might add more material that encourages dehydration, so I don’t know if you’d get a net benefit from a bag of teriyaki beef jerky, say.
It's a little suspicious... 68x risk with semaglutide, no significant risk with tirzepatide. Case-control studies that merely search these databases are only really useful for hypothesis generation.
GLP-1s have been prescribed for like 20 years, but have been limited more to diabetics and extreme cases. So there is pretty good data. Not to say there aren't going to be side effects in some population sample, but we need to compare that with obesity and diabetes (which is a very bad disease).
But also do long-term studies; one thing I gathered (anecdotal through the internet so take it with a grain of salt) is that people revert to their old habits when they stop taking it. Not always, of course, and I think using it should always be done with guidance of a dietician etc to make lifestyle adjustments if needs be, but it did imply that long term usage is a factor that needs to be considered.
SSRIs have been prescribed for 37 years, and society is just starting to understand that under current prescribing protocols, they do more harm than good.
Also, isn't the dose used to treat obesity 3 times higher than the dose used over those 20 years to treat diabetes?
Getting people to eat more broccoli is almost entirely upside. Sure a handful of people will be allergic or whatever, but on a population level some interventions are just one positive after another, and there's no reason it has to be a deal made with the devil.
Actually there is a very real effect on which foods you find appealing and which ones are kind of gross. It’s a thing the food companies have been studying, and their own studies show that people on GLP1s tend to skip the junk food aisle and head towards the produce section instead.
Oddly enough semaglutide is making me crave sugar more. It might be the frequent sensation of having low blood sugar. Idk.
It does make me choose more dense meals though since I know I can't eat that much due to delayed gastric emptying. But I have to budget some room for prunes to counteract the constipation. It definitely makes you think about what you eat.
I can confirm that. On GLP-1s (when they worked for me, anyway), I'd routinely think "pizza? Bleh, so fatty, I'd really like some chicken breast with roast potatoes instead right now".
Oh no, you have torn through the flaws in my argument like bullets through paper, however will I live this down? Unless I clearly meant "it makes previously-desirable food undesirable", anyway.
I was not trying to tear your argument down. The comment you replied to was about carbs being specifically disgusting and in my head potatoes are the runner up to bread for classic examples of carbs. I was simply asking about what seemed like a contradiction. I have been looking into GLP1s and have not seen/heard people mention that GLP1 make carbs gross.
I think it varies per person. For me, it didn't specifically make carbs gross, but it did make unhealthy food less palatable. I think that's what the GP was talking about as well, they were just a bit more specific.
It really depends on the person, though. They worked for me for a while and don't work now, but I'm a small minority, from what I've heard from people. When they worked, they were great.
The automobile's net effect on behaviours has (as others have noted) evolved over that period, as has its net effect on transportation and urbanisation patterns.
Up until the end of WWII, automobile ownership was relatively limited. It was just beginning to accelerate at the beginning of the war (in the US), but rationing and war-time defence manufacturing curbed that trend, and sustained rates of alternative transport, particularly rail.
Post-war, there was a mass-consumer blitz, much of it revolving around automobiles, and changes such as commuter suburbs (based around automobiles), superhighways, self-service grocery stores, shopping malls, and strip-mall based retail development began, all trends which evolved over the next 50+ years.
In the 1970s and 1980s, it was quite common for children to walk or ride bikes to school, or take a school bus (which involved walking several blocks to a nearby stop). Since the late 1990s, far more seem to be ferried in private cars, usually by parents, who spend a half-hour or more in pick-up lines. It's not uncommon for children walking along neighbourhood streets to be reported (and collected) by authorities out of concern for their safety, and for their parents to be subject to investigation or worse. Suburban, and even urban, development patterns have trended toward ever lower density and far more bike- and pedestrian-unfriendly modes.
Recreational, occupational, educational, and other transport and activity patterns have largely shifted away from self-powered movement (walking, cycling, etc.) and toward motorised options (sometimes including e-bikes, electric scooters, or equivalents, though most often automobiles).
Societal change and consequent impacts take time and have long lags.
I don’t know. Having listened to a number of interviews with some of the founders in this area of drug research, I came away with much higher respect and significantly less cynicism toward big pharma. Novo Nordisk is even controlled by a nonprofit foundation.
I'm sure there will be negative side effects but the main outcome of these drugs is that you eat less. Many of us have trained ourselves to eat at a frequency and volume way beyond what is really required to keep our body functioning. This leads to weight gain in most people and thus is the focus but even independent of weight there are effects of continuously eating poor quality foods which are unlikely to be good. So I'm not surprised that there are all these miraculous sounding positive side effects to drugs which prevent most people from putting their metabolic system under near constant load.
When the side effects are better understood I suspect for the average person, eating less would be a net benefit to their overall health - _even if they don't lose any weight_.
I’m sure some negative effects will be found but from what I understand lowering your weight outweighs (no pun intended) a lot of possible side effects. Closest thing to a miracle cure and quality of life improvement
Haven't you been reading Hacker News for the past 10 years? Sugar has been implicated in pretty much every major late-life disease, and the closest thing to a cure before GLP-1 agonists was fasting.
I am sort of in your boat in waiting to see what may come. There are a few very rare conditions, but the benefits seem to outweigh (ha, I will take the pun!) the downsides.
While it might mean the incidence rate of some things goes up, the things it reduces are far more impactful and far more likely to involve mortality. Sort of like how chemotherapy is poisonous but potentially gives better long-term odds, only chemo is far more extreme than GLP-1s.
Time will tell but so far it is looking kind of good with a few lesser issues.
Basically, the gastro-intestinal side effects are the biggest issue, along with CVS (not the store) and possibly eye problems.
That said, the negative side effects look to be incredibly rare and manageable (including via stopping treatment) -- and the positives are quite tremendous.
It's not a magic drug, but it is the first of its kind with such a skew to the positive on side effects.
Most medications have negative side effects because otherwise our bodies would already have whatever changes they make through evolution. My personal theory (based on nothing but my own intuition) is that GLP-1s are an adaptation to the modern world that evolution hasn't caught up with yet.
And we know what the adaptation is: calorie constraint. We evolved in a calorie-constrained environment. We don't live in one now. Our set point for desire to eat is clearly too high. None of this means that GLP-1 agonists don't have other side effects, of course.
This sounds like the argument during the pandemic, "If masks work, then why didn't we evolve permanent masks? Checkmate atheists." Though I do understand the impulse that evolution is working towards some unknowable perfection because of how I was taught evolution during high school, that is, of course, not how it works.
Given all the potential money, if there are issues, I expect it to go down like the tobacco companies back in the day: actively suppressing undesirable research by harassing researchers, influencing peer-review journals and/or funding research casting doubt on any findings of harm. Chances are that any negative effects won't be obvious until it's too late. Look at microplastics: they have been around for just over a century and it's only now that we are starting to realize that they have several negative effects.
I agree. I think it's unlikely that negative effects can go unnoticed for very long, but in the short term I'm only like 97% sure we're getting the full story.
That said, it's probably certain enough for me to be open enough to using them now, if my doctor recommends it.
Several members of my family are on GLP-1s, both for glucose control and for weight loss, taking different brands (Wegovy, Ozempic and others). They all mention the terrible secondary effects when you eat something "forbidden" (tacos, cake or ice cream, e.g.).
Also, it causes constipation apparently, which for most of them is not that much of an issue, but given that I have IBS-C, I'm happy to not have to take it.
I'm surprised that tacos are a big deal. I'd have thought that the filling (meat, cheese, veggies, maybe beans) would mostly outweigh the carbs from the shell.
More anecdata, my spouse and I have been on Mounjaro since Jan 2025 guided by private health insurance.
I have suffered almost the entire gamut of side effects from the beginning until I tried split dosing twice a week, and even then there’s still the occasional instance of me learning that I should not have eaten that and the following 9 hours are going to revolve around stomach pain.
My partner’s journey on the other hand has been smooth sailing the entire time.
YMMV, do your own research but definitely double check any search results with your doctor first… lots of urban myths going around.
I do recommend it though, I am the healthiest I’ve been in literally 10 years.
I was in that boat too, but with NAFLD and now liver fibrosis, despite not eating all that much sugar and having a BMI that is high but partially due to muscle, I finally gave in to see if semaglutide will help.
Only on week 3 but it's been a rollercoaster. It seems to have quite a broad spectrum of effects. I'm still not sure I'll be able to stay on it but losing 10 pounds is a nice counterpoint to the side-effects.
The fact is, though, that but for taking the drugs, a lot of the folks who take these things would be long dead before, say, the GLP-1-induced cancer kicked in.
> I'm waiting for the other shoe to drop when it comes to GLP-1s
We know there are downsides. They’re just irrelevant compared to being obese. (Or alcoholic. Or, potentially, overweight.)
It might turn out to be like a vitamin, where there literally aren’t any downsides. I’m sceptical of that. But to the degree there is mass cognitive bias with respect to GLP-1s, it’s against them. (I suspect these are sour grapes due to the drugs being unreachable for many.)
My frank concern is we’re separating into a social media addicted, unvaccinated and obese population on one hand and a wealthy, insured, disease free and fit one on the other. Those are dangerous class and physical divides to risk becoming heritable (socially, not genetically).
GLP-1s should make you less concerned in that case; they’re poised to become extremely affordable very soon. Ending the obesity epidemic will do more to bridge the class divide than anything I can practically imagine. Not to mention the other compulsions these drugs help moderate: alcohol, tobacco, gambling, etc. It’s my best hope for worldwide quality-of-life improvement in the next 10 years.
My opinion has shifted over the years. At first I also thought it was largely just sour grapes re: accessibility and fear of the unknown, but now I’m thinking that a large number of people are going to be so far deep into anti-GLP opinions and hot takes they can’t backtrack out of it. Much like political or social beliefs you make into your identity. Too embarrassing to admit you might be wrong.
I know you’re alluding to the same thing, it’s just interesting to me someone else in the world seems to share these thoughts. I also think it may really delineate a multi-generational class divide that is hard to break.
Or all the folks on GLP-1s will develop some rare form of cancer and die early leaving the world to the so-called haters.
> Maybe I'm just an aging cynic, but I'm waiting for the other shoe to drop when it comes to GLP-1s. There have been so many claims of positive benefits that it almost seems too good to be true.
Well, read up on the testimony of those who stopped taking it due to adverse effects, such as nonexistent intestinal transit and (yuck) sulfur burps.
This isn't true, the heart and kidney benefits appear independent of weight loss. I would encourage you to let the physicians speak to these effects instead of making educated conjecture; it is tough to keep ahead of all of the claims about these medications with my patients.
GLP-1s are just showing what people always knew to be true but was not clinically actionable — most of our health problems come from eating too much and being fat.
Well, now it's actionable. No magic, just adherence.
Yeah I stopped because I didn't like the way it made me feel. I needed it because my blood sugar was way too high and it helped me drop close to 60 pounds in 6-8 months, but I did not like how it made me feel and I lost more muscle than I was happy with.
I've gained about 15-20 pounds back, but I'm now much healthier overall.
I like how my brain works and I didn't like something affecting or changing that because I couldn't put the fork down. Easy decision for me
Maybe? I don't think so though. I may have written it incorrectly, because my personality itself did not change, but it was a massive change to find myself eating and drinking remarkably smaller amounts of stuff, and I knew it was because of the drug slowing down my digestion. I'd rather have my agency to make mistakes and maybe eat half a pizza when I want to, instead of finishing a slice and feeling queasy at the thought of a second one.
I agree. A better response is, "maybe GLP-1 drugs are really great, or maybe the drug companies, which spend most of their time and money trying to manipulate opinion (i.e., by bribing researchers and clinicians, which is not illegal), are at it again."
Right. This is what we heard about the COVID therapies. And we all know how that turned out: little more effective than placebo for healthy non-comorbid people.
Same. I think the pharmaceutical industry is a lot more bleak now than it was when Fen-Phen became popular. GLP-1 usage is largely off-label as far as I know, but I wouldn't trust them even if it wasn't. There is a mountain of precedent for these companies choosing profit over health, and for our government(s) aiding them in covering up evidence of negative effects on the latter for the sake of the former.
The popularity of these drugs is specifically from the FDA-approved "weight loss" indication. You're at least a few years behind. I would also think the many, many years when they were only prescribed for diabetes would have yielded some data about negative effects (other than the ocular issue), if there were any. GLP-1s were so unprofitable that Novo Nordisk let their Canadian patent lapse almost a decade ago rather than pay the upkeep fee, lol. So I don't think anyone is protecting them from bad press.
I’m more interested in how people determine who they trust, and the parameters by which humans decide to trust someone.
I would wager that people are shit at determining trustworthiness based on limited information (like social media representations). In the old days before social media, you got to know people in person, and decades ago, most of the people you knew were likely people you grew up around. You knew that person’s background, how they treated people, what their family was like, and what likely influences them as a person.
So much of how we process trustworthiness is how we perceive the motives of the speaker. With shallower friendships and parasocial relationships, we want to feel connected but really lack the good context you need to actually know who you’re listening to.
A person's trustworthiness has always been based more on perception though, even if you were familiar with more of their history - that's how you end up with members of a community who are perfectly kind people but are ostracized because they're perceived as strange and untrustworthy in some way; it's also how you end up with members of a community who have demonstrated a lack of trustworthiness continuing to be trusted, because they can appear trustworthy and persuade others to trust them despite the prior evidence.
Parasocial relationships are more analogous to the old priest-or-shaman-to-tribesman relationship: a person more or less removed from the direct social experience who has their own hidden motives and meanings, as well as a strong incentive to maintain their position in the group dynamic as someone to look up to.
Funny how humans evolved to have such a predilection for finding these few charismatic people to uplift and throw their whole lot behind, any other logic be damned. While also having a small subset of people preferring to take the reins themselves and be the charismatic leader, for good or ill intent. And it has been that way in our species since long before recorded history.
Almost like queen bee to worker bee dynamics in terms of population structure but perhaps less rigid. The mutation rate of charismatic leaders vs followers happens to be “just right” by some mechanism. Too many of either case and group dynamics fall apart.
If we think of the whole population as a meta organism up a few levels of abstraction from the genetic level, but still bound by its laws generally, some mechanism must have evolved to carefully regulate dosage of these varying neurotypes in the population much like how genes evolved downstream, upstream, or midstream dosage control mechanisms to modulate protein levels in the cell for biological function. Perhaps social structure is self reinforcing through incentives and entropy.
GitButler created huge problems for me twice: it automatically added files to commits, including one that contained secrets. Okay, I know I should have added them to .gitignore, but why didn't it prompt me before adding the files? There were even logs and cache files, among other things.
For me it's... okay, it's my daily driver already, but I really really want extensions to be able to create their own UI elements in the buffer, like VSCode does. Basically GPUI for extensions.
This would unblock people to write their own Jupyter integration, for example, or whatever else they want. There's loads of cool stuff like Argus https://github.com/cognitive-engineering-lab/argus that relies on creating buffers with custom UI, and Flowistry https://github.com/willcrichton/flowistry that relies on graying out some code, and I want this stuff in Zed too.
Has the team commented on this? Coming from Emacs, it seems insane to not implement an API to the UI. GPUI looks great too, it’d be a real shame if they opted to keep the extensibility limited to just LSP servers and whatnot.
A git worktree directory contains a .git file that just references the original directory’s .git, and Zed doesn’t support this configuration. So there just isn’t any representation of change tracking when working in a worktree directory.
Gotcha. Is there a request in for this? The team seems incredibly productive (I'm sometimes offered multiple updates per day), and my completely uninformed and naive take is that this probably wouldn't be too big of a lift, relative to the stuff I'm seeing them ship regularly.
Two things. First, some economists study stated versus revealed preferences. [1] The idea is to figure out what people do rather than what they say they will do.
Second, in the case of people making feature requests, it could be a net-societal-gain [2] if feature requesters made some kind of binding commitment. (See also the hold-up problem [3].) Perhaps a potential customer would commit to "if/when feature X gets added, I will commit to using the product for 2 hours." or "... I will spend $10 on the associated cloud services." (The question of what happens if the customer reneges also has to be agreed upon up front.)
Okay, so what kind of solution are you looking for here? VS Code uses a closed-source LSP server for its C# extension. Rider is its own custom stuff, of course.
So...where does that leave the Zed team? If existing LSPs aren't good enough, that's not a Zed problem: they're building an editor, not LSPs for your favorite language.
Interesting. For Claude Code, this seems to have generous overlap with existing practice of having markdown "guides" listed for access in the CLAUDE.md. Maybe skills can simply make managing such guides more organized and declarative.
It's interesting (to me) visualizing all of these techniques as efforts to replicate A* pathfinding through the model's vector space "maze" to find the desired outcome. The potential to "one shot" any request is plausible with the right context.
> The potential to "one shot" any request is plausible with the right context.
You too can win a jackpot by spinning the wheel just like these other anecdotal winners. Pay no attention to your dwindling credits every time you do though.
On the other hand, our industry has always chased the "one baby in one month out of 9 mothers" paradigm. While you couldn't do that with humans, it's likely you'll soon (tm) be able to do it with agents.
2025 feels like a cardinal year for top-down decisions we all just have to endure for the present. The best we can do is bitch loudly and often, and hope the people at the top still feel threatened by consumers/constituents.
This Liquid Glass decision is particularly challenging for my tiny startup. We have multiple platforms including iOS and Android. I was hoping to share much of our design language across iOS and Android, but now Apple has essentially decided that this Liquid Glass will be mandatory after a year of support for "compatibility mode" that disables it for your app.
We'll now have to spend expensive engineering time to cater to Apple's design whims rather than actually working on PMF and profitability.
And what about online tutorials, marketing, user manuals, customer support? You probably want your app to look consistent with that too, right? Do you really expect or even want to sift through multiple different versions of tutorials and guides?
As long as an app is easy to use, people prefer a single look. No one cares about "looking like the OS", except maybe 0.1% of users.
> As long as an app is easy to use, people prefer a single look
No, people are used to a UI language, which in the case of iOS is quite consistent across applications. You expect certain things to work (e.g. flicking in from the left edge means "go back"). There are platform-specific patterns, and I'd rather have the app behave accordingly than be consistent with the other OS's version. The real 0.1% here are probably the users of your app with active devices on both Android and iOS!
This just isn't supported by data. Break users into two categories: users who develop a universal mental model of software, and users who develop application-specific mental models. The latter group is the overwhelming majority. People don't learn iOS, they learn Spotify.
Designing with this in mind annoys the hell out of people in the former group, no doubt. Those people likely love customizable software so they can make it the same everywhere. It's super common in Linux setups.
Most people only have one mobile device, a smartphone. Those that have more than one usually still have those with matched OSes. No-one cares about "looking like the OS", but people do care about looking (and, more importantly, behaving) like all the other apps they use.
And as far as manuals and customer support, what you're saying is that you can't afford to do cross-platform properly, and so you're cutting corners. Which is fine if it's stated explicitly upfront, and having an app that behaves weirdly (for a given platform) is better than no app, but please don't insult your users' intelligence by presenting that as some kind of feature.
> And as far as manuals and customer support, what you're saying is that you can't afford to do cross-platform properly
No. What I'm saying is, when people search "Blender lighting tutorial" or "Capcut editing tutorial" on YouTube, they want to watch the most popular tutorial and they want it to behave exactly like their phone. If there are any differences whatsoever, like an OS-specific swipe-back gesture, they're going to leave negative reviews saying the app is not working.
You want a good unified app experience, with as minimal deviation as possible only where necessary.
If the app is Blender, or some other extremely complex software then sure. If it’s the bank app or social media, it would take me no time to understand the difference between the iOS and Android native UI.
You were wrong to even attempt to share design language across platforms. You should make your applications good native citizens if you have any respect for your users, because yours isn’t the only software they use.
That’s a pretty bold statement. Look at the most popular apps, and you will see across Android and iOS that the designs across platforms are more similar than they are different. We only have 2 engineers right now, but we still maintain clean native implementations for navigation, interactions, and areas where native UI excels. Neither our Android nor our iOS app appears as if we just copy-pasted from one platform to the other. Both Android and iOS had been leaning into flat design for years, so it was easy to adapt the same design language for our brand across both. Not so with this return of skeuomorphic design.
I think this idealism reveals a naive viewpoint about what users really care about. They care that apps work: that they do what they're supposed to and do it fast or efficiently. Not even Microsoft makes native apps for its own platform (for example, Teams and the new Outlook), and they service millions of users. Indeed, if you look at Microsoft's UI over the years, they are inconsistent as hell (all of the Office apps throughout the years are a good example), but so long as performance, functionality and usability haven't suffered too much, users are OK with non-native apps that do not appear native. Another example is iTunes on Windows; it looks nothing like a native Windows app.
There's also the fact that having control over your own app's UI/design language is better over the long term. What if Apple decides to ditch this Liquid Glass for something else years in the future? They ditched their own design language in iOS 7, and now with iOS 26 they've done it again.
And the basis for UI redesigns as wide-ranging as this is almost entirely nonsensical. Does Liquid Glass suddenly improve usability by some meaningful percentage? Nope; I guarantee Apple does NOT interrogate or benchmark its UI designs the way NN Group does. Usability is actually hurt by the fact that users need to re-learn basic interactions, and existing ones are now slower. Is overall performance improved over the previous version? Absolutely not: performance metrics such as battery life and UI responsiveness have regressed with the overuse of visual effects like translucency and minute pixel manipulations. Why bother following changes to a design language when they are not based on real reasoning backed up by actual data or solid logic, and they end up regressing performance to an even worse state? Why should any app vendor be obligated to follow what are ultimately arbitrary and whimsical changes?
Redesigns such as this result in literally more work for the sake of it, zero net improvements and whole lot of wasted effort, all for what? Just to look different for a while, until the next redesign?
On desktop that ship has sailed. Maybe 2 of my regularly used apps have a native design.
UIs have converged enough that the experience is acceptable, I guess. And as a developer, why in the world would I want to write my app for a locked-in ecosystem with a now-shitty design system?
It hasn't fully done so on the desktop when you consider macOS. E.g. if you ship a macOS app which has the main menu bar inside the window (or missing entirely) instead of using the system menu bar, it will look very jarring and users will rightly complain.
If the only way I interact with a service is a single app then I want that app to blend into my phone. I don't care if the Uber app on Android and iOS are the same, I only see one of them. If I have to use a service on many different platforms, I sometimes prefer having a consistent design language, e.g. I like that Slack has a consistent sidebar interface everywhere. I want to go from the browser to tablet to phone and not have anything in a different spot.
The trend over the past decade has been towards multiplatform frameworks, mostly with React Native, but more recently Flutter, KMP, and even Swift multiplatform.
And here's the thing: The Apple users who actually care about this are in the minority. You just get an outsized sampling of them on HN because they tend to be techies as well.
Our large commercial apps were certified to pass the WCAG accessibility requirements, which we need to comply with for legal reasons. If we didn’t enable the compatibility flag and opt out of the glass altogether, this would have meant massive breakages and regressions, which would require designer and developer time to fix, plus the financial burden of having to go through the certification process again. And all because of Apple’s whims, with zero benefits for our customers or our developers. Why would we blindly choose to follow Apple’s missteps instead of having our own design system and standards?
> You should make your applications good native citizens
It's time to retire this dead meme. The most successful SAASes in the world are just websites that people pay for hand-over-fist regardless of what OS they use. Netflix doesn't use Liquid Glass, Spotify doesn't bother. Google Docs isn't going to inherit it and probably neither will Office 365. Websites online by-and-large won't adopt this design either.
The ideal of everyone taking the time to make a sexy native UI is appealing. But it's never going to fully be realized, especially when OEMs resist basic A11Y obligations and insist on battery-draining eye candy.
If you're starting from the perspective of a native app developer, you're absolutely correct. However, most startups are going to be websites/Electron/CEF apps. It's much easier and cheaper to write-once-ship-everywhere with an ugly React UI than it is to jump through the hoops of writing special-snowflake versions for every OS under the sun.
It's basically negligent to insist on native apps, if profitability is your goal. I love native interfaces too, but the staunch belief in businesses being a "good native citizen" is a dead meme. It's cart-before-horse logic, we don't ever see anyone commit to the idea and reap real rewards. Native platforms punish you for playing by the rules.
It depends who your application is for. You obviously think building an application is about maximizing your profit, and your users are just a means to achieve that. If you were approaching your application from a “what’s best for my users” angle you might make different choices.
If you are running a business with limited funding (which is most businesses), then your primary need is to seek profit in a world where profit is often never achieved at all. Otherwise, your business ceases to exist, along with your app. Sometimes that does mean emphasis on strong design, which I’d argue means delivering a great experience to your users rather than a native or non-native design choice. Other times, you’re serving a demographic that doesn’t care so much about that, and your focus is on functionality above all.
As a developer, I don't care what Apple or Google's "design language" whims are today. If someone can't figure out how to use a well designed app, no matter the "design language", a fancy skin isn't going to fix that.
I don't think that's true. First line of respect is good UI/UX, second line of respect is being fast/not being slow, third might be being coherent with the rest of the apps on that platform.
Almost nobody uses both an iPhone and an Android phone day to day. It doesn’t matter if your iOS and Android apps don’t share the same design language; no one is going to see both of them.
And most apps that eschew the base design system while using the same kinds of GUI elements it provides are usually terrible. Only when the app does something way outside the lane of what the OS provides does it usually work well.
> but now Apple has essentially decided that this Liquid Glass will be mandatory after a year of support for "compatibility mode" that disables it for your app.
What exactly does this mean? Are there references in Apple's design guidelines that explain this in more detail? (Or wherever this would be documented.)
Not really, the point versions have betas as well. I'm on 26.1 beta 2 on iOS.
Normally you should not expect any change in design at that stage, I guess, but I'm still seeing aesthetic differences; for example, the shine around icons is reduced.
I run it from within a dev container. I never had issues with yolo mode before, but if it somehow decided to use the gcloud command (for instance) and affected the production stack, it’s my ass on the line.
This is a good takeaway. I use Claude Code as my main approach for making changes to a codebase, and I’ve been doing so every day for months. I have a solid system, arrived at through trial and error, and overall it’s been a massive boon to my productivity and willingness to attempt larger experiments.
One thing I love doing is developing a strong underlying data structure, schema, and internal API, then essentially having CC often one-shot a great UI for internal tools.
Being able to think at a higher level beyond grunt work and framework nuances is a game-changer for my career of 16 years.
This is more of a reflection of how our profession has not meaningfully advanced. OP talks about boilerplate. You talk about grunt work. We now have AI to do these things for us. But why do such things need to exist in the first place? Why hasn't there been a minimal-boilerplate language and framework and programming environment? Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?
This is the glaring fallacy! We are turning to unreliable stochastic agents to churn out boilerplate and do toil that should just be abstracted or automated away by fully deterministic, reliably correct programs. This is, prima facie, a degenerative and wasteful way to develop software.
Saying boilerplate shouldn’t exist is like saying we shouldn’t need nails or screws if we just designed furniture to be cut perfectly as one piece from the tree. The response is “I mean, sure, that’d be great, not sure how you’ll actually accomplish that though”.
Great analogy. We've attempted to produce these systems and every time what emerges is software which makes easy things easy and hard things impossible.
The reason Japanese carpenters do or did that is that sea air + high humidity would absolutely rot anything held together with nails and screws.
No furniture is really designed from a single tree, though. They aren't massive enough.
I agree with the overall sentiment. But the analogy is highly flawed. You can't compare physical things with software. Physical things are way more constrained, while software is super abstract.
I can and will compare them, analogies don’t need to be perfect so long as they get a point across. That’s why they’re analogies, not direct perfect comparisons.
I very much enjoy the Japanese carpentry styles that exist though, off topic but very cool.
I can tell you about 1000 ways, the problem is there are no corporate monetary incentives to follow them, and not much late-90s-era FOSS ethos going around either...
This is a terribly confused analogy, afaict. But maybe if you could explain in what sense boilerplate, as defined in https://en.wikipedia.org/wiki/Boilerplate_text, is anything like a nail, it could be less confusing.
Saying boilerplate should exist is like saying every nail should have its own hammer.
Some amount of boilerplate probably needs to exist, but in general it would be better off minimized. For a decade or so there's sadly been a trend of deliberately increasing it.
While it sounds likely true for the US, it's the opposite in Germany:
likely due to societal expectations on "creature comforts" and German homes not being framed with 2x4's but instead getting guild-approved craftsmen to construct a roof for a brick building (with often precast concrete slabs forming the intermediate floors; they're segmented along the non-bridging direction to be less customized).
We’re limited by the limits of our invention though. We can’t set the parameters and features to whatever we want, or we’d set them to “infinitely powerful” and “infinitely simple” - it doesn’t work like that however.
Well, depending on the value proposition, or the required goals, that’s not necessarily true. There are pros and cons to different approaches, and pretending there aren’t downsides to such a switch is problematic.
Yes, and it's why AI fills me with impending doom: handing over the reins to an AI that can deal with the bullshit for us means we will get stuck in a Groundhog Day scenario of waking up with the same shitty architecture for the foreseeable future. Automation is the opposite of plasticity.
Maybe if you fully hand over the reins and go watch YouTube all day.
LLMs allow us to do large but cheap experiments that we would never attempt otherwise. That includes new architectures. Automation in the traditional sense is the opposite of plasticity (because it's optimizing and crystallizing around a very specific process), but what we're doing with LLMs isn't that. Every new request can be different. Experiments are more possible, not less. We don't have to tear down years of scaffolding like with old automated systems. We just nudge it in a new direction.
I don’t think that will happen. It’s more like a 3d printer where you can feed in a new architecture and new design every day and it will create it. More flexibility instead of less.
Groundhog Day is optimistic, I think. It will be like "The Butterfly Effect": every attempt to fix the systems using the same dumb, rote solutions will make the next iteration of the architecture worse and more shitty.
When humans are in the loop everything pretty much becomes stochastic as well. What matters more is the error rate and result correctness. I think this shifts the focus towards test cases, measurement, and outcome.
A few days ago I lost some data including recent code changes. Today I'm trying to recreate the same code changes - i.e. work I've just recently worked through - and for the life of me I can't get it to work the same way again. Even though "just" that is what I set out to do in the first place - no improvements, just to do the same thing over again.
Everything we do is a stochastic process. If you throw a dart 100 times at a target, it's not going to land at the same spot every time. There is a great deal of uncertainty and non-deterministic behavior in our everyday actions.
> throw a dart ... great deal of uncertainty and non-deterministic behavior in our everyday actions.
Throwing a dart could not be further away from programming a computer. It's one of the most deterministic things we can do. If I write if(n>0) then the computer will execute my intent with 100% accuracy. It won't compare n to 0.005.
You see arguments like yours a lot. It seems to be a way of saying "let's lower the bar for AI". But suppose I have a laser guided rifle that I rely on for my food and someone comes along with a bow and arrow and says "give it a chance, after all lots of things we do are inaccurate, like throwing darts for example". What would you answer?
As much as it’s true that there’s stochasticity involved in just about everything that we do, I’m not sure that that’s equivalent to everything we do being a stochastic process. With your dart example, a very significant amount of the stochasticity involved in the determination of where the dart lands is external to the human thrower. An expert human thrower could easily make it appear deterministic.
If we are talking in terms of IRL/physics, there is no such thing as a deterministic system outside of theory; everything is stochastic to differing degrees, including your brain that came up with these thoughts.
I think that both of you are right to some extent.
It’s undeniable that humans exhibit stochastic traits, but we’re obviously not stochastic processes in the same sense as LLMs and the like. We have agency, error-correction, and learning mechanisms that make us far more reliable.
In practice, humans (especially experts) have an apparent determinism despite all of the randomness involved (both internally and externally) in many of our actions.
stochastic vs deterministic is arguably a property of modelling, not reality.
Something so complex that we cannot model it as deterministic is hence stochastic. We can just as easily model a stochastic thing as deterministic by ignoring the stochastic parts.
separating subjective appearance of things from how we can conceptualise them as models begs a deeper philosophical question of how you can talk about the nature of things you cannot perceive.
Not interested in joining a pile-on, but I just wanted to point out how difficult reproducible builds are. I think there's still a bit of unpredictability in there, unless we go to extraordinary lengths (see also: software proofs).
This is very true, at least for the most basic approaches to using stochastic agents for this purpose, especially with generalized agents and approaches.
It is possible to get much higher quality not just with oversight, but by constraining the stochastic agents so that they have no choice but to converge reliably towards the desired vector of work.
Human-in-the-loop AI is fine. I'm not sure that everything needs to be automated; it's entirely possible to get further and get more reps in on a problem with the tool, as long as the human is the driver, using the stochastic agent as a thinking partner and not the other way around.
How big a dent do you think we could make if we poured $252 billion[0] just into paying down all our towers of tech debt and developing clean abstractions for all these known problems?
nothing prevents stochastic agents from producing reliable, deterministic and correct programs. it's literally what the agents are designed for. it's much less wasteful than me doing the same work, and much, much less wasteful than trying to find a framework for all frameworks.
Reduced mental load. When it’s proven that a set of inputs will always result in the same output, you don’t have to verify the output. And you can just chain processes together without having to worry about time wasted because of deviations.
Good point. Non-determinism is not fundamentally problematic on many levels. What is important is that the essential behavioral invariants of the systems are maintained.
My take: money. Years ago, when I was cutting my teeth in software, efficiency was a real concern. Not just efficiency for limited CPU, memory, and storage, but also how you could maximize the output of a smaller head count of developers. There was a lot of debate over which methodologies, languages, etc, gave the biggest bang for buck.
And then… that just kind of dropped out of the discussion. Throw things at the wall as fast as possible and see what sticks, deal with the consequences later. And to be fair, there were studies showing that choice of language didn’t actually make as big of a difference as the emotions behind the debates suggested. And then the web… designed by committee over years and years, with never the ability to start over. And lots of money meant that we needed lots of manager roles too. And managers elevate their status by having more people. And more people means more opportunity for specializations. It all becomes an unabated positive feedback loop.
I love that it’s meant my salary has steadily climbed over the years, but I’ve actually secretly thought it would be nice if there was a bit of a collapse in the field, just so we could get back to solid basics again. But… not if I have to take a big pay cut. :)
Many of the languages that allow people to quickly develop software end up with their own tradeoffs. Some of them have unusual syntax, at least in part of the language. Many of them allow duck typing, which many consider a major detriment to production reliability. Some of them are only interpreted. Some of them have a syntax people just don’t like. Some of them are just really big languages with lots of features, because getting rid of the boilerplate often means more builtins or a bigger standard library. For some of them, either the runtime or the build time leaves a lot to be desired.
Here’s an incomplete list for those traits. For unusual syntax, there are many of the FP languages, Ada, APL, Delphi/Object Pascal, JS, and Perl. For duck typing, there’s Ruby, Python, PHP, JS, and Perl. For only interpreted, there are Ruby, PHP, and Perl (and formerly for some time Python and JS). For syntax that’s not necessarily odd (but may be) but that lots of people find distasteful, there’s Perl, any form of Lisp, APL, Haskell, the ML family, Fortran, JS, and in some camps Python, PHP, Ruby, Go, or anything from the Pascal family. For big languages with lots of interacting parts, there’s Perl, Ada, PHP, Lisp with CLOS, and Julia. For slowdowns, there’s Julia, Python, PHP, and Ruby. The runtime for Perl is actually pretty fast once it’s up and running, but having to build the app on every invocation before running it makes for a slow start time.
All that said, certain orgs do impressive projects pretty quickly with some of these languages. Some do impressively quick work with even less popular languages like Pike, Ponie, Elixir, Vala, AppScript, Forth, IPL, Factor, Raku, or Haxe. Notice some of those are very targeted, which is another reason boilerplate is minimal. It’s built into the language or environment. That makes development fast, but general reuse of the code pretty low.
We have been emphasizing the creation of abstractions since forever.
We now have several different hardware platforms, programming languages, OS's, a gazillion web frameworks, tons of databases, build tools, clustering frameworks and on and on and on.
We haven't done so entirely collectively, but I don't think the amount of choice here reflects that we are stupid; rather, it reflects that "one size doesn't fit all". Think about the endless debates and flame wars about the "best" of those abstractions.
I'm sure that Skynet will end that discussion and come up with the one true and only abstraction needed ;)
I feel this some days, but honestly I’m not sure it’s the whole answer. Every piece of code has some purpose or expresses a decision point in a design, and when you “abstract” away those decisions, they don’t usually go away — often they’re just hidden in a library or base class, or become a matter of convention.
Python’s subprocess, for example, has a lot of args, and that reflects the reality that creating processes is finicky and there are a lot of subtly different ways to do it. Getting an LLM to understand your use case and create a subprocess call for you is much more realistic than imagining some future version of subprocess where the options are just magically gone and it knows what to do, or where we’ve standardized on only one way to do it and one thing that happens with the pipes and one thing for the return code and all the rest of it.
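To make that concrete, here’s a rough sketch of the kind of call I mean; the particular flags are just one plausible combination, nothing canonical:

    import subprocess

    # Run a command, capture its output as text, fail loudly on a non-zero
    # exit code, and give up if it hangs for more than 10 seconds.
    result = subprocess.run(
        ["git", "status", "--short"],
        capture_output=True,  # wire stdout/stderr to pipes
        text=True,            # decode bytes to str
        check=True,           # raise CalledProcessError on failure
        timeout=10,           # raise TimeoutExpired if it stalls
    )
    print(result.stdout)

Every one of those arguments encodes a real decision about pipes, encoding, error handling, and timeouts; an LLM can pick a sensible combination for your case, but the options themselves aren’t going away.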
I actually prefer the world with boilerplate connecting the more important pieces of code together over opinionated frameworks, because the boilerplate can evolve; changing the opinionated frameworks is much harder, and it's probably done by full rewrite. The thing is, the boilerplate needs to be kept to a minimum - that's what I consider good API design. It allows you to do custom things, so you need some glue code, but not so much that you are writing a new framework each time you use it.
> Why hasn't there been a minimal-boilerplate language and framework and programming environment?
Haskell mostly solves boilerplate in a typed way and Lisp mostly solves it in an untyped way (I know, I know, roughly speaking).
To put it bluntly, there's an intellectual difficulty barrier associated with understanding problems well enough to systematize away boilerplate and use these languages effectively.
The difficulty gap between writing a ton of boilerplate in Java and completely eliminating that boilerplate in Haskell is roughly analogous to the difficulty gap between bolting on the wheels at a car factory and programming a robot to bolt on the wheels for you. (The GHC compiler devs might be the robot manufacturers in this analogy.) The latter is obviously harder, and despite the labor savings, sometimes the economics of hiring a guy to sit there bolting on wheels still works out.
It's very minimal-boilerplate. It's done an exceptional job of eliminating procedural, tedious work, and it's done it in a way that doesn't even require macros! "Template Haskell" is Haskell's macro system and it's rarely used anymore.
These days, people mostly use things like GHC.Generics (generic programming for stuff like serialization that typically ends up being free performance-wise), newtypes and DerivingVia, the powerful and very generalized type system, and so on.
If you've ever run into a problem and thought "this seems tedious and repetitive", the probability that you could straightforwardly fix that is probably higher in Haskell than in any other language except maybe a Lisp.
I find of all languages, Haskell often allows me to get by with the least boilerplate. Packages like lenses/optics (and yes, scrap your boilerplate/Generics) help. Funny package, though!
>Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?
Lisp completely eliminates boilerplate and has been around for decades, but hardly anyone uses it because programs that use macros to eliminate boilerplate aren't easy to read.
It used to be. When I learned to program for Windows, I basically had to learn Delphi or Visual Basic at the time, maybe some database like Paradox. But I was reading a website that lists the skills needed to write a backend and it was like 30 different things to learn.
That's exactly what I had in mind when I wrote the original comment. I learned Visual Basic as a kid faffing around on a computer and it took so little boilerplate to make an app. It's been a regression since then.
We have the component architecture pattern to reduce the amount of HTML we have to write. If you’re duplicating HTML elements on every page, that’s mostly on you. There’s a reason every template language has an include statement. That’s a problem that’s been solved for ages.
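For instance, a minimal jinja2 sketch (template names made up for illustration), where the shared header lives in one place and every page pulls it in via an include:

    from jinja2 import Environment, DictLoader

    # One shared header template, reused by the page via {% include %}
    # instead of duplicating the markup on every page.
    env = Environment(loader=DictLoader({
        "header.html": "<header><h1>{{ title }}</h1></header>",
        "page.html": "{% include 'header.html' %}\n<p>{{ body }}</p>",
    }))

    print(env.get_template("page.html").render(title="Docs", body="Hello"))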
There are a million different environments. This includes OS, languages, frameworks, and setups within those frameworks: Spring, Java or Kotlin, REST or gRPC, MySQL or Postgres, OkHttp or Ktor, etc. etc.
There is no software you could possibly write that works for everything that'd be as good as "give me an internal dashboard with these features".
> Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?
I think it has. How much easier is it today than yester-decade to write and deploy an application to multiple platforms (and have it look/run similarly)?
I think this is one way of looking at what your parent was describing.
They weren’t just saying ‘AI writes the boilerplate for me.’ They were saying: once you’ve written the same glue the 3rd, 4th, 5th time, you can start folding that pattern into your own custom dev tooling.
AI not as a boilerplate writer but as an assistant to build out a personal scaffolding toolset quickly and organically. Or maybe you think that should be more systematized and less personal?
> Why haven't we collectively emphasized the creation of new tools to reduce boilerplate and grunt work?
You don't understand how things evolve.
There have been plenty of platforms that got rid of boilerplate - e.g. Ruby on Rails about 20 years ago.
But once they become the mainstream, people can get a competitive edge by re-adding loads of complexity and boilerplate again, e.g. complex front-end frameworks like React.
If you want your startup to look good you've got to use the latest trendy front end thingummy
Also, to be fair, it's not just fashion. Features that would have been advanced 20 years ago become taken for granted as time goes on, hence we are always working at the current limit of complexity (and that's why we're always overrun with bugs and always coming up with new platforms to solve all the problems and get rid of all the boilerplate so that we can invent new boilerplate).
Because of the obsession with backwards compatibility and not breaking code. The web development industry is the prime example. HTML, JavaScript, CSS, a backend/frontend architecture - an absolutely terrible stack.
I don't even know why things like templating and inclusion are not just part of the core web stack (ideally declaratively with no JS). There should be no need for an external tool or build process or third-party framework.
HTML is a rendered document. It’s OK to write it by hand if you only need one document, but it’s better to use an actual template language or some generator if you’re going to have the same layout and components across many pages.
You’re asking to shift this job from the editor (you) to the viewer (the browser).
Maybe it was a "viewer" in the 90s. The viewer is not a viewer - it is a full-fledged application runtime that has a developer environment and a media stack, along with several miscellaneous runtimes. A standard template language and document inclusion feature is very small peanuts compared to that. A teeny house compared to the galaxy already built in - with several planets' worth of features being added yearly.
You both make good points, and I come down on the side of adding some template mechanism to web standards. Of course, that all starts with an RFC and a reference implementation. Any volunteers?
Would raise my hand to volunteer for the reference implementation. I guess it would need to be in C++/Rust? An RFC, however, involves way too much talking and also needs solid networking amongst the web crowd. Not qualified for that. For a template language, it would be better to copy a subset from an existing de-facto standard like jinja2, which already has a lean, performant subset implementation at https://github.com/Keats/tera.
A document/template inclusion model should be OK now in the modern era thanks to HTTP/3. Not really sure how that should ideally look, though.
Because the set of problems we try to make solvable with code is huge and the world is messy. Many of these things really are at a very high level of abstraction, and the boilerplate feels boilerplatey but is actually slightly different in a way that's not automatable. Or it is, but then the configuration for that automation becomes the new bit you look at and see as grunt work.
I adopted a couple practices (using dev containers and worktrees) just to make life a little easier. I also built my own shell script “framework” to help manage the worktrees and create project files. However, that took me just a couple days to do on my own (also using CC), and it didn’t lock me into a specific tool.
I do agree that context poisoning is a real thing to watch out for. Coincidentally, I’d noticed MCP endpoint definitions had started taking a substantial block of context for me (~20k tokens), and that’s now something I consider when adopting any MCP.