Maybe it's time we make a simple web page 100KB again?
Is there some kind of CDN minification, adblocking and compression service?
Maybe even server side rendering of websites?
Then a smartphone would work fine with 1GB of RAM and everyone could be happy.
I've come to the opinion that the vast majority of apps I've built could have been built with just HTML + CSS, rendered server side. I can sprinkle in little bits of interactivity using something like HTMX, and I'll have a website that is very easy to optimise, has phenomenal backwards compatibility, and gets rid of a whole class of issues associated with SPAs.
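For illustration, a minimal sketch of what I mean, using plain Node and an htmx script tag (the /like endpoint and the htmx version are placeholders, not anything from a real app):

```typescript
// Server-rendered page with one htmx-powered button. Plain Node, no framework.
import { createServer } from "node:http";

const page = `<!doctype html>
<html>
  <head>
    <script src="https://unpkg.com/htmx.org@1.9.12"></script>
  </head>
  <body>
    <h1>Server-rendered page</h1>
    <!-- The only interactivity: htmx swaps this button with the server's HTML response. -->
    <button hx-post="/like" hx-swap="outerHTML">Like</button>
  </body>
</html>`;

createServer((req, res) => {
  if (req.method === "POST" && req.url === "/like") {
    // Return an HTML fragment, not JSON; htmx swaps it straight into the page.
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<span>Liked!</span>");
    return;
  }
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(page);
}).listen(3000);
```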
I often regret in my career not pushing back more on "requirements" that ended up requiring a more complicated app, whereas the customer would have been happier with a simpler solution.
I guess you're right, but it's more of a curve. Once you get to any decent level of complexity, it actually helps to have a framework instead of just going all HTML + CSS. It also helps to use something standard like React (which every web developer should fundamentally understand) rather than rolling your own if other people will be working on it in the future.
There's a lot to say about the side effects of frameworks, but there's a reason everything converges towards them.
I think it's the other way around: a framework will get you up and running quickly, but then it becomes technical debt, and if your app is complicated you will end up fighting the framework.
If you write something from scratch it will take a while to reach the abstraction level where you can work fast. But then you have an abstraction layer that is not one-size-fits-all but tailored to your needs.
Good luck with hiring, onboarding, and maintenance of your bespoke solution. Also with your resume when seeking your next gig. For any serious project, ignore community and ecosystem health at your peril. To be clear, we're talking about framework selection, not leftpad vs DIY.
I agree, most websites allow complex flows.
But I suspect that most page loads never touch those.
There is probably an automated way to deliver just a flat page, and maybe even allow for the top 5 interactions without loading all the frameworks and libraries.
Apple could make it happen. For some reason when an iPhone won't load something, people blame the thing instead of the phone. If they made Safari show an error page when a page used more than 256MB of RAM, suddenly the problem would disappear overnight.
No. Safari is already so buggy that it often crashes with the “a problem repeatedly occurred” message. That same message is shown when a webpage requests too much RAM. The problem did not disappear. People rightfully blame the iPhone.
I was a bit shocked the other day when Chrome gave me a complaint about memory shortage and said I had something like 200GB in website data on my 256GB laptop(!) All put there by various web pages without my knowledge.
Most people have phones that can handle webpages with 1-5MB JS bundles. Why artificially limit what you can do on the web? Why limit ourselves to 1GB RAM when more resources means tech becomes more useful?
Returning to simple webpages is a popular idea on HN, but it’s like wanting a car with no backup camera and crank windows. If your goal is to have your car be as simple as possible, then sure, but that’s not the case for most people.
Most people want their cars to be safe and convenient, and their webpages useful and rich, more so than they want to return to some idealized simplicity.
A simple webpage or blog with minimal styling that runs as an ARM binary on a TV remote is cool and fun but it’s not economically useful. It’s the equivalent of a manual scooter. We can build better apps (in the same way that car manufacturers can build less crappy infotainment systems) but optimizing for scarcity isn’t the answer in a world where abundance tends to grow.
(Edit: your downvotes mean nothing to me, I’ve seen what gets upvoted!)
Your mistake is assuming there is some correlation between usefulness and size.
The JS Gmail UI from 15 years ago was just as functional as the one today.
Websites that are supposed to be simple lists end up bloated and laggy because of really poor JS that makes one request per item iteratively to populate a list.
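The anti-pattern, roughly, versus the obvious fix (endpoints here are made up for illustration):

```typescript
async function loadListSlowly(ids: string[]): Promise<unknown[]> {
  const items: unknown[] = [];
  for (const id of ids) {
    // One sequential network round-trip per item, just to render a flat list.
    const res = await fetch(`/api/items/${id}`);
    items.push(await res.json());
  }
  return items;
}

async function loadListOnce(ids: string[]): Promise<unknown[]> {
  // One request for the whole list (or skip the JS entirely and have the
  // server render it as HTML).
  const res = await fetch(`/api/items?ids=${ids.join(",")}`);
  return res.json();
}
```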
I do like the old JS Gmail UI. But the current JS Gmail UI doesn’t feel any slower. It is cluttered with more features, but some of them I find useful. (Displaying my calendar and being able to accept invites right in Gmail being a big one.)
As someone who used the HTML gmail interface right up until google pulled the plug: the JS version is much slower to load. Every morning, I get to have about 10 seconds thinking about how it used to be faster.
It absolutely is slower. To an extreme degree even. It takes 10 to 20 seconds to load and is incredibly sluggish to use on some low-end machines I use regularly.
I disagree that we should be optimizing for low end machines and holding back on product improvements for the 80% mass market. Technology improves, it’s one of its best traits. We don’t need to be stuck in the past.
I think those machines are super fun and a snappy Linux experience is very satisfying. I use a lightweight WM myself on Linux and prefer it over the heavy ones. But this segment is 0.1% of technology users and we shouldn’t constrain applications to the limited hardware that this population chooses to use.
I’d argue that in many of these instances, less is far more.
I want my car to just be really good at being a car, reliably get me from A to B. A Bluetooth connection to the stereo system is nice, but I don’t need a freaking 20” phablet right next to my face when I’m driving.
When I go to a website, I’m usually looking for information, to read something. I don’t often want fancy scroll and animations, I just want clear readable text free of distractions.
More and more, both of these seem to be going away; we’re losing the plot of what the point of these things is.
In a lot of ways, I agree with you. I think the key thing is that the complexity should be appropriate to what needs to get done.
Animations and the like that distract from the actual content are superfluous. Agreed! I hate it when sites scrolljack.
But lots of HN posters want to impose the same austerity on every website, regardless of whether it’s appropriate. You can’t build Linear in 100KB of JS. Nor would you want to run it on 1 GB RAM. And that’s the case for a lot of economically useful applications.
Keeping things as simple as possible shouldn’t be the goal. It should be keeping it simple enough for the use case at hand.
You can do a lot with a little; it just requires investing more in development, which understandably most companies are uninterested in. Besides, plenty of websites are bloated as all hell. Why does a newspaper website, for example, need to be much more than plain HTML?
Newspaper websites are a good example of bloat, true. I think if you’re in the business of primarily serving text content and not doing much interactive stuff, you don’t need a heavy site. A lot of them tend to cram their websites with trackers and ads and I guess that’s a business thing.
Tbh, it’s unpopular around HN, but I felt like AMP was a great experience for users. AMP pages were super fast and had no annoying banners - and none of my pet peeve: layout shift.
I’m glad you want that! But, most people wouldn’t. Also, electric seat adjustments give you way more options than the manual adjustments could. And typically with more precision than the under seat bar with discrete positions.
I'm with you on manual seats. But crank windows? Nah man, power windows and locks are a requirement for me, as are a modern sound system, AC, and cruise control.
I suspect they correctly simulated everything.
Then on the test stand they turned the dial to 11 and saw a number higher than the simulations.
Probably not reliable at that power level, but fine for a record announcement.
What has been democratic about how the internet has evolved over the last 2 decades? Because as far as I can see, the internet has undergone a massive centralization into the hands of a few players with practically no regulation. Especially Google, which can make decisions such as adding AI Overviews to search results leading to millions of websites seeing a ~25% drop in organic traffic in the last few months.
In practice it means that we are pretty close to how the ancient Greeks (in the city-state of Athens) defined democracy: ~3% of the population decided - by voting - how the remaining 97% would live their lives.
Tech regulation, or lack thereof, tends to be "biggest pile of money wins", but in this case there's already large anti-Google and anti-AI constituencies which CF may be able to mobilize. Especially in the EU.
I don't feel the title is misleading, but it may be a cultural language difference.
The term 'painkiller' is reserved for strong pain relief and wouldn't include things like ibuprofen. That made me immediately think of a non-opioid pain blocker, not just a pain reliever.
>The term 'painkiller' is reserved for strong pain relief
Having claimed it was cultural, it would have been helpful if you had indicated which culture you felt this applied to.
In the UK I'd say painkillers include ibuprofen and paracetamol. I suppose ibuprofen is also referred to as an anti-inflammatory. Not sure how else one would refer to paracetamol other than with a synonym (analgesic) or a euphemism (pain relief tablet).
Even in the UK it's not necessarily true. I wouldn't be surprised to hear it used that way, but I don't think my peer group in the UK would ever refer to NSAIDs as anything other than their brand or generic names.
I particularly disagree with the parent comment that calls this click bait. The topic's intrinsically interesting to anybody who'd be lured in by that title; it doesn't need "bait" and we all know NSAIDs exist.
The article's particularly good at citing its references inline, which I very much appreciated. Added this author to my RSS reader in fact.
The title is at the very least ambiguous. I expected an essay on the historic invention of NSAIDs/paracetamol as I also understand painkillers to include NSAIDs.
Paracetamol can be referred to as an antipyretic (fever-reducing drug), and it's widely used for that in the same way ibuprofen is used as an anti-inflammatory.
>The term 'painkiller' is reserved for strong pain relief
Maybe there's some very specific, limited medical context where this is the case, but in common parlance it's not at all. Search for "painkiller" in an online shop like Amazon and you'll find a whole lot of paracetamol/Tylenol and various NSAIDs (aspirin/ibuprofen), and the manufacturers of those drugs actually use that specific term.
All dictionaries I've looked at (MW, OED, Cambridge) define painkiller broadly as any drug that relieves pain. Do you have a source for your claim that it's used narrowly? Here's mine: https://www.merriam-webster.com/dictionary/painkiller
Can you explain what the difference is? Would pain-relievers be substances that undo whatever is causing the pain, making them indirect, while painkillers act directly on the pain signals and their transmission?
It might be clickbait, but it's also pretty big news of a fairly universal topic and not in the vein of "This one secret that your doctor doesn't want to know!"
If the FBI tells you wallet A and wallet B belong to the same actor, how do you use that information, so that they can see it on their view, without leaking it to Europol?
We built something very similar back in 2016, on the JVM with unsafe memory and garbage-free data structures to avoid GC pauses.
The dynamic clustering is not too hard; are you able to dynamically undo a cluster when new information shows up?
Are you running separate instances per customer to separate the information they have access to?
Assuming by undoing you mean splitting the cluster:
A linked list can be split in two in O(1). When it comes to updating the roots for all the removed nodes, there is no easy way out, but luckily:
- This process can be parallelized.
- It could be done just once for multiple clustering changes.
- This is a multi-level disjoint set, not all the levels or sub-clusters are usually affected. Upper level clustering, which is based on lower confidence level, can be rebuilt more easily.
If by undoing you mean reverting the changes, we don’t use a persistent data structure. When we need historical clustering, we use a patched forest with concurrent hash maps to track the changes, and then apply or throw them away.
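Roughly, the shape of the splitting idea described above (a simplified sketch with made-up names, not our actual code): each cluster keeps its members on an intrusive doubly linked list, and every member holds a direct pointer to its cluster root.

```typescript
class Member {
  next: Member | null = null;
  prev: Member | null = null;
  constructor(public readonly id: string, public root: Cluster) {}
}

class Cluster {
  head: Member | null = null;
  tail: Member | null = null;

  add(id: string): Member {
    const m = new Member(id, this);
    if (this.tail) { this.tail.next = m; m.prev = this.tail; }
    else { this.head = m; }
    this.tail = m;
    return m;
  }

  // Detach `from` and everything after it into a new cluster.
  splitAt(from: Member): Cluster {
    const other = new Cluster();
    // Cutting the list itself is O(1)...
    if (from.prev) { from.prev.next = null; } else { this.head = null; }
    other.head = from;
    other.tail = this.tail;
    this.tail = from.prev;
    from.prev = null;
    // ...but re-pointing the detached members at their new root is O(k).
    // This is the walk that, as noted above, can be parallelized and batched
    // across multiple clustering changes.
    for (let m: Member | null = other.head; m; m = m.next) m.root = other;
    return other;
  }
}
```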
We use a single instance for all clients, but when one CFD server processes new block data, it becomes fully blocked for read access. To solve this, we built a smart load balancer that redirects user requests to a secondary CFD server. This ensures there are always at least two servers running, and more if we need additional throughput.