I've thought a lot about law-as-code, but my conclusion is always that it hands an advantage to bad actors, who can brute-force the code until they find a way to get away with whatever obviously immoral, harmful thing they want (imagine giga-corps spending a few million on hardware to brute-force tax law - the ROI is probably even better than tunneling through mountains to grab stonks first..).
In the end it reminds me of a quote by Edmund Burke:
"Bad men obey the law only out of fear of punishment; good men obey it out of conscience - and thus good men are often restrained by it, while bad men find ways around it."
I'm wondering if it might be impossible to write a law that prevents the spirit of what we want it to prevent, while not also preventing the spirit of what we don't want to prevent. :)
I recently used it to boot a ~1996 Compaq Presario from CD-ROM to image the hard drive to a USB stick before wiping it for my retro-computing fun :)
It's kind of sad to hear "adult" people claim in all seriousness that it's reasonable for a kernel alone to use more memory than the minimum requirement for running Windows 95 - an entire operating system with kernel, drivers, a graphical user interface and even a few graphical user-space applications.
I got this insight from a previous thread: you can run Linux with a GUI just fine on the same specs as Win95 if your display resolution is 640x480. The framebuffer size is the issue.
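To put rough numbers on that (my own back-of-the-envelope sketch; the resolutions and bit depths are just illustrative):

```python
# Rough size of a single screen buffer (no double buffering, no compositor copies).
def framebuffer_bytes(width, height, bits_per_pixel):
    return width * height * bits_per_pixel // 8

print(framebuffer_bytes(640, 480, 8) // 1024, "KiB")              # Win95-era 640x480 @ 8-bit: 300 KiB
print(framebuffer_bytes(3840, 2160, 32) // (1024 * 1024), "MiB")  # 4K @ 32-bit: ~31 MiB
```

Double or triple buffering multiplies that again, so the jump from 640x480 to a modern resolution alone accounts for a couple of orders of magnitude of memory.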
I mean, why is that a problem? Win95 engineering reflects the hardware of its time, the same way today's software engineering reflects the hardware of our time. There's no ideal here, there's no "this is correct," etc. It's all constantly changing.
This is like car guys today bemoaning the simpler carburetor age, or the car guys before them bemoaning the Model T age of simplicity. It's silly.
There will never be a scenario where you need all this lightweight stuff outside of extreme edge cases, and there's SO MUCH lightweight stuff that it's not even a worry.
Also, it's funny you should mention Win95, because I suspect that reflects your age, but a lot of people here are from the DOS/first Mac/Win 2.0 age, and for that crowd Win95 was the horrible resource pig and complexity nightmare. Tech press and nerd culture back then were incredibly anti-95 for 'dumbing it all down' and 'being slow,' but now it's seen as the gold standard of 'proper computing.' So it's all relative.
The way I see hardware and tech is that we are forced to ride a train. It makes stops, but it cannot stay stopped; it will always go on to the next stop. Wanting to stay at a certain stop doesn't make sense and is in fact counter-productive. I won't go into this, but Linux on the desktop could have been a bigger contender if the Linux crowd and companies had been willing to break a lot of things and 'start over' to be more competitive with Mac or Windows, which at the time did break a lot of things and did 'start over' to a certain degree.
The various implementations of the Linux desktop always came off as clunky and tied to Unix-culture conventions which don't really fit the desktop model, which wasn't very appealing for a lot of people, and a lot of that was based on nostalgia and this sort of idealizing of old interfaces and concepts. I love KDE, but it's definitely not remotely as appealing as the Win11 or macOS GUI in looks and ease of use.
In other words, when nostalgia isn't pushed back on, we get worse products. I see so much unquestioned nostalgia in tech spaces; I think it's something that hurts open-source projects and even many commercial ones.
I agree with this take. Win95's 4MB minimum/8MB recommended memory requirement and 20MHz processor are seen as the acceptable place to draw the line, but there were graphical desktops on the market before it, on systems with 128K of RAM and 8MHz processors. Why aren't we considering Win95's requirements ridiculously bloated?
Yep, at the time the Amiga crowd was laughing at the bloat. But now it's suddenly the gold standard of efficiency? I think a lot of people like to be argumentative because they refuse to understand that they are engaging in mere nostalgia and not in anything factual or logical.
If you can compile your own kernel, though, there is no reason Win95 should be any smaller than your specifically configured kernel - in fact it should be much bigger.
> There will never be a scenario where you need all this lightweight stuff
I think there are many.
Some examples:
* The fastest code is the code you don't run.
Smaller = faster, and we all want faster. Moore's law is over, Dennard scaling has broken down, and smaller feature sizes are getting absurdly difficult and therefore expensive to fab. So if we want our computers to keep getting faster, as we've got used to over the last 40-50 years, the only way to keep delivering that will be to start ruthlessly optimising, shrinking, and finding more efficient ways to implement what we've got used to.
Smaller systems are better for performance.
* The smaller the code, the less there is to go wrong.
Smaller doesn't just mean faster, it should mean simpler and cleaner too. Less to go wrong. Easier to debug. Wrappers and VMs and bytecodes and runtimes are bad: they make life easier but they are less efficient and make issues harder to troubleshoot. Part of the Unix philosophy is to embed the KISS principle.
So that's performance and troubleshooting. We aren't done.
* The less you run, the smaller the attack surface.
Smaller code and less code mean fewer APIs, fewer interfaces, fewer points of failure. Look at djb's decades-long policy of offering rewards to people who find holes in qmail or djbdns. Look at OpenBSD. We all need better, more secure code. Smaller, simpler systems built from fewer layers mean more security, a smaller attack surface, and less to audit.
Higher performance, easier troubleshooting, and better security. There are three reasons.
Practical examples...
The Atom editor spawned an entire class of app: Electron apps, JavaScript on Node, bundled with Chromium. Slack, Discord, VS Code: there are multiple such apps used by tens to hundreds of millions of people now. Look at how vast they are. Balena Etcher is a, what, nearly 100 MB download to write an image to USB? Native apps like Rufus do it in a few megabytes. Smaller ones like USBimager do it in hundreds of kilobytes. A dd command does it in under 100 bytes.
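For a sense of scale: the core job all those tools do is a single copy loop. A minimal sketch (the paths are placeholders; a real tool would add device detection, progress reporting, and verification):

```python
import shutil
import sys

def write_image(image_path, device_path, chunk_mb=4):
    """Stream an image file onto a block device, dd-style."""
    with open(image_path, "rb") as src, open(device_path, "wb") as dst:
        shutil.copyfileobj(src, dst, length=chunk_mb * 1024 * 1024)

if __name__ == "__main__":
    # e.g. python write_image.py disk.img /dev/sdX  (device path is illustrative)
    write_image(sys.argv[1], sys.argv[2])
```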
Now some of the people behind Atom wrote Zed.
It's 10% of the size and 10x the speed, in part because it's a native Rust app.
The COSMIC desktop looks like GNOME, works like GNOME Shell, but it's smaller and faster and more customisable because it's native Rust code.
GNOME Shell is Javascript running on an embedded copy of Mozilla's Javascript runtime.
Just like the dotcoms wanted to disintermediate business - remove middlemen and distributors for faster sales - we could use disintermediation in our software. Fewer runtimes, better and smarter compiled languages, so we can trap more errors and have faster and safer compiled native code.
Smaller, simpler, cleaner, fewer layers, fewer abstractions: these are all good things, and they are desirable.
Dennis Ritchie and Ken Thompson knew this. That's why Research Unix evolved into Plan 9, which puts way more stuff through the filesystem in order to remove whole types of API. Everything's in a container all the time, and the filesystem abstracts the network, the GUI and more. It has under 10% of the syscalls of Linux, the kernel is about 5MB of source, and yet much of what Kubernetes does is already in there.
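Linux's /proc gives a small taste of that filesystem-as-API idea (a simplified illustration only, not Plan 9 itself, where the same trick covers the network, the window system, and more):

```python
# Query process state by reading files instead of calling a dedicated API.
# Works on Linux; Plan 9 applies the same idea far more broadly.
from pathlib import Path

for line in Path("/proc/self/status").read_text().splitlines():
    if line.startswith(("Name:", "VmRSS:")):
        print(line)
```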
Then the Plan 9 team went further: they replaced C too, made a simpler, safer language, embedded its runtime right into the kernel, made binaries CPU-independent, and turned the entire network-aware OS into a runtime to compete with the JVM, so it could run as a browser plugin as well as a bare-metal OS. Now we have ubiquitous virtualisation, so lean into it: separate the domains. If your user-facing OS only runs in a VM then it doesn't need a filesystem or hardware drivers, because it won't see hardware, only virtualised facilities - so rip all that stuff out. Your container host doesn't need to have a console or manage disks.
This is what we should be doing. This is what we need to do. Hack away at the code complexity. Don't add functionality, remove it. Simplify it. Enforce standards by putting them in the kernel and removing dozens of overlapping implementations. Make codebases that are smaller and readable by humans.
Leave the vast bloated stuff to commercial companies and proprietary software where nobody gets to read it except LLM bots anyway.
I wonder if it would be possible to have gone directly to Zed, without going through Atom first (likewise, Plan 9 would never have been the first iteration of a Unix-like OS). "Rewrite it in Rust" makes a lot of sense if you have a working system that you want to rewrite, but maybe there's a reason that "rewrite it in Rust" is a meme and "write it in Rust" isn't. If you just want to move fast, put things up on the screen for people to interact with, and figure out how you want your system to work, dynamic languages with bytecode VMs and GC will get you there faster and will enable more people to contribute. Once the idea has matured, you can replace the inefficient implementation with one that is 10% of the size and 10x the speed. Adding lots of features and then pruning out the ones that turn out to be useless may also be easier than guessing the exact right feature set a priori.
This is true, but it is only generally true.
Even for UV-EPROMs the retention time can be as low as 25 years if kept warm, even with the window sealed correctly.
Magnetic drives are quite a lot better, around 50 years.
CD-RWs vary more widely in their stability: I have ~20-year-old discs that are becoming unreadable because the foil is delaminating from the plastic disc. Meanwhile I have ~40-year-old DS-DD floppies that are still fully readable, even though their medium is in physical contact with the read/write heads (although here, again, storage conditions and especially the different brands/batches seem to make a difference).
The disaster that is "modern UX" is serving no one.
Infantilizing computer users needs to stop.
Computer users hate it - everything changes all the time for the worse, everything gets hidden by more and more layers until it just goes away entirely and you're left with just having to suck it up.
"Normal people" don't even have computers anymore, some don't even have laptops, they have tablets and phones, and they don't use computer programs, they use "apps".
What we effectively get is:
- For current computer users: A downward spiral of everything sucking more with each new update.
- For potential new computer users: A decreasing incentive to use computers "Computers don't really seem to offer anything I can't do on my phone, and if I need a bigger screen I'll use my tablet with a BT keyboard"
- For the so-called "normal people" the article references (I believe the article is really both patronizing and infantilizing the average person): they're effectively people who don't want to use computers. They don't want to know how stuff works, what stuff is, or what stuff can become; they have a problem they cannot put into words and they want to not have the problem, because the moving images of the cat should be on the place with the red thing. - They use their phones, their tablets, and their apps, and their meager and unmotivated desire to do something beyond what their little black mirror allows them is so weak that any obstacle, any at all, even the "just make it work" button, is going to be more effort than they're willing (not capable of, but willing) to spend.
Thing is, doing something in any domain requires some understanding and knowledge of the stuff you're going to be working with. "No, I just want to edit video, I don't want to know what a codec is" - well, the medium is a part of the fucking message! NOTHING you do allows you to work with a subject without any understanding at all of what makes up that subject.
You want to tell stories, but you don't want to learn how to speak? You want to write books, but you don't want to learn how to type, write or spell? Yes, you can -dictate- it, which is, in effect, getting someone competent to do the thing for you.. You want to be a painter, but you don't care about canvas, brushes, techniques, or the differences between oil, acrylic and aquarelle, or colors or composition, you just want to make the picture look good? You go hire a fucking painter, you don't go whining about how painting is inherently harder than it ought to be and how it's elitist that they don't just sell a brush that makes a nice painting. (Well, it _IS_ elitist, most people would be perfectly satisfied with just ONE brush, and it should be as wide as the canvas, and it should be pre-soaked in BLUE color, come on, don't be so hard on those poor people, they just want to create something, they shouldn't have to deal with all your elitist artist crap!) Yeah, buy a fucking poster!
I'm getting so sick and tired of this constant attack on the stuff _I_ use every day, the stuff _I_ live and breathe, and of seeing it degraded to satisfy people who don't care, and never will.. I'm pissed, because _I_ like computers, I like computing, and I like to learn how the stuff works _ONCE_ and gain a deep knowledge of it, so it fits like an old glove and I can find my way around - and then they go fuck it over, time and time again, because someone who does not want to, and never will want to, use computers thinks it's too hard..
Yeah, I really enjoy _LISTENING_ to music, I couldn't produce a melody if my life depended on it (believe me, I've tried, and it's _NOT_ for lack of amazingly good software), it's because I suck at it, and I'm clearly not willing to invest what it takes to achieve that particular goal.. because, I like to listen to music, I am a consumer of it, not a producer, and that's not because guitars are too hard to play, it's because I'm incompetent at playing them, and my desire to play them is vastly less than my desire to listen to them.
Who is most software written for?
- People who hate computers and software.
What's common about most software?
- It kind of sucks more and more.
There's a reason some of the very best software on the planet is development tools, compilers, text editors, debuggers.. It's because that software is made by people who actually like using computers, and software, _FOR_ people who actually like using computers and software...
Imagine if we made cars for people who HATE to drive, made instruments for people who don't want to learn how to play, wrote books for people who don't want to read, and made movies for people who hate watching movies. Any reason to think it's a reasonable idea to do that? Any reason to think that's how we get nice cars, beautiful instruments, interesting books and great movies?
Fuck it. Just go pair your toaster with your "app", or whatever seems particularly important.
I don't understand how the "proof" part works. Like, what part of the input to the "proof generation" algorithm is so inherently tied to the real world that one cannot feed it "fake" data?
My understanding is it can't. The proof is "this photo was taken with this real camera and is unmodified". There's no way to know if the photo subject is another image generated by AI, or a painting made by a human etc.
I remember when Snapchat was touting "send pictures that delete within timeframes set by you!" and all that would happen is you'd turn to your friend and have them take a picture of your phone.
In the above case, the outcome was messy. But with some effort, people could make reasonable-quality "certified" pictures of damn near anything by taking a picture of a picture. Then there is the more technical approach of cracking a device that's physically in your hands so you can sign whatever you want anyway...
I think the aim should be less on the camera hardware attestation and more on the user. "It is signed with their key! They take responsibility for it!"
But then we need:
1. fully deployed and scaled public/private key infrastructure for all users, for whatever they want to do
2. a world where people are held responsible for their actions...
I don’t disagree with including user attestation in addition to hardware attestation.
The notion of there being an "analog hole" for devices that attest that their content is real is correct on its face, but it is a very flawed criticism. Right now, anybody on earth can open up an LLM and generate an image. Anybody on earth can open up Photoshop and manipulate an image. And there's no accountability for where that content came from. But not everybody on earth is capable of projecting an image and photographing it in a way that is indistinguishable from taking a photo of reality. Especially when you take into consideration that these cameras are capturing depth-of-field information, location information, and other metadata.
I think it's a mistake to demand perfection. This is about trust in media and about creating foundational technologies that allow that trust to be restored. Imagine if every camera and every piece of editing software had the ability to sign its output with a description of any mutations. That is a chain of metadata where each link in the chain can be assigned a trust score. If, in addition to technology signatures, human signatures are included, that just builds additional trust. At some point, it would be inappropriate for news or social media not to use this information when presenting content.
As others have mentioned, C2PA is a reasonable step in this direction.
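To make the "signed chain of mutations" idea concrete, here's a toy sketch (an HMAC stands in for the per-device asymmetric signatures a real scheme like C2PA uses, and the manifest format is invented for illustration):

```python
import hashlib
import hmac
import json

SECRET = b"device-or-editor-key"  # placeholder; real systems use per-device asymmetric keys

def sign_step(prev_sig, actor, action, content):
    """Append one link to the provenance chain: who did what to which bytes."""
    record = {
        "actor": actor,
        "action": action,
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "prev": prev_sig,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return record

capture = sign_step("", "camera-1234", "capture", b"raw sensor bytes")
crop = sign_step(capture["signature"], "editing-app", "crop", b"cropped bytes")
print(json.dumps([capture, crop], indent=2))
```

A verifier walking that chain can then weight each link by how much it trusts the signer, which is the "trust score" idea above.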
Perhaps if it measured depth it could detect "flat surface" and flag that in the recorded data. Cameras already "know" what is near or far simply by focusing.
I wonder if a 360 degree image in addition to the 'main' photo could show that the photo was part of a real scene and not just a photo of an image? Not proof exactly but getting closer to it.
If someone cared enough to spend money on this I think it would be an easy to medium difficulty project to use an FPGA and a CSI-2 IP to pretend to be the sensor. Good luck fixing that without baking a secure element into your sensor.
My music listening speakers have two built in power amplifiers (one for the tweeter, one for the woofer) and have a DSP feeding right into a DAC, feeding right into those amplifiers.
There's a control box that comes with them, and when you plug a calibrated microphone into that box and put it in the listening position, you can have it run some frequency sweeps, one speaker at a time. It then calculates a correction curve for each speaker, based on the actual response of that particular speaker in that particular room, and programs the curve into the speaker's DSP.
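Conceptually the correction step is simple; here's a toy sketch with made-up per-band numbers (real systems like GLM work at much finer resolution and also correct timing/phase, not just level):

```python
# Per-band room correction: aim for a flat target, cut peaks, boost dips,
# and cap the boost so the amplifiers aren't driven into clipping.
measured_db = {100: -4.0, 315: 2.5, 1000: 0.0, 3150: -1.5, 10000: 1.0}  # made-up sweep results
TARGET_DB, MAX_BOOST_DB, MAX_CUT_DB = 0.0, 6.0, 12.0

correction = {
    freq: max(min(TARGET_DB - level, MAX_BOOST_DB), -MAX_CUT_DB)
    for freq, level in measured_db.items()
}
print(correction)  # e.g. {100: 4.0, 315: -2.5, ...} -> programmed into each speaker's DSP
```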
It's like night and day toggling the calibration on and off while listening to music.
And yes, as he says, the best hi-fi is just professional audio gear..
My music listening setup is simply a USB->AES converter box that feeds directly into the monitors, the monitors are a pair of Genelec 8050, and then the GLM box and volume knob. Never heard "hi fi" coming even close to it, not at the price, not at five times the price.
Same goes for headphones: you can't get much better than the simple and cheap DT990 (or 770 if you want them closed). Sure, you can pay about 10 times as much for some Sennheiser HD800s, and those are pretty good, and I do have a pair of HE1000se, which are not only cheaper but actually sound better too. But I'd never recommend that anyone who's not as stupid as myself buy anything "above" the DT990.. And yeah, I EQ my headphones with a dbx 231x two-channel 31-band EQ, and while that's not as scientific as the calibrated monitors, for a headphone listening experience it gets pretty good.