That's why you'd want to be able to replace the mainboard, screen, keyboard, speakers, trackpad, etc., and not just the RAM. Like https://shop.mntre.com/products/mnt-reform, but presumably easier for non-technical people to use.
Don't get me wrong, I like the approach, but if you want a laptop, it's the wrong tradeoff and not one that 90% of users will take. Sure, _some_ will choose this, and small companies may well sustain themselves on it, but it probably won't make a dent in e-waste numbers, and you won't see iPhone-like adoption numbers, relegating it to just a niche product.
I built my last company on OpenBSD. It was easy to understand the entire system, and secure-by-default (everything disabled) is the right posture for servers. Pledge and unveil worked brilliantly to restrict our Go processes to specific syscall sets and files. The firewall on OpenBSD is miles better to configure than iptables. I never had challenges upgrading them--they just kept working for years.
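For anyone who hasn't seen them, the entire model fits in two calls. A minimal sketch in C (the paths and promise strings are illustrative; golang.org/x/sys/unix exposes the same Pledge/Unveil calls for Go):

    #include <err.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Filesystem visibility: read-only under /etc/app, read/write/create
           under /var/app. Everything else effectively stops existing. */
        if (unveil("/etc/app", "r") == -1 || unveil("/var/app", "rwc") == -1)
            err(1, "unveil");
        if (unveil(NULL, NULL) == -1)  /* lock the unveil list */
            err(1, "unveil");

        /* Promise only stdio plus the file ops we need; any other
           syscall from now on kills the process. */
        if (pledge("stdio rpath wpath cpath", NULL) == -1)
            err(1, "pledge");

        puts("running restricted");
        return 0;
    }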
It's barely usable by itself, but I don't think that's an inherent problem of
seccomp-bpf; rather, it's the lack of libc support. Surely the task of
"determine which syscalls are used for feature X" belongs in the software
that decides which syscalls to use for feature X.
The "what does the equivalent of pledge(stdio) actually mean?" doesn't have to actually be on the kernel side. But it's complicated by the fact that on Linux, syscalls can be made from anywhere. On OpenBSD syscalls are now only allowed from libc code.
So even if one uses Cosmopolitan libc, if you link to some other library, that library may also do direct syscalls. And which syscalls it does, and under which circumstances, is generally not part of the ABI promise. So this can still break between semver patch version upgrades.
For example, if a library used to not write debug logs by default, but then changed to write them, but to /dev/null, there's no way for the library to inform the application code of that change, much less update it.
If you ONLY link to libc, then what you said will work. But if you link to anything else (including using LD_PRELOAD), then all bets are off. And at the very least you'll also be linking to libseccomp. :-)
If libc were the only library in existence, then I'd agree with you 100%.
> So even if one uses Cosmopolitan libc, if you link to some other library,
that library may also do direct syscalls. And which syscalls it does, and
under which circumstances, is generally not part of the ABI promise. So this
can still break between semver patch version upgrades.
Well but isn't that a more general problem with pledge? I can link to
libfoo, drop rpath privileges, and it'll work fine until libfoo starts
lazily loading /etc/fooconf (etc.)
A nice thing about pledge is that it's modularized well enough that such
problems don't occur very often, but I'd argue it's no less common an
issue than "libfoo started doing raw syscalls." The solution is also the
same: a) ask libfoo not to do it, or b) isolate libfoo in an auxiliary
process, or c) switch to libbar.
> And at the very least you'll also be linking to libseccomp. :-)
libseccomp proponents won't tell you this, but you can in fact use seccomp
without libseccomp, as does Cosmopolitan libc. All libseccomp does is
abstract away CPU architecture differences, which a libc already has to do
by itself anyway.
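To make that concrete, here's a minimal allow-list filter with no libseccomp, sketched for x86-64 (the arch check is exactly the part libseccomp would handle for you; a real filter needs a much longer list, since libc makes more syscalls than you'd expect):

    #include <err.h>
    #include <linux/audit.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>
    #include <stddef.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Jump-if-equal to "allow", else fall through to the next check. */
    #define ALLOW(nr) \
        BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, (nr), 0, 1), \
        BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW)

    int main(void) {
        struct sock_filter filter[] = {
            /* Refuse to run this x86-64 filter on any other arch --
               this is the part libseccomp abstracts away. */
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, arch)),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 1, 0),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            ALLOW(__NR_write),
            ALLOW(__NR_exit_group),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
        };
        struct sock_fprog prog = {
            .len = sizeof(filter) / sizeof(filter[0]),
            .filter = filter,
        };
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1)
            err(1, "prctl(NO_NEW_PRIVS)");
        if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) == -1)
            err(1, "prctl(SECCOMP)");
        write(1, "still alive\n", 12);
        return 0;  /* exit_group is allowed; any other syscall kills us */
    }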
No, for two reasons: 1) pledge() lets you state high-level intent ("I just want to do I/O on what I already have"), and it doesn't matter if new syscalls "openat2" (should be blocked) or "getrandom" (should be allowed) are created (see the `newfstatat` example with printf). And 2) OpenBSD limits syscalls to be made from libc, and libc & kernel are released together. Other libs need to go through libc.
Yes, if libfoo starts doing actual behavioral changes like suddenly opening files, then that's inherently indistinguishable from a compromised process. But I don't think that we need to throw out the baby with that bathwater.
And it's not just about libfoo doing raw syscalls. `unveil()` allows blocking off the filesystem. And it'll apply to open, creat, openat, openat2, unlink, io_uring versions of the relevant calls (if OpenBSD had it), etc…
But yes, if libc could ship its best-effort pledge()/unveil() that also blocks any syscalls it doesn't know about (in case the kernel is newer than the filter), that'd be great. But this needs to be part of (g)libc.
Though another problem is that it doesn't help with child processes that have a statically linked newer libc and quite reasonably want to use the newer syscalls the kernel offers. OpenBSD decided to simply not support statically linked libc, but musl (and Cosmopolitan libc?) have that as an explicit goal.
So yeah, because they mandate syscalls from libc, ironically OpenBSD should have been able to make pledge/unveil a libc feature using a seccomp-like API, or hell, implemented entirely in user space. But Linux, which has that API, kinda can't.
(ok, so I don't know how strictly OpenBSD mandates the exact system libc, so maybe what I just said would open a vulnerability)
> 1) pledge() lets you state high-level intent ("I just want to do I/O on
what I already have"), and it doesn't matter if new syscalls "openat2"
(should be blocked) or "getrandom" (should be allowed) are created (see the
`newfstatat` example with printf).
You can do this with seccomp if you're libc. A new syscall is of no
consequence for the seccomp filter unless libc starts using it, in which
case libc can just add it to the filter. (Of course the filter has to be an
allow-list.)
> And 2) OpenBSD limits syscalls to be made from libc, and libc & kernel are
released together. Other libs need to go through libc.
That avoids one failure mode, but I think you assign too much importance to
it. If your dependency uses a raw syscall (and let's be honest this isn't
that common), you'll see your program SIGSYS and add it manually.
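Concretely, the debug loop can look like this sketch (assuming glibc on
x86-64; arch check omitted for brevity): while developing, make the filter's
default action SECCOMP_RET_TRAP instead of a kill action, and a SIGSYS
handler tells you which number to add.

    #define _GNU_SOURCE
    #include <err.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>
    #include <signal.h>
    #include <stddef.h>
    #include <stdio.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* Debug-only handler: snprintf isn't async-signal-safe, but it's
       fine while you're just hunting for missing filter entries. */
    static void on_sigsys(int sig, siginfo_t *info, void *ctx) {
        char msg[64];
        int n = snprintf(msg, sizeof msg, "blocked syscall %d\n",
                         info->si_syscall);
        write(2, msg, n);  /* write is allow-listed below */
        _exit(1);          /* so is exit_group */
    }

    int main(void) {
        struct sigaction sa = { .sa_sigaction = on_sigsys,
                                .sa_flags = SA_SIGINFO };
        if (sigaction(SIGSYS, &sa, NULL) == -1)
            err(1, "sigaction");

        struct sock_filter filter[] = {
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_write, 3, 0),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_exit_group, 2, 0),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_rt_sigreturn, 1, 0),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_TRAP),   /* -> SIGSYS */
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = { .len = sizeof(filter) / sizeof(filter[0]),
                                   .filter = filter };
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1 ||
            prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog) == -1)
            err(1, "seccomp");

        getpid();  /* not allow-listed: handler prints its number */
        return 0;
    }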
If you have so many constantly changing dependencies that you can't
tell/test which ones use raw syscalls and when, you have no hope of
successfully using pledge either.
> But I don't think that we need to throw out the baby with that bathwater.
We agree here, just not on which baby :)
> And it's not just about libfoo doing raw syscalls. `unveil()` allows
blocking off the filesystem.
You're right, seccomp is unsuitable for implementing unveil because it can't
inspect the memory that pointer arguments reference. I believe Cosmopolitan
uses Landlock for it.
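For reference, an unveil-style restriction via Landlock looks roughly like
this (assuming Linux 5.13+; the path is illustrative, and access types you
don't list in handled_access_fs stay unrestricted):

    #include <err.h>
    #include <fcntl.h>
    #include <linux/landlock.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        /* Roughly unveil("/etc/myapp", "r"): these are the access types
           the ruleset will restrict; anything not listed stays allowed. */
        struct landlock_ruleset_attr attr = {
            .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
                                 LANDLOCK_ACCESS_FS_READ_DIR |
                                 LANDLOCK_ACCESS_FS_WRITE_FILE,
        };
        int ruleset = syscall(SYS_landlock_create_ruleset,
                              &attr, sizeof(attr), 0);
        if (ruleset == -1)
            err(1, "landlock_create_ruleset");

        /* Grant read access beneath /etc/myapp only. */
        struct landlock_path_beneath_attr beneath = {
            .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE |
                              LANDLOCK_ACCESS_FS_READ_DIR,
            .parent_fd = open("/etc/myapp", O_PATH | O_CLOEXEC),
        };
        if (beneath.parent_fd == -1 ||
            syscall(SYS_landlock_add_rule, ruleset,
                    LANDLOCK_RULE_PATH_BENEATH, &beneath, 0) == -1)
            err(1, "landlock_add_rule");

        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1 ||
            syscall(SYS_landlock_restrict_self, ruleset, 0) == -1)
            err(1, "landlock_restrict_self");

        /* From here on, file writes anywhere and reads outside /etc/myapp
           fail with EACCES, no matter which syscall variant is used. */
        return 0;
    }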
> Though another problem is that it doesn't help child processes with a
statically compiled newer libc
If you're trying to pledge a program written by somebody else, expect
problems on OBSD too because pledge was not designed for that. (It can work
in many cases, but that's kind of incidental.)
If it's your own program, fine, but that means you're compiling your binaries
with different libcs and then wat.
> So yeah, because they mandate syscalls from libc, ironically OpenBSD
should have been able to make pledge/unveil a libc feature using a
seccomp-like API, or hell, implemented entirely in user space. But Linux,
which has that API, kinda can't.
My take is "it can, with caveats that don't matter in 99% of the cases
pledge is useful in." (Entirely in user space no, with seccomp yes.)
But only in very small sandboxes, right? Yes, seccomp could potentially be used for your JIT/interpreter sandbox. And because it inherently executes untrusted input, that's definitely the most important place.
But compare how many applications execute untrusted remote programs to how many programs have had security vulnerabilities. Or indeed, compare how much code.
What percentage of code runs in chrome/firefox's sandbox? 0.0001%?
Have you tried to create a seccomp ruleset for a real program? I have. There are so many variations between machines and code paths that you'll necessarily need to leave wide-open doors in your policy. Sure, the more you disable, the more "luck" you manufacture in case of a bug, possibly preventing exploitation. But no, it's not fit for purpose outside these extremely niche use cases.
Linux is far too bloated to be run as a secure system, and the attack surface of any Linux distro, due to the number of kernel modules loaded by default, is very big.
> I built my last company on OpenBSD. It was easy to understand the entire system, and secure-by-default (everything disabled) is the right posture for servers.
That really depends. You could argue a router is a server. OpenWRT has the default of WiFi off for security, which means that if the config is somehow hosed and you have to hard reset the router, you now have an inaccessible brick unless you happen to have a USB-Ethernet adapter on you.
Sensible defaults are much, much better than the absolutist approach of "disable everything".
Edit: it's so funny to know that all the people slamming the downvote have never hit the brick wall of a dumb default. I hope you stay blessed like that!
> Edit: it's so funny to know that all the people slamming the downvote have never hit the brick wall of a dumb default.
I'll bite. OpenBSD and OpenWRT are different things, and I'm honestly surprised to hear that tech matters enough to you to set up OpenWRT but not enough to own a desktop (or a laptop that doesn't skimp on ports).
They are, but Linux vs. BSD doesn't matter all that much when the question is the meta one of how to decide defaults.
Funnily enough, I feel a BSD would be much better suited to modems/routers, if it weren't for HW WiFi support. Yes, I know you can separate your routing and your access point onto different devices.
At any rate I'm just pointing out that absolutism is rarely the right answer. It's also pretty telling that people actually went through my comment history to downvote a few unrelated recent comments. People get angry when they have to adjust their assumptions.
As far as computing devices go, I prefer not lugging around a plastic brick. And one is bound to either lose or forget a dongle. In which case you get boned by OpenWRT's dumb default.
The reason for that default is that if they set up an open OpenWRT WiFi (or one with a default password, think "OpenWRT2025"), then in that 5-minute window before you change it, some wardriver might log in and mess with your network.
Obviously the chances of that are rather insignificant. And they could generate a default password based on the hardware. For the real security nuts they could tell them to build an image without default-on WiFi (currently they do the inverse).
I'm not comparing those, I'm comparing an absolutist vs. a flexible attitude.
People are downvoting because I'm making them realize they have to rethink their assumptions, and it is less painful to attack the messenger than to actually do so. People these days are generally bad at not tying their identity to things and not taking it personally.
Vision Pro is a perfect example of a greed-driven failure. Apple pissed off both devs and megacorps by keeping the ecosystem closed, fighting tooth and nail in court so that every app had to pay them 30% and couldn't be installed without their blessing, and unsurprisingly very few massive companies (or hackers) wanted to support Apple's fledgling walled garden. Without software, it's just a gadget.
Tesla announced they are adding it this week. Ford's CEO expressed glee at GM removing it. There isn't a CarPlay App Store nor downloads to get 30% from (and if there were, they'd arguably be enabled by Apple's platform, as we aren't in the habit of subscribing to or buying apps for our cars today), and while we don't know the licensing terms from the GM removal, it sounded like privacy violations and extra subscription revenue were their motivations for dropping CarPlay. That doesn't sound consumer-friendly on the carmakers' part at all. I think this field doesn't line up with the overall thesis, squint as we might.
>Tesla Inc. is developing support for Apple Inc.’s CarPlay system in its vehicles, according to people with knowledge of the matter, working to add one of the most highly requested features by customers.
>The carmaker has started testing the capability internally, according to the people, who asked not to be identified because the effort is still private.
Tesla's news is interesting. A good question to ask here is who's in control in the Tesla x CarPlay relationship. The answer is obviously Tesla (Apple can't dictate anything, and Tesla gets to boss them around).
That's very different from a Toyota x Apple partnership.
So no, those are two different scenarios. The era of Apple controlling the platform is gone. (Except for legacy ones)
People buy Tesla for Tesla and not because of CarPlay. But CarPlay is a purchasing-decision factor for other brands, which means a power imbalance exists.
So this is a classic game theory situation. You want all participants (Toyota, Honda, Ford) to cooperate (not have CarPlay) and not defect. So participants watch each other's moves.
If they stick together, all of them stand to win.
If one defects, in the short term they might win, but in the long term Apple will seek to commoditize the car maker.
> People buy Tesla for Tesla and not because CarPlay.
They increasingly just don't buy Tesla. Strong growth in that segment lately.
I recall, though, back in 2021 we rented one as a test drive. The UX was so horrific I did an immediate 180 on that idea. Hard pass. CarPlay might've saved that sale; their stock infotainment is trash.
I wouldn't be surprised if they go all in on CarPlay Ultra near the end.
Oh, I'm aware. I have no love for Tesla. I was making an observation of what I see around me (plenty of new Teslas on the road even after Elon's shenanigans).
Huh? Apple does not charge for CarPlay. Some automakers are trying to give them the boot, but that has nothing to do with Apple's greed and everything to do with the automakers' greed. They want their own ecosystem of apps.
I'll let you in on a secret. Ask yourself what the business case of CarPlay is. "Why" should Apple do CarPlay. Put yourself in the shoes of a VP at Apple pitching CarPlay. Are they saying "let's invest millions of dollars in inventing the UI for cars and give it away for free, for .. goodwill?"
Nope, the slide deck would say 'Cars are the next computing platform. That's where most people spend time. So imagine if we (Apple) were meaningfully present there .. and that's why we need to invest in it.'
So, yes, CarPlay is a move to control another computing form factor. One they do not manufacture (like TVs and Apple TV) ... and unfortunately for them, car makers are wiser this time around.
A simpler explanation is that all of these little conveniences add up to keeping customers firmly embedded in the ecosystem, repeatedly buying new iPhones. And sure, if we can offer another environment where an App Store purchase can be used, great.
> unfortunately for them, car makers are wiser this time around
Maybe. Ditching CarPlay does not currently seem like the wise decision, given how many of us have decided that omitting it is a deal killer. I love my Lightning, but I do not for one nanosecond trust that Ford would keep the app ecosystem on my truck running as long as Apple will keep iOS working on iPhones.
I'd argue a missing social safety net, combined with grossly inadequate public education, no job opportunities, unaffordable healthcare and housing, and a prison system designed to punish, drives people to take drugs. Drug addiction is just the symptom. Let's focus on giving people real hope and value and meaning in their lives, from birth to death, instead of killing people, without trial, a world away.
The key value of Pebble to me was its incredible C SDK that made it super easy to write custom apps for it. I remember way back I got full turn-by-turn navigation working on it.
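For flavor, a complete app skeleton was about this small (from memory, so treat the details as approximate; the text is a stand-in for what the navigation app drew):

    #include <pebble.h>

    static Window *s_window;
    static TextLayer *s_text;

    static void window_load(Window *window) {
        Layer *root = window_get_root_layer(window);
        s_text = text_layer_create(layer_get_bounds(root));
        text_layer_set_text(s_text, "Turn left in 200 m");
        layer_add_child(root, text_layer_get_layer(s_text));
    }

    static void window_unload(Window *window) {
        text_layer_destroy(s_text);
    }

    int main(void) {
        s_window = window_create();
        window_set_window_handlers(s_window, (WindowHandlers) {
            .load = window_load,
            .unload = window_unload,
        });
        window_stack_push(s_window, true);  /* animated */
        app_event_loop();
        window_destroy(s_window);
    }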
“Normal exposure” is doing some heavy lifting in that sentence. Presumably having all your daily texts arrive on such paper wouldn’t be “normal exposure,” which if I recall correctly is handling a receipt for a few seconds a day with only your fingertips.
People make mistakes using bad (but popular) tech all the time. Remember MongoDB when every app needed to be NoSQL for web-scale? Remember when everything was event-driven using Kafka? Remember when every left-pad needed its own microservice?
When large organizations (Facebook, Google, LinkedIn, Amazon) start pushing it, when popular developers blog about it, when big conferences run talks on it, and when lots of marketing, ads, and sales funded by ad revenue or VC dollars start pushing a tech as "amazing," it gets adopted by CTOs, becomes a hiring criterion, and suddenly no one wants to admit it's crap because their entire career depends on it... or, more generously, they don't know any better because they haven't hit the painful edges in production yet, or they haven't seen how simple things could be with a different architectural decision.
Something being popular doesn’t mean it’s well-suited for a common use-case. Very often it isn’t.
direnv does exactly what you describe (and a lot more) using flake.nix. cd into the directory and it automatically runs. I use it in every single project/repository to set environment variables and install project-specific dependencies locked to specific versions.
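For anyone who hasn't seen the setup, it's roughly this (assuming the nix-direnv hook; the package names and env var are just examples):

    # .envrc
    use flake

    # flake.nix
    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.11";
      outputs = { self, nixpkgs }:
        let pkgs = nixpkgs.legacyPackages.x86_64-linux;
        in {
          devShells.x86_64-linux.default = pkgs.mkShell {
            packages = [ pkgs.go pkgs.nodejs ];  # pinned via flake.lock
            MY_PROJECT_ENV = "dev";              # plain env vars work too
          };
        };
    }

Run `direnv allow` once, and from then on entering the directory loads that environment and leaving it unloads it.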