For more context: we have historically required the crypto instructions because
1. We make heavy use of sha256 in blobfs, where such content-hashing is on the critical path of loading all binaries, and we believe that in the absence of hardware acceleration, product owners will likely find that their performance requirements are difficult to attain and may seek to resolve those issues by compromising core security invariants, which we do not wish to encourage
2. For protecting mutable storage, both reads and writes go through AES-XTS, either in zxcrypt or fxfs-crypt. For similar reasons, we want AES instructions to be accelerated, so that protection of user data is not something that we are motivated to compromise for performance reasons.
3. In any product, we expect to do TLS for a variety of purposes (software updates, time sync, communications with servers, etc.), and don't want poor TLS performance to be a reason that people are later motivated to avoid TLS/use weak ciphersuites/etc.
Since we do not want product owners to be motivated to disable fairly fundamental security features of the system, we have endeavored to ensure that the hardware baseline is likely to adequately support performance requirements, including through the requirement of the ARM crypto instructions on such boards.
-- https://fuchsia-review.googlesource.com/c/fuchsia/+/808670?t... marking the relevant CPU as not supporting crypto instructions.
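To make point 2 above concrete, here is a minimal sketch of AES-XTS sector encryption in the general style of what zxcrypt/fxfs-crypt provide -- illustrative only, assuming the third-party `cryptography` package; the key size, sector size, and tweak derivation here are assumptions for demonstration, not Fuchsia's actual parameters.

```python
# Minimal AES-XTS sector-encryption sketch (illustrative; not the actual
# zxcrypt/fxfs-crypt implementation). Requires the third-party `cryptography`
# package. AES-256-XTS takes a 512-bit key (two 256-bit halves) plus a
# per-sector tweak, so every disk sector encrypts and decrypts independently.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 4096          # assumed sector size, for illustration
key = os.urandom(64)        # 512-bit key for AES-256-XTS

def xts_encrypt_sector(sector_index: int, plaintext: bytes) -> bytes:
    # The tweak is conventionally derived from the sector number.
    tweak = sector_index.to_bytes(16, "little")
    enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def xts_decrypt_sector(sector_index: int, ciphertext: bytes) -> bytes:
    tweak = sector_index.to_bytes(16, "little")
    dec = Cipher(algorithms.AES(key), modes.XTS(tweak)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

sector = os.urandom(SECTOR_SIZE)
assert xts_decrypt_sector(7, xts_encrypt_sector(7, sector)) == sector
```

Without AES instructions, each of those update() calls falls back to (hopefully constant-time) software AES on every read and write, which is exactly the cost the quoted comment is worried about.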
Why wouldn't they catch this before they started work on porting this device to Fuchsia? Why isn't this a problem with the existing OS (Android? ChromeOS?)?
Perhaps they didn't think the impact was going to be as bad as it was, and it turned out they couldn't meet their standards, as they had hoped, without requiring crypto hardware.
Or, perhaps, they simply changed their minimum security standards and had to raise requirements as a result.
Security people very often underestimate the performance impact of encryption (especially on embedded devices), so the former would not be surprising to me. Without acceleration, AES hurts and is incredibly hard to implement without opening up side-channel attacks. SHA is also very slow without acceleration. Both have simple, more modern alternatives that are a lot faster on embedded (ChaCha and Blake2 respectively).
Of course, they are also often unwilling to compromise on their chosen security practices to adapt to the situation, even if that just means switching to lightweight/embedded-friendly cryptography algorithms with similar security levels.
Fyi, blake3 was released in 2019 and should probably be used over blake2 unless you have some strong reason not to. It's basically a reimplementation of blake2 with performance tweaks.
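As a rough way to see the software-speed gap being discussed, here is a small stdlib-only benchmark sketch comparing SHA-256 against BLAKE2 (BLAKE3 needs a third-party package, so it's omitted). The buffer size and round count are arbitrary; absolute numbers vary wildly by CPU, and on cores with SHA extensions the ordering can flip.

```python
# Rough single-threaded hash throughput comparison using only the stdlib.
# On CPUs without SHA extensions, blake2b typically wins; with hardware
# SHA-256 the ordering can flip. Numbers are illustrative only.
import hashlib
import time

BUF = b"\x00" * (16 * 1024 * 1024)   # 16 MiB of input
ROUNDS = 10

for name in ("sha256", "blake2b", "blake2s"):
    hash_fn = getattr(hashlib, name)
    start = time.perf_counter()
    for _ in range(ROUNDS):
        hash_fn(BUF).digest()
    elapsed = time.perf_counter() - start
    mib_per_s = ROUNDS * len(BUF) / (1024 * 1024) / elapsed
    print(f"{name:8s} {mib_per_s:8.1f} MiB/s")
```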
This decision reeks of DJB-hating. A lot of NIST types really hate the fact that there are now first-class hashes and symmetric ciphers that perform well without crypto-specific accelerators. This makes bugdooring a lot harder. And then the same guy went and published a curve (Curve25519) and a signature scheme over it (Ed25519) whose implementations are nearly impossible to fuck up in ways that leak your secrets.
Lesson to be learned: don't piss off supergenius mathematicians.
BLAKE vs. NIST is irrelevant to the performance debate, because no matter what you do, they're both going to go approximately the same speed. The debate is why hardcore 256+ bit crypto is needed at all by the operating system primitives on a simple device living in my home, when doing that goes literally 30x slower than using something like crc32 or xxhash. Not to mention that if you're pushing for hard crypto Merkle tree stuff, you'd be a fool to do it without things like ECC RAM too, since cosmic rays will blow up your data structures. So you're not just making embedded go slow. You're making it less reliable too. Just to wield ironclad remote control of people's personal possessions.
Is the extra 100 bits of security on your cryptography standard what's actually going to stop them?
In reality, most hacks are stopped by stupid stuff, like closing down ports and proper key management hygiene, not by using completely-brute-force-proof encryption standards. I'll make you a deal: do everything else on that compute-limited embedded device to a practically-uncrackable standard, then you can upgrade to 256 bit crypto. Some devices get there, but no consumer electronics.
Encrypting the filesystem on that specific device is also completely irrelevant to my security as a user - it has more to do with Google protecting itself from copies and users being able to add features.
Third parties who want to exploit the relationship between Google and its users are certainly not users. I don't think it's productive to question their intentions. The better question to be asking here is how or why we allowed our homes to become yet another front in the war between Google and its adversaries. Their business model doesn't jibe with Third Amendment culture. I reject any world in which guests within my home have a legitimate need for their own military-grade encryption. That is not the future I'm rooting for.
Someone who feels comfortable letting a tech company control a microphone in their home (and by proxy all the agencies that company is obligated by law to serve) probably isn't too concerned about the local neighborhood prankster getting in on the action too. I think the great fear they have is that someone will come along and install a daemon on that speaker which charges them rent.
> Not to mention that if you're pushing for hard crypto Merkle tree stuff, you'd be a fool to do it without things like ECC RAM too, since cosmic rays will blow up your data structures. So you're not just making embedded go slow. You're making it less reliable too. Just to wield ironclad remote control of people's personal possessions.
Can you give me an example of how a bit-flip would blow up a cryptographic merkle tree, but not blow up a non-cryptographic file system?
> Can you give me an example of how a bit-flip would blow up a cryptographic merkle tree, but not blow up a non-cryptographic file system?
Any bit change anywhere in the Merkle tree immediately destroys its ability to self-verify, because the integrity is ruined. Basically the entire tree is dead because you can't add new nodes.
Non-cryptographic file systems? A bit-flip changes one bit in an isolated place. That one place is ruined but can be easily recovered with checksums or strategies like RAID parity. Detection is harder, though.
Either can be recovered at the block level, though.
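To make the question above concrete, here is a toy Merkle-tree sketch (illustrative only; not blobfs's actual on-disk format). Flipping one bit in one block changes that block's hash and every hash on the path up to the root, so verification against the stored root fails -- but the mismatch also pinpoints which block is corrupt, which is what block-level recovery relies on.

```python
# Toy Merkle tree over fixed-size blocks (illustrative; not blobfs's format).
# One flipped bit invalidates the affected leaf hash and every hash on the
# path to the root: verification fails, but the bad block is identifiable.
import hashlib

BLOCK = 4

def merkle_root(blocks):
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

data = bytearray(b"abcdefghijklmnop")
blocks = [bytes(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]
root = merkle_root(blocks)

data[5] ^= 0x01                            # flip a single bit in block 1
blocks2 = [bytes(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]
assert merkle_root(blocks2) != root        # the stored root no longer matches
bad = [i for i, (a, b) in enumerate(zip(blocks, blocks2)) if a != b]
print("corrupted block(s):", bad)          # -> [1]
```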
I use Merkle trees as a solution for a project at work, and they have brittleness issues related to data integrity that we have to design solutions around specifically.
That's not a big deal on a multi-core 1 GHz processor. It's a much bigger deal on a single-core 200 MHz microcontroller that doesn't have branch prediction or any of those fun optimizations.
I know you're pointing at the general class of hardware performance enhancements including branch prediction, hardware prefetching, superscalar execution, pipelining, etc. But out of all of them, if you get a speedup from any advanced branch prediction beyond just always assuming branches are taken, then I'd be very worried your cryptographic implementation is broken.
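As a small illustration of why branch behavior matters here, this is a sketch of the branch-free comparison style that careful crypto code uses (the stdlib's hmac.compare_digest serves the same purpose). The loop never exits early, so timing and branching don't depend on where the secret inputs differ -- with the caveat that Python itself isn't truly constant-time; this only illustrates the idea.

```python
# Branch-free ("constant-time") byte comparison sketch. Instead of returning
# at the first mismatch, differences are OR-accumulated, so the runtime does
# not depend on where (or whether) the secrets differ. The stdlib equivalent
# is hmac.compare_digest.
import hmac

def ct_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):        # lengths are usually not secret
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y           # no data-dependent branch inside the loop
    return diff == 0

assert ct_equal(b"tag-1234", b"tag-1234")
assert not ct_equal(b"tag-1234", b"tag-1235")
assert hmac.compare_digest(b"tag-1234", b"tag-1234")
```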
This is just speculation that they didn't catch this before.
Maybe they hadn't yet decided whether to lower their security stance for these specific devices to one similar to the existing OS, or to skip these devices entirely. Maybe management was pushing for them to find a solution either way. We can speculate in a lot of different ways.
...what? There were FOUR HUNDRED people working on this thing at G? Quite literally the opposite of the anecdote from the "Androids" book where the Sony (?) execs were confused when the Danger, Inc guys told them Brian Swetland wrote all the code for the T-mobile Sidekick by himself (whereas Sony (?) had teams and teams of people for the same stuff in their offerings).
The project seemed super bloated. I remember they had at least one person who seemed to be working full time on a clone of vim, which iirc was considered part of the OS.
You’re referring to Raph Levien’s work on Xi [0]. Not really just a vim clone. In Fuchsia, iirc, it would have been the basis of all text editing services. If nothing else, it seems to have popularized rope data structures [1] for newer text editors.
No doubt the project generated some interesting research and was pushing the boundaries, but it's an explanation as to how a smart speaker OS could take 400 developers.
Fuchsia was not meant for just embedded devices. I recall there being a sentence in an old description (which I can no longer find mention of) which explicitly called out PCs and smartphones as targets. That’s likely still the ambition.
The full picture is apparently that after years of full-time work by a 400-person team, Fuchsia is more like a "nothing OS" instead of "a smart speaker OS"...
It's built by Google, so I'm not sure whether it will go mainstream or just get killed. But calling it a "nothing OS" is a big understatement. When was the last time a new OS was created from scratch? And if we're comparing it to Linux/Windows/Mac, I'm sure a lot of people work on those too.
There are college classes where you write an OS from scratch in a semester. The basics of a modern OS are not difficult. The vast majority of the effort in any OS is all the different hardware you need to support.
If we're talking about commercial operating systems then Windows NT comes to mind. Here's what they say:
"In 1988, when the project first began, the team comprised about 20 engineers. By the time the first version of Windows NT shipped five years later, the team had expanded to about 150 engineers as it battled constraints and tradeoffs."
But I think comparing these team sizes is not very meaningful without at least a cursory look at what is part of the project and what isn't. Is it just the kernel? Does it include an entire graphics pipeline, a UI toolkit, a network stack, device drivers?
He probably meant "usable OS fit for mass market adoption". Of course one person can develop a bare bones OS, but this will be missing so many things compared to established systems, while providing too few advantages, that any wide adoption is impossible. Even Microsoft, with well over a thousand (IIRC) Windows developers, had to scale back their Longhorn plans substantially.
Far as I remember, all of them engage in some ridiculous wheel handcarving projects. Just because Google does it badly doesn't mean the others are more reasonable in catering to their NIH syndrome.
My knowledge can be, and probably is, very out of date given how little of Google's internal planning is publicly visible. But my impression was that Fuchsia is a general-purpose OS and was planned (like, very long-term planned) to eventually become the OS of choice for some Android and Chromebook devices. A project that lofty makes sense to have 400 engineers on.
Canonical, with all those people, is making a very mediocre desktop experience.
So it doesn't seem surprising that making something actually good would take a lot more.
The desktop experience is generally built by different people than the OS. The desktop experience is also a lot more complex than an OS (though both are complex in different ways).
Most of that was when the team was pretty tiny. It was fun starting from when the kernel was just beginning to run userspace code. I'm still very happy with how the syscalls turned out. If I did it again, I'd stick with a (small) monolithic kernel though -- makes a lot of things simpler.
vDSO doesn't provide a security boundary. vDSO basically provides a pure-userspace fastpath for syscalls, only making the real syscall if necessary. It's great for low-overhead read-only calls that cache well and that you're always allowed to do, like clock_gettime(2) -- but not much more. You can't implement all syscalls via the vDSO; if something is in the vDSO, the goal is to not make an actual syscall at all.
Fuchsia might use vDSO-style things more as a way to replace the glibc-style syscall stubs, abstracting away the actual syscall ABI? That doesn't remove the actual syscall.
> why don't linux use vDSO for more things?
vDSO is much more complex to manage than traditional syscalls, can't be used for anything except pure-read, always-allowed operations, etc.
As for optimizing syscalls, it seems things are moving more toward io_uring and ringbuffers of messages going in/out of the kernel, with very few syscalls made after setup.
The intent behind the vDSO style interface for syscalls in Fuchsia was primarily to avoid baking specific syscall mechanisms into the ABI, hopefully to allow future changes to the mechanism without breaking binary compatibility -- which was defined as ELF linkage against libzircon.so.
"Need" is a binary concept, that's not how project planning and prioritization work in the real world. There are a lot of things that are not "needed" that are beneficial, and a lot of things that are beneficial that you end up not having resources to implement. So then it is all about tradeoffs.
Software projects can derive massive economies of scale from a large install base, since there is no marginal cost. A larger install base lets you amortize the fixed cost over more users. The more users you have, the more useful-but-not-strictly-needed optimisations and features you can justify implementing.
That's what I've heard too: some high-up engineer is bored but Google wants to keep their talent on staff, so they hand them a shiny, exciting computer science playground with grand aspirations.
These numbers often get inflated because the people in charge count every part-time worker as a full team member (bigger headcount = bigger promotion!). But it's still a crazy number. At every level everyone is incentivized to bloat the headcount as much as possible.
I don't think part time work in SWE is common at all. Especially at Google.
The 400 number likely includes product owners, business analysts, designers, etc., rather than strictly SWEs (happy to be proven wrong).
For a comparison, initial Windows NT 3.1 had 340 devs/testers, NT 3.5 had 530 devs/testers. No idea why product/program managers aren't listed. There must have been some.
That's good to know; I'm honestly surprised to read that NT had that many. I'd figured early versions wouldn't require that many people. Now knowing the Fuchsia numbers, I don't know if that's a signal of how productive the team is or whether their resources are dwindling.
I've been assuming for several years now that they will never end up killing either Android or ChromeOS in favor of Fuchsia. The reasons aren't technical but business related.
The business reason for this is that this would alienate the OEM ecosystem. The likes of Samsung want less Google influence, not more and they're really invested in Android. Without Samsung on board, Google's choice is letting them take over Android or keep control on their side. It's that simple. There are also various Chinese manufacturers that already cut loose from Google for legal reasons that are running Android forks. Amazon has its own fork. So, Google has their work cut out forcing that ecosystem in the direction of Fuchsia.
With ChromeOS, they have a similar issue. Lots of OEMs and it's actually a relatively successful platform. IMHO they should push to merge the ChromeOS and Android ecosystems more. Fuchsia does not solve a problem any OEM has.
It's Google's big not invented here syndrome. They started doing an OS because they had some technical concerns with Linux. Instead of working with the Linux community to address those concerns, they've been building their own OS for years now. They'll ultimately probably do the easy and obvious thing which is to write off the whole effort. At best a lot of the components (minus the kernel) might find their way into Android/ChromeOS and their UI frameworks (jetpack compose and flutter).
Why indeed. Big companies being rational and efficient is not that common. My guess is that there are some ongoing internal differences of opinion on this that are causing Google to slowly strangle but not quite kill this effort. Clearly the camp that was going to steamroll Fuchsia through as the Android replacement is no longer in charge. At least, I see no signs of that happening any time soon.
It's a weird dynamic that big companies have, where different camps are powerful enough to frustrate each other's roadmaps but not powerful enough to outright kill each other. Stronger leadership would be more decisive and act a lot sooner too. The only conclusion is that Google has no strong leadership, and there are probably more out-of-control teams like this.
I saw the same dynamic at Nokia fifteen years ago, where different parts of the org tree were fighting over who got to deliver X, where X was some important feature. The most extreme version of this that I saw was when our group in Nokia Research was working on a feature related to coupons and vouchers and started talking to different business units to see if there was any interest in it. We started making an inventory of different teams working on similar things. We stopped counting at seven. Most of these teams did not know of each other, or if they did, they kept working anyway, because the other teams were in the wrong part of the org tree and it was just easier to work around each other.
I bet there's a lot of that happening in Google right now. At least they look similarly big and bloated to me.
These are technically different projects. The kernel is named Zircon and the operating system is Fushia. The user interface and applications are written in Flutter which uses Dart.
I've never been able to make sense of why people are having such a hard time spelling Fuchsia correctly[0], but this still made me chuckle. :)
[0]: Just think of a popular four-letter word starting with "fuc". Replace the last letter with an "h". (The "ch" in German "Fuchsia" is pronounced exactly how that four-letter word ends.) Then append "-sia".
It clicked for me when I learned that the plant the colour is named after was itself named after a Dr. Fuchs. So now I remember the name, and tack "-ia" on the end. In German, Fuchs sounds something like "fooks", so it's "fooks-ee-ah".
Another project designed to get L7-9s promoted at Google. Ambition and the ability to wave hands are all you need. I like cool stuff, like capabilities, as much as the next person, but nobody needs a new OS. Linux will evolve sufficiently anyway. A new OS isn’t compelling for users.
> At that time, Fuchsia was never originally about building a new kernel. It was actually about an observation I made: that the Android team had their own Linux kernel team, and the Chrome OS team had their own Linux kernel team, and there was a desktop version of Linux at Google [Goobuntu and later gLinux], and there was a Linux kernel team in the data centers. They were all separate, and that seems crazy and inefficient.
> The architectures were all different, which meant that outside developers couldn’t actually do work and attack all the platforms Google was offering. You had to do bespoke work.
> They were all separate, and that seems crazy and inefficient.
From an organizational-design perspective: That's the whole point.
Decentralization means they can all move more quickly — each team can cater to the needs of its local customers. In moving quickly and solving local problems, they end up repeating some work. That's the price of velocity. Good local leaders should counter-balance this by creating informal forums for idea exchange and cross-pollination.
Sadly, overly-ambitious office-politicians see this happening, flag it as "inefficient," and roll the headcount up into a centralized team. Centralization puts an end to all the local ideas and excitement, instead routing it through a centralized committee that "plans" and "decides" which customers are worthy of how much innovation.
In a way, this is the pendulum of life at a big company. But in another way... this pursuit of "efficiency" sure seems like it explains a lot of why Google moves so slowly.
> From an organizational-design perspective: That's the whole point.
There's more to it than that. Unlike most other projects, these kernel teams were working on the Linux kernel, which is an open-source project not under the control of any one corporation. These teams were simply maintaining their own kernel forks (which are undoubtedly optimized for their use-cases) and developing features/patches which they might try to submit upstream. And a lot of the work was undoubtedly device drivers for the particular hardware they used. Since they were working on very different projects (datacenters, Android, ChromeOS), with very different hardware, there was probably almost no overlap going on here.
Yep! Bulkheads to avoid your tasks getting mixed in with everyone else's and subjected to a global stack ranking of priorities, where they might never actually get done.
> The architectures were all different, which meant that outside developers couldn’t actually do work and attack all the platforms Google was offering. You had to do bespoke work.
So they decided to build a bespoke OS that would take a decade to build and would require bespoke work from those devs anyway?
Yes it is. Not directly, of course; nobody cares about the OS itself. But in the case of Android distributions, if they could swap the Linux layer out for Fuchsia while keeping most of the Android userland in terms of UX, and deliver gains such as increased battery life, then suddenly people would be interested. It would be similar to Apple's development and adoption of the ARM architecture.
Agree on the first point, L7+'s are often looking for technical ways to transform the business.
Fuchsia and Dart are still big technologies, though, touching many other projects, so backtracking on speakers doesn't quite mean Fuchsia is a failure or unnecessary.
Based on the large discrepancy with levels.fyi, I'm betting that person either put down their total comp as salary, or they are a very specialized researcher in an extremely hot area (probably AI).
I'm a user (and in particular a Linux user) and a new OS which has Chromium ported to it is compelling to me because I expect that Linux will never evolve to be secure enough.
Linux security (another desktop user here) is... fine? You can get pretty great security on it, if you want, by running Wayland instead of X11 and running all your apps via Flatpak or Snap (assuming you only choose apps that aren't intentionally allowed to break out of their sandbox).
And none of these things require Linux (the kernel) to change all that much. Hell, Android has decent security by running all apps under different user IDs and restricting the parts of the filesystem they can access.
Obviously your default Linux desktop install doesn't work like this, but some motivated individuals could do it, if they wanted. Writing an entire new OS is hardly necessary. Google just wanted something it could control, when it comes to Fuchsia. And they're the last company that I would trust to build an OS that doesn't end up being user-hostile.
Flatpak and Snap are an inherently bad model; they break stuff and fail to secure other stuff, and their design means this will always be a game of whack-a-mole. And even if you were to Flatpak/Snap-ify all your apps (really? Would you run e.g. Bash that way?), you'd still be left having to run a bloated kernel to support a bunch of cruft that you weren't using. Starting from scratch with a clean interface seems like a better approach.
(Qubes, OTOH, is interesting; I see it as one possible path to the future)
> you'd still be left having to run a bloated kernel to support a bunch of cruft that you weren't using
Would you? Linux is, in fact, quite modular, and perfectly happy to not load things until asked to (often as a result of hardware being discovered or plugged in).
> Sandboxing is a much better approach to security than the Unix permissions model, which is nearly obsolete.
That's an incredibly low bar. They're a bad model because the Unix API surface is huge and underspecified; there are a zillion different ways apps could potentially interact with each other. Some of them are giant security holes. Some of them are vital to some obscure corner of app functionality. Most of them are both.
I'm not talking about syscalls, but that in modern times everything of value is in your account, and computers are rarely shared.
So when you download a game your threat model isn't that it might want to mess with the kernel, but that it's going to steal all your data from your browser cache. Dealing with that requires some sort of sandboxing.
> I'm not talking about syscalls, but that in modern times everything of value is in your account, and computers are rarely shared.
Huh? No-one is advocating using user accounts for security.
> So when you download a game your threat model isn't that it might want to mess with the kernel, but that it's going to steal all your data from your browser cache.
Why do you think this contradicts anything I said? And what mechanism do you think it's going to use to do that? Maybe not syscalls in the narrow sense, but certainly via one of the "zillion different ways apps could potentially interact with each other".
> Dealing with that requires some sort of sandboxing.
Is this the "something must be done" fallacy? "We need some kind of sandbox, snap/flatpak are some kind of sandbox, therefore we need snap/flatpak".
There is more to them than sandboxes, though. They also package everything: if there is a security problem in a common library, you are vulnerable in each sandbox until it is updated, so updates become a much larger hole.
Sure, the problem cannot escape the sandbox, but in the end I'm more concerned with my data inside the sandbox.
It's not necessary to include every library inside the package. Flatpak packages use runtime packages like GNOME, Freedesktop, and KDE. Those runtimes already include libraries like OpenSSL, libpng, libjpeg, GTK, and Qt, and are updated very frequently to fix bugs in those libraries.
I think the world is destined to move to a microkernel OS eventually. I am just mad that there is no national-level funding of seL4, which is formally verified and available today.
> You can have secure monolith and insecure microkernel.
Superficially, that claim sounds reasonable, but I'd have a hard time substantiating it. There is exactly one secure microkernel ready for production use today, and zero secure monolithic kernels. There's not even a "here's how one could secure a monolithic kernel in principle" whitepaper.
OpenBSD mostly operates on a hobbyist's idea of security, where they just make up attacks to defend against rather than being systematic or actually keeping up with the outside world. (https://isopenbsdsecu.re)
Although professional security people are also rather bad at this and like to eg stack multiple cool-looking mitigations even if they break each other. (https://siguza.github.io/PAN/)
Okay, so what threat model successfully attacks an OpenBSD box and fails on... I assume the one "secure" microkernel is seL4? Since OpenBSD only defends against made up attacks, this should of course be very easy to demonstrate.
Microkernels work great for virtualization guests, yet they aren't taking off in this kind of low friction environment. I don't think there's a killer app for them that justifies leaving behind Linux.
Android (well, the Pixel anyway) does a great job of isolating processes by using SELinux and seccomp. You can have processes running as root that are still completely isolated away from most of the system.
Even if what you say makes sense, there are maybe around three people in the world who are interested in a new OS because of this reason, which is not really the "users" that people talk about.
Huh? Linux solved a real problem, at the time there really wasn't an alternative (386BSD was under a lawsuit cloud, GNU didn't run, Minix didn't perform well enough for daily driver use and was opposed to the kind of changes that would be needed to fix that).
I've mentioned this before, but this is actually a godsend in GCP's product naming. Amazon's product names make no sense, and it's impossible to keep track of everything if you aren't working with it all the time. Conversely, a GCP product's name is literally just what the product is. Nothing more. Google Cloud DNS vs Route 53. Amazon RDS vs Google Cloud SQL. Cloud Pub/Sub vs SNS/SQS. Etc etc
♪ Rekognition, Fraud Detector's cool,
Wavelength, Blockchain, Well-Architected Tool.
Though half of 'em start with "Amazon", for reasons we can't guess..
These are the major services of AWS. ♫
> Route 53 is arguably one of the more obviously named AWS offerings.
Er, then please humor my ignorance: What is it named for? Because the name means nothing to me, and even some light searching just turns up an actual highway and I still don't see what it would have to do with DNS.
Edit: Okay, I do feel a bit foolish for forgetting that DNS is traditionally port 53, so now half of the name makes sense to me. The other half still seems like a more or less random choice to me.
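If it helps, the port-53 half of the name is easy to confirm from the services database most systems ship:

```python
# DNS (service name "domain") has been assigned port 53 for both UDP and TCP
# since the early well-known-ports registry; the OS services database has it.
import socket
print(socket.getservbyname("domain", "udp"))   # 53
print(socket.getservbyname("domain", "tcp"))   # 53
```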
Route 53 sounds like a service to forward DNS requests to the right DNS server (misapplying the L3 naming of "route" for what sounds like proxying). Routing DNS does not sound like an authoritative DNS server.
What next, a HTTP server named Route 80 (or Route 443)?
In that case, can I please have my 1st gen Nest Hub back on the pre-Fuchsia OS? Ever since the "upgrade" it's been more laggy and requires semi-regular power cycles, as if there's a memory leak somewhere.
When it launched, we shipped it with an HTML/TypeScript-based UI. It sold well, and got excellent reviews.
So of course a few moments later it was insisted that the whole thing had to be rewritten in Flutter/Dart. Because reasons.
But of course Flutter didn't exist for the platform. So that had to get written too.
Which also meant somehow writing things like screen reader, and other accessibility features, which are services provided normally by the OS for other platforms (Android, iOS) that Flutter ran on. And which we had just finished porting to Cast OS from Chrome OS.
And of course they insisted that because, I dunno, Flutter was native or something it would just be faster than that dodgy HTML stuff. Nevermind that thousands and thousands of engineering hours have gone into the Chromium graphics stack, and my coworkers had made it perform very well on the little old outdated cheap SoC in the thing...
And while this was all happening, simultaneously people were working off in Fuchsia land with seemingly unlimited headcount, hoisting the whole thing into Fuchsia. And claiming they'd be done any minute now. But then, of course, late for multiple years.
All along the Cast OS team was deprived of product roadmap or headcount, to maintain the existing thing... which we had shipped into customers homes successfully and gotten excellent reviews.
Anyways, I wasn't central to this or anything, most ephemeral. But ephemerally flabbergasted and frustrated.
Happy to hear you enjoyed the earlier experience :-)
> Importantly, the Nest Hub series of smart displays are entirely unaffected by this change. Those devices will continue to run Fuchsia under the hood and will continue to receive updates as normal.
Is it just me, or does Fuchsia sound more and more like vaporware the more I read about it? Did Google never consider that Fuchsia might become to Android what Plan 9 is to Unix?
Here's the story behind Fuchsia told to me by a Google employee:
One day a bunch of senior engineers wanted to quit. Google asked them why and they answered "we're bored." So google asked them "what would you rather work on?" and they replied "write an OS" and so google said "here's more money. do whatever you want."
Fuchsia isn't a serious project. It was busy work to keep top employees from leaving for the competition. I was also told that no one inside Google seems to know or care about Fuchsia. It only ever was used in the smart speaker. And reading through the API and syscall interface, it's clear no one is serious about writing a real OS.
Well just because they were good at one thing doesn't mean they are good at another. I'm sure google was initially confident in their assertion but the current state shows that it is a dead end.