>"They have planned these systems for every milliwatt of power that is used to run the satellite, so there is not the power budget on existing systems to run encryption or authentication. It's not practical."
I doubt this. If they were concerned with power usage at the milliwatt level, they would most likely be using custom real-time kernels. Encryption is ridiculously cheap in terms of power usage and performance on any modern processor because of intrinsics. I doubt they'd see even a blip in power usage with ChaCha20 unless all their code is hand-optimised assembly.
I was under the impression that many of these systems do run RTOS, and that they don't use 'modern' processors, they use rad-hardened processors. Is this not the case?
Satellites larger than cubesats usually use rad-hardened processors; in Europe, for example, the LEON processors are quite common. These are basically rad-hardened SPARC v8 processors. They are also clocked quite low since they are often implemented on an FPGA.
So: a rather old RISC architecture, lacking any cryptography extensions/intrinsics, running at a rather low clock frequency. That means it is not easy to fit authentication and cryptography in software (at least at the necessary data rates).
To get authentication/encryption in these systems you need a separate crypto unit, or you implement AES-GCM/AES-HMAC in the FPGA (if you have room).
When selecting components there's a range of "quality grades" to choose from, from commercial/automotive up to space-grade and radiation-hardened parts. Often a lower-grade component can work with some precautions, for example a reset circuit in case of latch-up. Many cubesats use non-space-grade components because of the high price of space-grade parts, and they are only expected to work for a limited time, e.g., a few years.
Exactly, there is a big difference between multi-million-dollar communications/observation satellites and cubesats, which can now be built and flown for under $100k. With that sort of budget there is no reason (or money) to use rad-hard chips, when the thing will probably fail or re-enter the atmosphere before radiation damage matters.
The satellite I'm working on will be LEO, but fairly large, to do earth observation (SAR). The subsystem I'm working on uses a radiation hardened SPARC chip from Gaisler (Leon4). It's a fairly popular choice nowadays. We do use an RTOS, although I've largely been writing bare-metal boot loader code and drivers so it's not my area of expertise. Pretty much all of our buses/interconnects/CPUs/FPGAs are space-grade, with ECC/EDAC memory and nonvolatile storage.
The Gaisler drivers are hideous if you're doing anything more serious than basic interface testing. Good that you are rolling your own. Consider the -mflat compiler option which can improve timing jitter and reduce memory pressure by avoiding spilling of unused registers :D
Can you elaborate a bit regarding the -mflat compiler option? Why does it improve timing jitter etc? I am not sure I understand the description from the GCC docs. Is it disabling the register windows? Isn't that one of the big selling points of the SPARC architecture?
There is a finite number of register windows, usually 8 but only 7 can be used because the 8th serves as "sentinel" to detect over- and underflow.
Once register windows are full (a function call wants to activate the next register window but there is no unused window left) window overflow occurs and a trap handler is activated.
The trap handler "unwinds" the register windows and stores all the contents in memory (stack). Now the next function can continue with an empty set of register windows. Once you return from the function, the contents of the windows have to be restored (window underflow trap).
Problem is that the trap handlers can't know which of the registers in each window were actually in use, so all of them have to be saved/restored. The overhead gets worse as you write smaller functions that use fewer registers and nest deeper.
So there are two issues: 1. You can't really know at which point in your program an underflow/overflow occurs, because it changes depending on the exact path of execution through the program. 2. Unnecessary memory write/read operations. While ca. 120 x 32-bit words is not that much, with an 8-bit-wide SRAM, some wait states and EDAC this might be noticeable. (Consider that the LEON processors have a data cache for read access, but for writing only a "store buffer" that queues a few memory writes.)
Using -mflat every register is saved by the caller/callee (as ABI demands) on the stack. This means that the memory accesses are predictable and spread out over each function call.
So, my personal conclusion is that register windows are an intriguing idea on the surface but become useless when you aren't writing 80s spaghetti code. There were many similar ideas at that time, e.g., Am29000.
We considered using -mflat, but we're not that performance constrained (and prefer the slightly smaller binary size with register windows enabled). I may do some profiling of the underflow/overflow traps though, since you've now got me second-guessing myself.
Registers asr22/23 contain a cycle counter that you can use to time stuff. If it's not present, there's a register in the DSU that counts cycles but that requires an access via the AHB bus. You can measure a lot of things with those cycle counters, like context switch and interrupt handling times, memcpy vs naive for-loop, linear vs. binary search on small arrays...
I'd expect a few microseconds per overflow at most but it depends a lot on the characteristics of the system. Of course, if the application is not sensitive to a few microseconds here and a few microseconds there that optimization might not be worth it.
Yes. Though sometimes LEO satellites use standard hardware. Otherwise, rad-hard electronics are quite far behind the state of the art: a ~100 MHz 32-bit machine with a few dozen megabytes of RAM is typical of recent low-end rad-hard space hardware. Still, even that is quite capable of modern public key encryption.
You don't normally use public key encryption in satellites; it is some sort of symmetric block cipher. CCSDS (a set of standards for space applications that is quite popular in the industry) recommends AES-HMAC for authentication and AES-GCM for both authentication and encryption.
My experience with a LEON processor ~100 MHz is that it is hard to get much throughput out of an AES implementation.
Maybe someone else can chime in because I wasn't aware that ordinary satellites required radiation hardened hardware. At most I thought they'd just put it all into a metal box.
It depends. Cubesats and other small LEO satellites commonly use consumer grade hardware. They're going to deorbit and burn up before the radiation becomes a major problem, and they're often deployed in fleets so it's fine to lose a portion anyway.
Certainly something that applies more to cubesats and similar projects. "Normal" satellites typically carry at least an authentication unit that performs some kind of HMAC checking. You can find some of that stuff here: https://public.ccsds.org/Pubs/352x0b2.pdf