
There’s a very serious point here. Cryptographers are and always have been deeply concerned with performance. Some of the most skilled low level optimization people I know work on cryptography. It is only relatively recently that computer hardware has gotten powerful enough that cryptography isn’t a serious bottleneck for production systems. In all the recent new crypto standards, a big factor in whittling down the finalists was performance.

If someone is telling you that we need a new, faster standard for cryptography, and the selling point is “faster,” you’d better wonder why that wasn’t the standard already in use. If there is not some novel, brand new algorithm that is being employed, the answer is because it is insecure. Or at least that it doesn’t meet the level of security for general use, which to a cryptographer is approximately the same thing.



Performance is an evolving target. Meta reported within the last year that they spend ~0.05% (1 out of every 2000) of CPU cycles on X25519 key exchange, which is quite significant. If that can be brought down, that's worthwhile. And ongoing research and deployment of PQC will make key exchange even more expensive.

> If there is not some novel, brand new algorithm that is being employed, the answer is because it is insecure.

Lol that is just not true at all. A major point of discussion when NIST announced Keccak as the SHA-3 winner back in ~2012(?) was that BLAKE1 at the time offered significantly better software performance, which was considered an important practical reality; it was faster than SHA-2 at a higher (if insignificantly so) security margin, and NIST's own report admitted as much. The BLAKE1 family is still considered secure today, and its internal HAIFA design is very close to existing known designs like Merkle–Damgård; it isn't some radically new thing.

So why did they pick Keccak? Because they figured that SHA-2 was plenty good and already deployed widely, so "SHA-2 but a little faster" was not as compelling as a standard that complemented it in hardware; they also liked Keccak's unique sponge design, which was novel at the time and allowed AEAD, domain separation, etc. They admit ultimately any finalist including BLAKE would have been a good pick. You can go read all of this yourself. The Keccak team even has new work on more performant sponge-inspired designs, such as their work on Farfalle and deck functions.
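Python's hashlib ships all three families (SHA-2, SHA-3/Keccak, and BLAKE2, the successor to BLAKE1), so the relative software speeds are easy to check yourself. A rough sketch; the absolute numbers will vary by CPU and whether it has SHA extensions:

```python
import hashlib
import timeit

data = b"\x00" * (1 << 20)  # 1 MiB of input

# Time 20 full hashes of the buffer for each algorithm.
for name in ("sha256", "sha3_256", "blake2b"):
    algo = getattr(hashlib, name)
    secs = timeit.timeit(lambda: algo(data).digest(), number=20)
    print(f"{name:9s} {20 / secs:7.1f} MiB/s")
```

On most machines without dedicated hash silicon, blake2b comes out well ahead of sha3_256 in this kind of test, consistent with the point above.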

The reality is that some standards are chosen for a bunch of reasons and performance is only one of them, though very important. But there's plenty of room for non-standard things, too.


> Meta reported they spend ~0.05% (1 out of every 2000 CPU cycles) on X25519 key exchange within the last year, which is quite significant.

That is not even remotely significant. Facebook spends 25% (1 out of every 4) of my CPU cycles on tracking. Pretty much anything else they optimize (are they still using Python and PHP?) will have a bigger impact.


Or just reduce the quality of served photos and videos by 0.05%, probably nobody would notice


> Facebook spends 25% (1 out of every 4) of my CPU cycles on tracking.

That's their core business.


This is significant for Facebook, not for you. I solved this problem for myself by not using Facebook.


> A major point of discussion when NIST announced the SHA3 finalist being Keccak back in ~2012(?) was that BLAKE1 at the time offered significantly better software performance

IIRC, Keccak had a smaller chip area than Blake. Hardware performance is more important than software performance if the algorithm is likely to be implemented in hardware, which is a good assumption for a NIST standard. Of course, SHA3 hasn't taken off yet but that's more to do with how good SHA2 is.

> BLAKE1 family is still considered secure today, its internal HAIFA design is very close to existing known designs like Merkle–Damgård, it isn't some radically new thing.

Given that the purpose of the competition was to replace SHA2 if/when it is weakened, choosing a similar construction would not have been the right choice.


> Hardware performance is more important than software performance if the algorithm is likely to be implemented in hardware

I don't think that's necessarily a given at all, but I grant that's mostly a matter of opinion I guess.

> Given that the purpose of the competition was to replace SHA2 if/when it is weakened

I think the dirty secret hiding there is that very few people actually expect SHA2 will ever be broken. Assuming it can be, and picking a different secure construction, is of course a good idea. But even the designers of BLAKE have admitted as much, and so did NIST.


An additional argument for Keccak was that even if its performance in software implementations was mediocre, it allows very fast and cheap hardware implementations, so from this POV it was definitely better than the alternatives.


> computer hardware has gotten powerful enough that cryptography isn’t a serious bottleneck for production systems ... someone is telling you that we need a new, faster standard for cryptography, and the selling point is “faster"

Google needed faster than standard AES cryptography for File-based Encryption on Android Go (low cost) devices: https://tosc.iacr.org/index.php/ToSC/article/view/7360 / https://security.googleblog.com/2019/02/introducing-adiantum...


This doesn't appear to use LWC constructions though, mostly ChaCha20.


> If the selling point is “faster,” you’d better wonder why that wasn’t the standard already in use.

Because the field of cryptography advances? You could have made the same argument about Salsa/ChaCha but those are great ciphers. And now we have Ascon which has the same security level but I guess is even faster.


If these were faster than AES and as strong as AES, they would be replacing AES, not only being used for "lightweight devices unable to use AES"


They're faster than AES on their target platforms. It really feels like people are just trying to run with this out-of-context Matthew Green quote as if it was an axiom.


Rijndael (now AES) wasn't even the strongest finalist in the 2001 AES evaluation. It partially won by dint of being faster on contemporary x86 processors than Serpent or Twofish. Nowadays, it's faster on x86-64 processors because there's dedicated silicon for running it. Modern small platforms don't have this silicon and have different performance characteristics to consider. Also, without that dedicated silicon, implementations tend to be vulnerable to side-channel attacks that were unknown at the time.


> If these were faster than AES and as strong as AES […]

Not everything needs to be as strong as AES, just "strong enough" for the purpose.

Heck, the IETF has published TLS cipher suites with zero encryption, "TLS 1.3 Authentication and Integrity-Only Cipher Suites":

* https://datatracker.ietf.org/doc/html/rfc9150

Lightweight cryptography could be a step between the above zero and the 'heavyweight' ciphers like AES.


NULL ciphers in TLS are intended to enable downgrade attacks. Nothing else.

Same thing with weaker ciphers. They are a target to downgrade to, if an attacker wishes to break into your connection.


> NULL ciphers in TLS are intended to enable downgrade attacks. Nothing else.

Intended... Do any experts think that? Can you cite a couple? Or direct evidence of course.

Unless I'm missing a joke.


Thought this was common knowledge. When TLS 1.3 was standardized, it explicitly left out all NULL and weak (such as RC4) ciphers. It also left out the weaker RSA/static-DH key exchange methods, so that easy decryption of recorded communication became impossible. The enterprises who would like to snoop on their employees and the secret services who would like to snoop on everyone reacted negatively to that and tried to reintroduce their usual backdoors, such as NULL ciphers:

https://www.nist.gov/news-events/news/2024/01/new-nccoe-guid... with associated HN discussion https://news.ycombinator.com/item?id=39849754

https://www.rfc-editor.org/rfc/rfc9150.html was the one reintroducing NULL ciphers into TLS1.3. RFC9150 is written by Cisco and ODVA who previously made a fortune with TLS interception/decryption/MitM gear, selling to enterprises as well as (most probably, Cisco has been a long-time bedmate of the US gov) spying governments. The RFC weakly claims "IoT" as the intended audience due to cipher overhead, however, that is extremely hard to believe. They still do SHA256 for integrity, they still do all the very complicated and expensive TLS dance, but then skip encryption and break half the protocol on the way (since stuff like TLS 1.3 0-RTT needs confidentiality). So why do all the expensive TLS dance at all when you can just slap a cheaper HMAC on each message and be done? The only sensible reason is that you want to have something in TLS to downgrade to.
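For what it's worth, the "cheaper HMAC on each message" alternative really is only a few lines with a stdlib, assuming some out-of-band pre-shared key. A sketch in Python (the key and messages here are made up for illustration):

```python
import hashlib
import hmac

key = b"pre-shared 32-byte key goes here"  # assumed distributed out of band

def tag(message: bytes) -> bytes:
    """Authenticate (but do not encrypt) a message: integrity without confidentiality."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, t: bytes) -> bool:
    # constant-time comparison to avoid a timing side channel
    return hmac.compare_digest(tag(message), t)

msg = b"valve_7: open"
t = tag(msg)
print(verify(msg, t))               # True
print(verify(b"valve_7: shut", t))  # False: tampering is detected
```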


How exactly do NULL ciphers accomplish enterprise monitoring goals? The point of the TLS 1.3 handshake improvements was to eliminate simple escrowed key passive monitoring. You could have the old PKZip cipher defined as a TLS 1.3 ciphersuite; that doesn't mean a middlebox can get anybody to use it. Can you explain how this would get any enterprise any access it doesn't already have?


> How exactly do NULL ciphers accomplish enterprise monitoring goals?

I don't understand how this isn't obvious. Unencrypted means it is monitorable.


The presence of an insecure ciphersuite in the TLS standard does not in fact imply the ability of a middlebox to force that ciphersuite; that's kind of the whole point of the TLS protocol. So, I ask again.


Your first set of links seems to be about central key logging for monitoring connection contents? If there's stuff about null encryption in there I missed it. And even if there is, it all sounds like explicit device configuration, not something you can trigger with a downgrade attack.


Yes, my first link is about that. It illustrates and explains the push to weaken TLS1.3 that has later been accomplished by the re-inclusion of NULL ciphers.

And all the earlier weaker ciphers were explicit device configuration as well. You could configure your webserver or client not to use them. But the problem is that there are easy accidental misconfigurations like "cipher-suite: ALL", well-intended misconfigurations like "we want to claim IoT support in marketing, so we need to enable IoT-'ciphers' by default!" and the sneaky underhanded versions of the aforementioned accidents. Proper design would actually just not create a product that can be mishandled, and early TLS1.3 had that property (at least with regards to cipher selection). Now it's back to "hope your config is sane" and "hope your vendor didn't screw up". Which is exactly what malicious people need to hide their intent and get in their decryption backdoors.


The first link is weakening in a way that is as far from a downgrade attack as you can possibly get. And on top of that TLS 1.3 has pretty good downgrade prevention as far as I know.

> well-intended misconfigurations like "we wan't to claim IoT support in marketing, so we need to enable IoT-'ciphers' by default!" and the sneaky underhanded versions of the aforementioned accidents

Maybe... This still feels like a thing that's only going to show up on local networks and you don't need attacks for local monitoring. Removing encryption across the Internet requires very special circumstances and also lets too many people in.


Most modern processors have hardware support for AES; that's why it's fast. ChaCha is significantly faster when run in software on CPUs without that hardware support.


Security standards can move extremely slowly when the security of the incumbent algorithm hasn’t been sufficiently compromised, despite better (faster, smaller) alternatives.

Tech Politics comes into it.


I mean, they are faster and as strong and are gradually replacing it?


Sponges generally? Maybe? LWC constructions not so much?


I thought the “they” being referenced were chacha/salsa.


> If the selling point is “faster,” you’d better wonder why that wasn’t the standard already in use.

Because “fast enough” is fast enough, unless it isn’t.

My desktop CPU has AES in hardware, so it’s fast enough to just run AES.

My phone’s ARM CPU doesn’t have AES in hardware, so it’s not fast enough. ChaCha20 is fast enough, though, and especially with the SIMD support on most ARM processors.

All this paper is saying is that ChaCha20 is not fast enough for some devices, and so folks had to put in intellectual effort to make a new thing that is.

But even further: everyone’s definition for “fast enough” is different. Cycles per byte matters more if you encrypt a lot of bytes.
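A quick way to see that per-byte cost matters more as messages grow is to time the same primitive over different message sizes. A sketch using Python's stdlib BLAKE2 as a stand-in for any software hash or cipher; small messages pay the fixed per-call overhead, large ones expose the true cycles-per-byte cost:

```python
import hashlib
import timeit

# Hash roughly 4 MiB total at each message size, so the runs are comparable.
for size in (64, 1 << 10, 1 << 20):
    data = b"\x00" * size
    calls = (1 << 22) // size
    secs = timeit.timeit(lambda: hashlib.blake2b(data).digest(), number=calls)
    print(f"{size:>8} B msgs: {secs * 1e9 / (calls * size):6.2f} ns/byte")
```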


Only extremely old ARM CPUs (i.e. 32-bit CPUs from more than a decade ago) lack AES hardware. All 64-bit ARM CPUs have it, as well as hardware for at least SHA-1 and SHA-256. The more recent ARM CPUs have HW support for more cryptographic algorithms than the majority of desktop CPUs.

"Lightweight" cryptography is not intended for something as powerful as a smartphone, but only for microcontrollers that are embedded in small appliances, e.g. sensors that transmit wirelessly the acquired data.


> Only extremely old ARM CPUs (i.e. 32-bit CPUs from more than a decade ago) do not have AES hardware.

I remember when Sun announced the UltraSPARC T2 in 2007 which had on-die hardware for (3)DES, AES, SHA, RSA, etc:

* https://en.wikipedia.org/wiki/UltraSPARC_T2

(It also had two 10 GigE modules right on the chip.)


I think I’m just getting old. I have a collection of KaiOS phones with Cortex-A7s but now they’re over a decade old :(

Newer ones have the Qualcomm 215, which, yes, is 64-bit 4x A53

From that perspective, LWC is only useful on old (existing?) microcontrollers: the Cortex-A320 that came out this year is 64-bit.

Hardware cycles take time, though, and it will be some time before everything is 64-bit!


Not all 64-bit ARM CPUs have AES support, since it's part of the optional Crypto extension. The BCM2837 in the Raspberry Pi 3 is an ARMv8 CPU that lacks it.


You are right that the cryptographic instructions are optional and they have always been so in ARM CPUs, in order to enable the making of CPUs that can be embedded in products that will be exported even to destinations where cryptography is forbidden.

However the poster above was talking about smartphone Arm-based CPUs. I doubt that there has ever existed a 64-bit ARM-based CPU for smartphones that did not implement the cryptography extension. Even the CPUs having only Cortex-A53 cores that I am aware of, made by some Chinese companies for extremely cheap mobile phones, had this extension.
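Whether a given core actually implements the optional crypto extension is visible at runtime. A best-effort sketch for Linux (the /proc/cpuinfo field names are an assumption that holds for common x86 and ARM kernels, not a portable API):

```python
def cpu_has_aes() -> bool:
    """Best-effort check for AES instructions via /proc/cpuinfo (Linux only)."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                # x86 kernels list features under "flags", ARM under "Features"
                if line.startswith(("flags", "Features")):
                    return "aes" in line.split()
    except OSError:
        pass  # not Linux, or /proc unavailable: assume no hardware AES
    return False

print(cpu_has_aes())
```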


"If there is not some novel, brand new algorithm that is being employed, the answer is because it is insecure."

No, this is not at all true.


> It is only relatively recently that computer hardware has gotten powerful enough that cryptography isn’t a serious bottleneck for production systems.

Or we've had enough "spare" transistors and die space to devote some to crypto, hashing, and checksumming instructions. I remember the splash Sun made when they announced on-die crypto hardware in 2007 (as well as on-die 10 GigE):

* https://en.wikipedia.org/wiki/UltraSPARC_T2

* PDF: https://www.oracle.com/technetwork/server-storage/solaris/do...


This thread fails to mention that a cipher has to be somewhat hard to compute, or someone with a lot of resources can just brute force it. Therefore you also want the implementation of a given cipher to be as efficient as possible, so that no future improvement lowers the effective security of your cipher.
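Back-of-the-envelope arithmetic shows that brute force is dominated by key length rather than by how fast the cipher runs; halving the cipher's cost only shaves one bit off the effective key size. A sketch assuming a very generous attacker trying 2^50 keys per second:

```python
# Expected brute-force time = half the keyspace / guess rate.
rate = 2 ** 50                   # keys tried per second (very generous attacker)
year = 365.25 * 24 * 3600        # seconds per year

for bits in (64, 128, 256):
    seconds = (2 ** (bits - 1)) / rate
    print(f"{bits:3d}-bit key: {seconds / year:.3e} years expected")
```

A 64-bit key falls in hours at that rate, while a 128-bit key takes on the order of 10^15 years, which is why even a much faster cipher at the same key length loses essentially no brute-force margin.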


"Lightweight" cryptography is not intended for smartphones, personal computers and similarly powerful devices.

It is intended only for microcontrollers embedded in various systems, e.g. the microcontrollers from a car or from a robot that automate various low-level functions (not the general system control), or from various sensors or appliances.

It is expected that the data exchanged by such microcontrollers is valuable only if it can be deciphered in real time.

If an attacker would be able to decipher the recorded encrypted data by brute force after a month, or even after a week, it is expected that the data will be useless. Otherwise, standard cryptography must be used.


... without, however, creating the impression that Ascon or Xoodyak could be broken by brute force in a week, a month, or a century.



