If you don't know precisely why you want a library like this (and sometimes you do), you want libsodium instead.
How does Libsodium compare with Crypto++ [1] now? Wei Dai [2] is a highly reputable engineer.
[1] https://github.com/weidai11/cryptopp
[2] https://en.wikipedia.org/wiki/Wei_Dai
I’m not casting shade on anyone, but “highly reputable engineer” is not how I would describe Wei Dai. “Early thinker in this field, respected for his opinions” might be more accurate.
Especially if you are directly comparing against libsodium and Daniel Bernstein, who is a widely respected engineer whose work is widely used and heavily reviewed.
Daniel Bernstein is not the creator of libsodium. Libsodium is based on his work on <https://nacl.cr.yp.to/> which is not the same.
Because extending trust usually works retrospectively?
Or
which battle-tested applications exist today using Crypto++ that illustrate it's a better choice than what has so far held up under libsodium (which is a lot)?
I personally don’t like that every single cryptography scheme is included in a single library. This creates a false sense of power, but breaks down once more complicated things need to be done, such as zero-knowledge proofs or homomorphic computation, where it can become awkward at best. In a sensible language one makes separate modules for hashing, cryptographic groups, pseudorandom number generation, signatures, various zero-knowledge proofs, and TLS, and uses a package manager to install what one needs.
It's good for most use cases for it to be a black box. TLS APIs that were designed in the 90s still work with the newest protocols, ciphers, and bug fixes. Consumers of the library can blindly and transparently adopt 30 years of research into the topic.
TLS APIs with moving cryptography targets have proven quite useful. I'm only sad that more low-level cryptography never got popular. In a perfect world, you tell the OS what key/trust store to use, what domain to connect to, and you just get a plaintext socket that'll do all the cryptography for you, regardless of whether you're using TLS 1.0 or TLS 1.3.
I know the layered network protocol design is flawed, but I really like the elegance of the old design where TLS/IPSec/DTLS/WireGuard is just an option you pass to connect() rather than a library you need to pull in and maintain. Windows has something like that and it feels like somewhat of a missed opportunity for the internet.
connect(2) and these other kernel mode interfaces are definitely the wrong layer. Should be done in user mode. You also want to be able to replace the transport layer with whatever you want.
I think an opaque layer that lets you poke at internals when you decide you know what you're doing is the way to go. E.g., about 10 years ago I implemented certificate pinning atop a 1990s Microsoft API: you can use the API the "dumb" way and not touch it, and then optionally you can easily switch to implementing features the API vendor never envisioned.
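The pinning check itself is simple once the layer exposes the peer certificate. A minimal sketch (the surrounding handshake API, Microsoft's or anyone else's, is omitted; the pin value here is a placeholder, not real certificate data):

```python
import hashlib

# A pin is a SHA-256 fingerprint of the expected DER-encoded certificate,
# obtained out of band. Placeholder bytes stand in for a real DER blob.
PINNED = hashlib.sha256(b"example der bytes").hexdigest()

def pin_ok(der_cert: bytes, pinned_hex: str) -> bool:
    """Compare the peer certificate's fingerprint against the pinned value."""
    return hashlib.sha256(der_cert).hexdigest() == pinned_hex

print(pin_ok(b"example der bytes", PINNED))  # matching cert -> True
print(pin_ok(b"tampered", PINNED))           # anything else -> False
```

The point is that this check bolts onto whatever the opaque layer already does; the layer keeps handling the protocol details while you add the one policy it never envisioned.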
tbh i'm just happy to see "crypto" and have it mean cryptography.
sic transit gloria mundi or something. :)
Why? It's better if it's in one place and the overall quality is maintained there, than having different libraries with different authors, quality levels and guidelines.
Small libraries are easier to get into and contribute to. Also, let’s say one develops some zero-knowledge crypto system with Botan and then later finds out that its elliptic curve implementation is not that performant. Improving the performance of elliptic curves is one of the dark arts that only a few know how to do, so he decides to wrap the one that the OpenSSL library provides.
The essential question is whether he would be able to use the OpenSSL implementation without changing the internals of Botan or of his own zero-knowledge crypto system. In modular libraries this is less of an issue, since modularity generally implies working with abstract groups and writing wrappers or implementations outside the library.
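The "abstract groups" idea can be sketched in a few lines. Everything here is invented for illustration (it is not a real Botan or OpenSSL API): the protocol code depends only on a `Group` interface, and the backend, a toy modular-arithmetic group standing in for a library's elliptic-curve points, can be swapped without touching it:

```python
from abc import ABC, abstractmethod

class Group(ABC):
    """Hypothetical abstract-group interface a ZK system codes against,
    so the curve backend (Botan, OpenSSL wrapper, ...) is swappable."""
    @abstractmethod
    def op(self, a, b): ...          # the group operation
    @abstractmethod
    def identity(self): ...

class ModMulGroup(Group):
    """Toy backend: integers mod a prime under multiplication. A real
    backend would wrap a library's elliptic-curve point arithmetic."""
    def __init__(self, p):
        self.p = p
    def op(self, a, b):
        return (a * b) % self.p
    def identity(self):
        return 1

def group_pow(g: Group, base, e):
    """Square-and-multiply exponentiation, written only against the
    Group interface -- oblivious to which backend is underneath."""
    acc = g.identity()
    while e:
        if e & 1:
            acc = g.op(acc, base)
        base = g.op(base, base)
        e >>= 1
    return acc

g = ModMulGroup(101)                 # small prime, toy parameters only
print(group_pow(g, 3, 100))          # 1, by Fermat's little theorem
```

Replacing `ModMulGroup` with a wrapper around a faster curve implementation leaves `group_pow` (and everything built on it) untouched, which is exactly the property the comment above is asking for.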
I first learned of it because KeePassXC uses it https://github.com/keepassxreboot/keepassxc/blob/2.7.9/cmake...
Same for me; I found out what Botan was because my Nix-managed KeePassXC package would not compile. Had to switch to the brew cask for the time being.
A long list of supported hashes/algorithms is imo an antipattern for crypto libraries. They should focus on being very obviously correct for a small set of supported algorithms. Crypto's hard and this just increases the surface area.
I would say that this is true for protocols; TLS has been significantly improved by dropping support for many different algorithms.
But it's not obvious that the same is true for a library.
The RustCrypto project breaks each algorithm into its own crate, while Botan implements everything.
It's not obvious to me that one approach is clearly superior to the other.
It would be easier, if there would be one standard combo that would be enough (e.g. something similar to the WireGuard). Currently there are many IoT devices that mention TLS-support, but they don't specify e.g. supported ciphers and hash functions.
In the case of TLS, at least, there is a set of mandatory to implement algorithms, so in principle two conformant implementations should be able to interoperate using those. Currently, they are:
- ECDSA with P-256 for signature
- ECDH with P-256 for key establishment
- AES_128_GCM for data encryption with SHA-256 for hashing and KDF
It's a great, easy-to-learn library. I used it for Firestr.
I just read through the code. I think I'll hold out for Justine to release a crypto library.
Why? Cause you're a fucking noob and you need C++ written in Go or Java style? Yes, go and suck some of mama Justi's little tit, Johnny boy, whoever the fuck that even is.