If Rust or Haskell really make these issues far less prevalent, why not use them? Genuine question, I know nothing is so simple.
Elliptic curve cryptography was a minefield of patents when Sun Microsystems carefully crafted the OpenSSL implementation to avoid any patent infringement. The resulting source code was vetted by their attorneys.
If any of those patents are still in force, then a naive implementation could infringe.
"Sun Microsystems has donated ECC code to OpenSSL and the Network Security Services (NSS) library..."
A lot more attention was likely paid to this code than to the code behind Heartbleed.
If you run "openssl ecparam -list_curves" on a RHEL clone, you will only see p-256, p-384, and e-521 (and e-521 was only added in v7).
If you build LibreSSL and run the same command, there are dozens (and Canada doesn't allow software patents, so it's legal there).
On OpenBSD 7.1, I see this result:
$ openssl ecparam -list_curves | wc -l
Here's a Canadian patent on accelerated finite field operations on an elliptic curve. Here's one on public key cryptography using elliptic curves.
Tomas Mraz 2007-10-05 10:23:00 UTC: "They are intentionally removed due to possible patent issues."
Bill McGonigle 2013-04-11 06:26:53 UTC: "I've read that Sun's ECC code (that went to OpenSSL) was developed to specifically avoid Certicom patents, but I've only seen that asserted, not proven."
Jan-Frode Myklebust 2013-10-07 09:18:49 UTC: "Is this now solved in the RHEL6.5 beta... only the nistp256 and nistp384 curves are supported."
The original core, SSLeay, was an exemplary instance of somebody outside the core cryptographic community doing this (Eric Young was, IIRC, working as a systems programmer in a psychology department at UQ). It was hand-coded to be both algorithmically faithful and machine-optimised: it was FAST. The code had to implement both the braindead reduced-keylength algorithms required by export restrictions (ITAR) and the "illegal to export" ones.
Peter Gutmann's library was known in some ways to be "cleaner", but didn't gain traction.
A lot of OpenSSL is history.
Recoding in a type-safe language, with a mind to risks, is good. But remember, another attack pattern is differential analysis: crypto code has to do things like present an equal-cost CPU burden across different paths, to defeat attacks including the side-channel of finding hotspots in the VLSI mask and working out what the chip is doing from the information leaking there.
It's complicated. Rust or Haskell alone won't make something like OpenSSL inherently risk-free, and closing off these risks may introduce new ones.
Still worth discussing. Just not necessarily a no-brainer.
In high-level languages like Rust, the compiler does not prioritise emitting machine code which executes in constant time for all inputs. OpenSSL has implementations of some primitives which are known to be constant-time, which can be important.
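To make that concrete, here's a minimal sketch (mine, not OpenSSL's) of why this matters for something as small as comparing two MAC tags:

    // A naive comparison can return early at the first mismatching byte,
    // so its running time leaks how far the attacker's guess was correct.
    fn naive_eq(a: &[u8], b: &[u8]) -> bool {
        a == b // slice equality may short-circuit
    }

    // The usual workaround accumulates a difference mask and only inspects
    // it at the end, so every byte is processed regardless of the input.
    fn hopefully_ct_eq(a: &[u8], b: &[u8]) -> bool {
        if a.len() != b.len() {
            return false;
        }
        let mut diff = 0u8;
        for (x, y) in a.iter().zip(b.iter()) {
            diff |= x ^ y;
        }
        diff == 0
    }

Even the second version is only "hopefully" constant-time: the compiler makes no promise not to transform it, which is why crates like subtle hide values behind optimization barriers, and why OpenSSL keeps hand-written implementations for the cases that matter.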
One option, if you're working with Rust anyway, would be to use something like ring:
ring's primitives are just taken from BoringSSL, which is Google's fork of OpenSSL. They're a mix of C and assembly language; it's possible (though fraught) to write some constant-time algorithms in C if you know which compiler will be used, and of course it's possible (if you read the performance manuals carefully) to write constant-time assembly in many cases.
In the C / assembly language code, of course, you do not have any safety benefits.
It can certainly make sense to do this very tricky primitive stuff in dangerous C or assembly, but then write all the higher-level stuff in Rust; that's the sort of thing ring is intended for. BoringSSL, for example, includes C code to do X.509 parsing and signature validation, but those things aren't timing-sensitive (a timing attack on my X.509 parsing tells you nothing of value) and are complicated to do correctly, so Rust could make sense there.
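For a feel of that split, here's a sketch using ring's HMAC API (the key material is a placeholder): the secret-dependent work happens inside ring's vetted C/assembly core, while the Rust caller only does safe plumbing.

    use ring::hmac;

    fn main() {
        // Key setup and signing go through ring's primitives.
        let key = hmac::Key::new(hmac::HMAC_SHA256, b"placeholder key material");
        let tag = hmac::sign(&key, b"some message");

        // Verification is also ring's job; it compares tags in constant time.
        assert!(hmac::verify(&key, b"some message", tag.as_ref()).is_ok());
    }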
The usual way to create constant-time code in C is to inspect the output assembly for a number of (compiler, options, host system, target system) tuples and verify that it will take constant time on all of them. Even that isn't enough in general, since there are other side channels. The "Hertzbleed" attack exploits variable execution time in "constant-time" code, caused by CPU dynamic frequency scaling that depends on the (secret) input data. That effectively means that power side-channels are remotely observable.
Sure, I hoped that sort of thing was implicit in what I wrote; some people do it, perhaps they should not, but they clearly feel it's their best option. In particular for this context: writing this code in Rust doesn't help and would usually make it harder.
If we don't want to hand-roll machine code, maybe somebody should make yet another "it's C but for the 21st century" language with constant-time output as a deliberate feature: say, a const flag on your functions that means "produce constant-time machine code or error out", rather than "you can execute this function at compile time". (Not necessarily a serious syntactic suggestion, just spit-balling.)
More likely is that cryptography-specific instructions (like AES-NI or ARM's SHA hash instructions) will get added for more relevant operations.
And it's also quite possible to write timing-resistant code in Rust. Rust is not as high-level as people think; it lets you get right down to the machine level with no issue.
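For example, here's a sketch (x86_64 only, my own illustration) of pinning a branch-free select in inline assembly so the optimizer can't re-introduce a secret-dependent branch:

    #[cfg(target_arch = "x86_64")]
    fn ct_select(cond: u64, a: u64, b: u64) -> u64 {
        use std::arch::asm;
        let mut out = b;
        unsafe {
            asm!(
                "test {c}, {c}",     // sets ZF when cond == 0
                "cmovnz {out}, {a}", // out = a when cond != 0, with no branch
                c = in(reg) cond,
                a = in(reg) a,
                out = inout(reg) out,
                options(pure, nomem, nostack),
            );
        }
        out
    }

CMOV is generally data-independent in timing on mainstream x86 cores, though, as noted elsewhere in this thread, you still have to check the manuals for your actual target.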
As such, if you're relying on a standard compiler to implement a constant-time guarantee, you need to verify the generated assembly to make sure that it actually is constant-time. If you're not doing that, your constant-time guarantee is not worth the paper it's printed on, even though it isn't printed on any paper.
The bug is either in an assembly-language sequence or, at most, due to incorrect use of compiler intrinsics.
Changing the programming language cannot eliminate such bugs. Only a much more clever compiler, able to use all the instructions the CPU implements efficiently enough to remove the need for assembly language, or a much higher-level kind of assembly language, could help against them.
A more practical method would be to always run extensive fuzzing tests against all such functions written in assembly language.
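Something like a cargo-fuzz harness that cross-checks the optimized routine against a straightforward reference, e.g. (the two *_impl functions here are hypothetical stand-ins for the real pair):

    #![no_main]
    use libfuzzer_sys::fuzz_target;

    // Hypothetical stand-ins: in a real harness these would be the
    // hand-optimized assembly routine and a simple portable reference.
    fn optimized_impl(data: &[u8]) -> u64 {
        data.iter().fold(0u64, |h, &b| h.wrapping_mul(31).wrapping_add(b as u64))
    }
    fn reference_impl(data: &[u8]) -> u64 {
        data.iter().fold(0u64, |h, &b| h.wrapping_mul(31).wrapping_add(b as u64))
    }

    fuzz_target!(|data: &[u8]| {
        // Any input where the two disagree is a bug in the optimized version.
        assert_eq!(optimized_impl(data), reference_impl(data));
    });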
As a developer you should prefer libraries written in safer languages (Rust, Go, etc.), but that's not always possible given business/environment/etc. constraints.
But yeah, for new projects, choose memory safe languages.
Rewrite it in Rust? That might be worth it in the long run, but it would be a considerable effort, and it would likely take a long time to become a feasible replacement for OpenSSL. E.g., it seems likely to me that it would suffer from more bugs until it reached a certain level of maturity.
It would take a focused, long-term, sustained effort by experts to achieve and would still have many ways it could fail.
I would guess we'll get a chance to find out if this could work, though. I think someone must be attempting this already.
Of course, then the next level of "safe" language will come out, with more guarantees, and we think about rewriting to that.
Personally, I think a more productive, much shorter path would be to use a safe layer on top of the existing C language, and port the existing OpenSSL to that.
Even alternatives in C or C++ don't get a look-in. GnuTLS? Mozilla NSS? libtls out of OpenBSD? The LibreTLS port? Nobody cares.
Even though the OpenSSL API is terrible, nobody wants to support multiple TLS backends in their application. Particularly if they cross platforms.
This is one reason I personally believe standardizing good APIs is more important than implementation.
Note that the bug is only in 3.0.4, which was released June 21, 2022. So if you didn't update to this version, it's unlikely you're vulnerable.
Thankfully I can't imagine anyone using AES-OCB.
So I would say (a) OCB is widely used, at least by the ~million Mosh users on various platforms, and (b) this episode somewhat reinforces my (perhaps overweight already) paranoia about depending on other people's code or the blast radius of even well-meaning pull requests. (We really wanted to switch over to the OpenSSL implementation rather than shipping our own, in part because ours was depending on some OpenSSL AES primitives that OpenSSL recently deprecated for external users.)
Maybe one lesson here is that many people believe in the benefits of unit tests for their own code, but we're not as thorough or experienced in writing acceptance tests for our dependencies.
Mosh got lucky this time that we had pretty good tests that exercised the library enough to find this bug, and we run them as part of the package build, but it's not that farfetched to imagine that we might have users on a platform that we don't build a package for (and therefore don't run our testsuite on).
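For what it's worth, the shape of such an acceptance test is simple. Here's a sketch using RustCrypto's aes-gcm crate in place of OCB (only because that binding is handy; the same idea applies to whatever AEAD your project actually depends on):

    use aes_gcm::aead::{Aead, KeyInit};
    use aes_gcm::{Aes128Gcm, Key, Nonce};

    #[test]
    fn aead_acceptance() {
        // Fixed key/nonce are fine here: this is a correctness test, not crypto.
        let cipher = Aes128Gcm::new(Key::<Aes128Gcm>::from_slice(&[0u8; 16]));
        let nonce = Nonce::from_slice(&[0u8; 12]);

        // Known-answer check (GCM spec test case 1): zero key, zero nonce and
        // empty plaintext must produce exactly this 16-byte tag. A pure round
        // trip could miss a bug that is symmetric between encrypt and decrypt.
        let tag = cipher.encrypt(nonce, b"".as_ref()).unwrap();
        assert_eq!(
            tag,
            [0x58, 0xe2, 0xfc, 0xce, 0xfa, 0x7e, 0x30, 0x61,
             0x36, 0x7f, 0x1d, 0x57, 0xa4, 0xe7, 0x45, 0x5a]
        );

        // Sweep plaintext lengths so size-dependent bugs (the OCB bug discussed
        // here depended on the shape of the input) can't hide behind a single
        // lucky test vector.
        for len in 0..512 {
            let plaintext = vec![0xA5u8; len];
            let ciphertext = cipher.encrypt(nonce, plaintext.as_ref()).unwrap();
            let decrypted = cipher.decrypt(nonce, ciphertext.as_ref()).unwrap();
            assert_eq!(decrypted, plaintext, "round trip failed at length {len}");
        }
    }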
Progress is being made on replacing OpenSSL in a lot of contexts (specifically, the RustCrypto folks are doing excellent work and so is cryptography), but there are still plenty of areas where OpenSSL is needed to compose the mostly algebraic cryptography with the right wire format.
Edit: I forgot to mention rustls, which uses ring under the hood.
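For the curious, client setup looks roughly like this (a sketch against the rustls 0.21-era builder API; populating the root store is elided):

    use std::sync::Arc;

    fn main() {
        // An empty store for illustration; real code adds webpki-roots or
        // the platform's native certificates here.
        let root_store = rustls::RootCertStore::empty();

        let config = rustls::ClientConfig::builder()
            .with_safe_defaults()
            .with_root_certificates(root_store)
            .with_no_client_auth();

        // The ring-backed crypto sits behind this; no OpenSSL in sight.
        let server = "example.com".try_into().unwrap();
        let conn = rustls::ClientConnection::new(Arc::new(config), server).unwrap();
        println!("handshaking = {}", conn.is_handshaking());
    }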
TLS 1.3 specifies which curves and ciphers: https://en.wikipedia.org/wiki/Transport_Layer_Security#TLS_1...
(IDK what the TLS (and FIPS) PQ Algo versioning plans are: 1.4, 2.0?)
Mozilla [Open]SSL Config generator: https://ssl-config.mozilla.org/
"BoringSSL, LibreSSL and the OpenSSL 1.1.1 branch are not affected. Furthermore, only x64 systems with AVX512 support are affected."
> the vulnerability has only existed for a week (HB existed for years) and an AVX512-capable CPU is required.
So I'm guessing the real world impact here is near zero?
What systems or distros are shipping this week old version already?
As for the AES OCB bug, it sounds like something that's effectively not used at all in practice, which might explain why it's stayed unnoticed for so long.
I tend to err on the side of patching often and worrying about fallout afterwards. All the software vendors I deal with (MS, Canonical, Arch, Gentoo, Debian, RPi, Novell, err, SuSE, etc.) do a decent job.
Fixing something like dialogue boxes going weird is one thing. Faking a kicking out of a bunch of Russians out of your honeypots is another thing.
> Note that on a vulnerable machine, proper testing of OpenSSL would fail and should be noticed before deployment.
So is 'proper testing' included in the default build script, or...?
It sounds an awful lot like "you're responsible for catching our screw-ups", and it's a bit rich to tell people to do proper testing when the project itself failed to do so before letting this land.
To be vulnerable, you need to build on a non-vulnerable machine which passes the built-in tests, then deploy to a vulnerable one, and finally fail to verify that the deployment works.
Absolutely not what you are implying.
Debian for example shipped vulnerable packages: https://security-tracker.debian.org/tracker/CVE-2022-2274
If you build on an older machine where the tests pass and deploy to a newer one, and then don't check that the deployment works, you are at risk.
I think proper testing covers both options.
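One cheap guard, since the mismatch is about CPU features: have the deploy pipeline flag hosts whose feature set exceeds the build/test host's. A sketch (assuming, per the reports around this bug, that the broken path needs AVX512-class support; the exact subfeature here is my assumption):

    #[cfg(target_arch = "x86_64")]
    fn check_features() {
        // If the deploy host enables features the test host lacked, the
        // binary is running code paths that were never exercised by tests.
        if std::is_x86_feature_detected!("avx512ifma") {
            println!("AVX512IFMA present: rerun the OpenSSL test suite on this host");
        }
    }

    #[cfg(not(target_arch = "x86_64"))]
    fn check_features() {}

    fn main() {
        check_features();
    }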
If you value security, then I'd prefer ChaCha20-Poly1305. If you need speed, then use what your CPU gives you.
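E.g., ChaCha20-Poly1305 was designed to run in constant time in plain software, no special instructions needed, and with RustCrypto's chacha20poly1305 crate it's a few lines (a sketch; the fixed key and nonce are placeholders, and a nonce must never be reused in real code):

    use chacha20poly1305::aead::{Aead, KeyInit};
    use chacha20poly1305::{ChaCha20Poly1305, Key, Nonce};

    fn main() {
        let cipher = ChaCha20Poly1305::new(Key::from_slice(&[0u8; 32]));
        let nonce = Nonce::from_slice(&[0u8; 12]); // placeholder; unique per message

        let ciphertext = cipher.encrypt(nonce, b"hello".as_ref()).unwrap();
        let plaintext = cipher.decrypt(nonce, ciphertext.as_ref()).unwrap();
        assert_eq!(plaintext, b"hello");
    }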
"The GCM slide provides a list of
pros and cons to using GCM, none of which seem like a terribly big deal, but
misses out the single biggest, indeed killer failure of the whole mode, the
fact that if you for some reason fail to increment the counter, you're sending
what's effectively plaintext (it's recoverable with a simple XOR). It's an
incredibly brittle mode, the equivalent of the historically frighteningly
misuse-prone RC4, and one I won't touch with a barge pole because you're one
single machine instruction away from a catastrophic failure of the whole
cryptosystem, or one single IV reuse away from the same."
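That failure mode is easy to demonstrate. A sketch with RustCrypto's aes-gcm crate: reuse a nonce and the XOR of two ciphertexts is the XOR of the two plaintexts, so a single known plaintext unmasks the other.

    use aes_gcm::aead::{Aead, KeyInit};
    use aes_gcm::{Aes128Gcm, Key, Nonce};

    fn main() {
        let cipher = Aes128Gcm::new(Key::<Aes128Gcm>::from_slice(&[0u8; 16]));
        let nonce = Nonce::from_slice(&[0u8; 12]); // reused: the fatal mistake

        let p1 = b"attack at dawn!!";
        let p2 = b"retreat at noon!";
        let c1 = cipher.encrypt(nonce, p1.as_ref()).unwrap();
        let c2 = cipher.encrypt(nonce, p2.as_ref()).unwrap();

        // Same nonce => same CTR keystream, so it cancels under XOR and the
        // ciphertext difference equals the plaintext difference.
        for i in 0..p1.len() {
            assert_eq!(c1[i] ^ c2[i], p1[i] ^ p2[i]);
        }
        println!("nonce reuse leaked the plaintext XOR");
    }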