> "How do they accomplish their goals with project BULLRUN? One way is that United States National Security Agency (NSA) participates in Internet Engineering Task Force (IETF) community protocol standardization meetings with the explicit goal of sabotaging protocol security to enhance NSA surveillance capabilities." "Discussions with insiders confirmed what is claimed in as of yet unpublished classified documents from the Snowden archive and other sources." (page 6-7, note 8)
There have long been stories about meddling in other standards orgs (both to strengthen and to weaken them), but I don't recall hearing rumors about sabotage of IETF standards.
tptacek 9 hours ago [-]
These rumors, about IETF in particular, predate the Snowden disclosures.
Almost immediately after that happened, a well-known cypherpunk person accused the IETF IPSEC standardization process of subversion, pointing out Phil Rogaway (an extraordinarily well-respected and influential academic cryptographer) trying in vain to get the IETF not to standardize a chained-IV CBC transport (this is a bug class now best known, in TLS, as BEAST), while a chorus of IETF gadflies dunked on him and questioned his credentials. The gadflies ultimately prevailed.
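(A minimal sketch of that chained-IV bug class, for the curious: in SSL 3.0/TLS 1.0, the IV of each CBC record is the last ciphertext block of the previous record, so it is predictable, and an attacker who can inject chosen plaintext can confirm guesses about earlier blocks. Python, using the real pyca/cryptography package; record framing is simplified to bare 16-byte blocks.)
  import os
  from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

  key = os.urandom(16)

  def encrypt_record(iv, block16):
      enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
      return enc.update(block16) + enc.finalize()

  def xor(a, b):
      return bytes(x ^ y for x, y in zip(a, b))

  secret = b"hunter2_password"       # 16-byte block the victim encrypts
  iv0 = os.urandom(16)
  c1 = encrypt_record(iv0, secret)   # observed on the wire
  iv_next = c1[-16:]                 # chained IV: predictable in advance

  # The attacker injects a chosen block that cancels the known IVs; the
  # result equals c1 exactly when the guess equals the secret block.
  guess = b"hunter2_password"
  c2 = encrypt_record(iv_next, xor(xor(guess, iv0), iv_next))
  print(c2[:16] == c1[:16])          # True iff guess was right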
The moral of this story, and what makes these "NSA at IETF" allegations so insidious, is that the IETF is perfectly capable of subverting its own cryptography without any help from a meddling intelligence agency. This is a common failure of all standards organizations (W3C didn't need any help coming up with XML DSIG, which is probably the worst cryptosystem ever devised), but it's somewhat amplified in open settings like IETF.
hulitu 5 hours ago [-]
> The moral of this story, and what makes these "NSA at IETF" allegations so insidious, is that the IETF is perfectly capable of subverting its own cryptography without any help from a meddling intelligence agency
The 3 letter agencies usually recruit people from academia and sensitive organizations in order to pursue their agenda.
tptacek 5 hours ago [-]
This would be a snappier comeback if it wasn't the academic cryptographer who was right, and the random people in non-sensitive organizations who shot the right answer down.
AdamN 4 hours ago [-]
That's sort of not saying much. Are they going to recruit Joe the Plumber to submit IETF drafts?
networkchad 2 hours ago [-]
[dead]
mike_d 6 hours ago [-]
DNSSEC is a great example of just how poor committee-designed crypto is. The government doesn't need to do anything to standards, they can just let them play out as-is.
We ended up with a huge mess that solves no actual problems, introduces problems that never existed before, and is so confusing in what it does that people still blindly defend it.
SenAnder 1 hours ago [-]
Why do you assume the government didn't do anything? If anyone on those committees was paid or influenced by the NSA/CIA, they would not have disclosed it.
anonym29 9 hours ago [-]
I wonder what other institutions are infested with swarms of gadflies that'll all swear up and down in unison that something is definitely good or bad, will attack outsiders with narratives that disagree with their own, and weaken the credibility of the entire institution in the process.
Surely this can't be a phenomenon unique to technology, can it?
tptacek 9 hours ago [-]
It's just human nature. I refuse to participate in standards work, partly because it's much more pleasant throwing rocks from a safe distance, but also because I recognize in myself the same ordinary frailties that had friends of the original IPSEC standards authors writing mailing list posts about Rogaway being a "supposed" cryptographer. There but for the grace of not joining standards lists go I.
I think --- I am not kidding here, I believe this sincerely --- the correct conclusion is community-driven cryptographic standards organizations are a force for evil. Curve25519 is a good standard. So is Noise, so is WireGuard. None of those were developed in a standards organization. It's hard to think of a good cryptosystem that was. TLS? It took decades --- bad decades --- to get to 1.3. Of that outcome, I believe it was Watson Ladd who said that if you turned it in as the "secure transport" homework in your undergraduate cryptography class, you'd get a B.
red_admiral 3 hours ago [-]
It goes further back than crypto: back in the day, there was the design-by-committee OSI versus the "rough consensus and running code" IETF. The result is that while we still teach the OSI 7-layer model in universities, in practice we use TCP/IP, often with HTTP and TLS on top.
dboreham 3 hours ago [-]
TLS originated at Netscape.
sitkack 34 minutes ago [-]
So did JS; it isn't a counterargument. Many of the gadflies are useful idiots.
zdragnar 6 hours ago [-]
You're basically describing politics, where specific opinions are central to a group identity. So long as there is an innate human desire to belong, there will be plenty of people who are happy to live with any amount of cognitive dissonance.
Joining a gadfly swarm just gives them an opportunity to prove their worth to the group.
Skip to 1 minute in and you’ll see the problem in experimental form.
jdougan 13 hours ago [-]
I'm curious as to how successful they were at subverting the IETF process. It wouldn't be impossible, but since much of the process is in the open it could be difficult, especially if they did it under their own name.
I suspect most of it was done under different corporate identities, and probably just managed to slow adoption of systematic security architectures. Of course, once the Snowden papers came out, all that effort was rendered moot as the IETF reacted pretty hard.
gustavus 12 hours ago [-]
Ya ever heard of the OAuth2 protocol? I spent almost half a decade working on identity stuff, spending a lot of time in OAuth land. OAuth is an overly complicated mess that has many, many ways to go wrong and very few ways to go right.
If you told me the NSA/CIA had purposefully sabotaged the development of the OAuth2 protocol to make it so complex that no one can implement it securely it'd be the best explanation I've heard yet about why it is the monstrosity it is.
bawolff 12 hours ago [-]
> Ya ever heard of the OAuth2 protocol?
Have you ever seen SAML? Now there is a protocol that seems borderline sabotaged. CSRF tokens? Optional part of the spec. Which part of the response is signed? Up to you, with different implementations making different choices; you'd better verify the sig covers the relevant part of the doc. Can you change the signed part of the doc in a way that alters the XML parse tree without invalidating the signature? Of course you can!
Oauth2 is downright sane in comparison.
[To be clear, SAML is not an IETF spec; it just solves a similar problem to OAuth2]
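(A minimal sketch of the signature-wrapping pitfall described above, in Python with lxml; verify_signature is a hypothetical helper standing in for an XML-DSIG check, and the SAML structure is simplified.)
  from lxml import etree

  NS = "{urn:oasis:names:tc:SAML:2.0:assertion}"

  def naive_consume(doc_bytes, verify_signature):
      doc = etree.fromstring(doc_bytes)
      # Checks that the document carries *a* valid signature...
      if not verify_signature(doc):
          raise ValueError("bad signature")
      # ...then reads the *first* Assertion found. An attacker can prepend
      # an unsigned assertion while keeping the signed one intact elsewhere
      # in the tree: the signature still verifies, but this lookup returns
      # the attacker's assertion (classic XML signature wrapping).
      assertion = doc.find(".//" + NS + "Assertion")
      return assertion.findtext(".//" + NS + "NameID")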
johnmaguire 11 hours ago [-]
Honestly, as someone who has implemented both SAML and OAuth2 (+ OIDC) providers, I found SAML much easier to understand. Yes, there are dangerous exploits around XML. Yes, the spec is HUGE. But practically speaking, people only implement a few parts of it, and those parts were, IMHO, easier to understand and reason about.
This is definitely an unpopular opinion however.
tptacek 9 hours ago [-]
What's insidious about SAML is that it really is mostly straightforward to understand, but it's built on a foundation of sand, bone dust, and ash; it works --- mostly, modulo footguns like audience restrictions --- if you assume XML signature validation is reliable. But XML signature validation is deeply cursed, and is so complicated that most fielded SAML implementations are wrapping libxmlsec, a gnarly C codebase nobody reads.
johnmaguire 8 hours ago [-]
100% - XML vulnerabilities are the biggest issue. JWTs have also had their fair share, though I think those were mostly implementation bugs that have largely been ironed out at this point. XML's complexity is inherent to the language.
thaumasiotes 7 hours ago [-]
> JWTs have also had their fair share, though I think they were mostly implementation bugs that have mostly been ironed out at this point.
The most famous JWT issue, to my mind, was people implementing JWT and -- as per spec -- accepting a signature algorithm of "none" as valid.
That could be described as an "implementation bug", but it can also be described as "not an implementation bug" - all your JWT functionality is working the way it's supposed to work, it's just not doing the thing that you hoped it would do.
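(A stdlib-only Python sketch of that bug class: a verifier that reads the algorithm out of the attacker-controlled header, so "alg": "none" or an algorithm swap bypasses verification. Helper names are illustrative, not from any real library.)
  import base64, hashlib, hmac, json

  def b64d(s):
      return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

  def verify_naive(token: str, key: bytes):
      header_b64, payload_b64, sig_b64 = token.split(".")
      header = json.loads(b64d(header_b64))
      if header["alg"] == "none":      # BUG: the attacker wrote this header
          return json.loads(b64d(payload_b64))
      expected = hmac.new(key, f"{header_b64}.{payload_b64}".encode(),
                          hashlib.sha256).digest()
      if not hmac.compare_digest(expected, b64d(sig_b64)):
          raise ValueError("bad signature")
      return json.loads(b64d(payload_b64))
The fix is pinning the algorithm on the verifier's side; modern libraries force this, e.g. PyJWT's required allow-list in jwt.decode(token, key, algorithms=["HS256"]).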
amluto 7 hours ago [-]
IMO this is a defect (sabotage? plain incompetence?) in the specs, full stop.
RFC 7519 (the JWT spec) delegates all signature / authentication validation to the JWS and JWE specs. (And says that an unholy mechanism shall be used to determine whether to follow the JWS validation algorithm or the JWE algorithm.)
JWS even discusses algorithm verification (https://www.rfc-editor.org/rfc/rfc7515.html#section-10.6), but does not suggest, let alone require, the absolutely mind-bendingly obvious way to do it: when you have a key used for verification, the algorithm specification is part of the key. If I tell a library to verify that a given untrusted input is signed/authenticated by a key, the JWS design is:
  bool is_it_valid(string message, string key); // where key is an HMAC secret or whatever
and this is wrong. You do it like this:
  bool is_it_valid(string message, VerificationKey key);
where VerificationKey is an algorithm and the key. If you say to verify with an HMAC-SHA256 key and the actual message was signed with none or with HMAC-SHA384 or anything else, it is invalid. If you have a database and you know your key bits but not what cryptosystem those bits belong to, your database is wrong.
The JWE spec is not obviously better. I wouldn't be utterly shocked if it could be attacked by sending a message to someone who intends to use, say, a specific Ed25519 public key, but setting the algorithm to RSA and treating the Ed25519 public key as a comically short RSA key.
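(A short Python sketch of the API shape being argued for, with the algorithm bound to the key at creation time; names are illustrative.)
  import hashlib, hmac
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class VerificationKey:
      alg: str          # fixed when the key is created, e.g. "HS256"
      secret: bytes

  def is_it_valid(message: bytes, sig: bytes, key: VerificationKey) -> bool:
      if key.alg != "HS256":
          return False  # this key only ever speaks HS256
      expected = hmac.new(key.secret, message, hashlib.sha256).digest()
      return hmac.compare_digest(expected, sig)

  k = VerificationKey("HS256", b"shared-secret")
  tag = hmac.new(k.secret, b"hello", hashlib.sha256).digest()
  assert is_it_valid(b"hello", tag, k)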
bawolff 5 hours ago [-]
> The most famous JWT issue, to my mind, was people implementing JWT and -- as per spec -- accepting a signature algorithm of "none" as valid.
Sure, that is pretty silly. However, in SAML you have xmlsec accepting the non-standard extension where you can have a document "signed" with HMAC where the HMAC key is specified inside the attacker-controlled document. I would call that basically the same as alg=none, although at least it's non-standard.
johnmaguire 7 hours ago [-]
I did say "mostly." :) I almost mentioned this bug, but it's so famous, I doubt any mainstream OIDC libraries fall prey to it these days.
psd1 37 minutes ago [-]
I'm a massive Aliens fan, and your word choice gave me a vivid picture of a vulnerability as a xenomorph egg, leading to a swarm of nightmarish monsters invading the colony.
I don't know which species is worse. You don't see them ignoring CVEs for a percentage.
TrapLord_Rhodo 10 hours ago [-]
NSA?
ENGNR 12 hours ago [-]
Redirecting the user, when they tap sign-in on untrustednewsite.com, to a new window for bigsitewithallyourdata.com with the domain hidden, and saying “Yeah, give us your login credentials” always felt like the craziest thing to me.
So ripe for man-in-the-middle attacks. Even if you just did a straight modal and said “put your google credentials into these fields”, we’re training people that that’s totally fine.
djbusby 9 hours ago [-]
Like how Plaid for banking works.
tptacek 9 hours ago [-]
OAuth2 is about the simplest thing you could come up with to solve the delegated authentication problem. Its complexity stems mostly from the environment it operates in: it's an authentication protocol that must run on top of unmodified, oblivious browsers.
There's a lot of random additional complexity floating around it, but that complexity tracks changes in how applications are deployed, to mobile applications and standalone front-end browser applications that can't hold "app secrets".
The whole system is very convoluted, I'm not saying you're wrong to observe that, but I'd argue that's because apps have a convoluted history. The initial idea of the OAuth2 code flow is pretty simple.
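(For reference, a sketch of that basic authorization-code flow in Python; all endpoints and client credentials are placeholders, and requests is the real third-party HTTP library.)
  import secrets, urllib.parse, requests

  AUTHZ_URL = "https://idp.example.com/authorize"    # placeholder IdP
  TOKEN_URL = "https://idp.example.com/token"        # placeholder IdP
  CLIENT_ID, CLIENT_SECRET = "my-app", "app-secret"  # placeholders
  REDIRECT_URI = "https://app.example.com/callback"

  # Step 1: bounce the browser to the IdP, with a CSRF-binding state value.
  state = secrets.token_urlsafe(16)
  login_url = AUTHZ_URL + "?" + urllib.parse.urlencode({
      "response_type": "code", "client_id": CLIENT_ID,
      "redirect_uri": REDIRECT_URI, "state": state,
  })

  # Step 2: the IdP redirects back with ?code=...&state=...; after checking
  # that state matches, the server (which can hold a secret) swaps the
  # short-lived code for tokens.
  def exchange(code: str) -> str:
      resp = requests.post(TOKEN_URL, data={
          "grant_type": "authorization_code", "code": code,
          "redirect_uri": REDIRECT_URI,
          "client_id": CLIENT_ID, "client_secret": CLIENT_SECRET,
      })
      return resp.json()["access_token"]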
barsonme 7 hours ago [-]
Sounds like you’re an NSA plant, Thomas :)
jdougan 12 hours ago [-]
The problem is, as anyone with sufficient experience knows, it is perfectly possible and common for devs to design security disasters without any involvement by the spooks. I suspect the NSA counts on this and uses any covert influence they might have to slow corrections (e.g., profiles for OAuth2 that are actually reasonable, patches to common services, etc.).
eddythompson80 12 hours ago [-]
Sabotaging a design is remarkably easy. We have several individuals who do it almost effortlessly; it's almost a talent for some. I suspect that doing it maliciously while hiding behind some odd corner scenario or some compatibility requirement can't be that hard, and will be almost impossible to prove or detect.
jdougan 11 hours ago [-]
On the other hand, why would the NSA even bother, with these talented individuals providing them a never-ending supply of security issues? And they don't even have to pay for them, beyond having their normal 0-day discovery team look for them. I think their reaction when a 0-day they are relying on gets patched out supports this.
I can imagine special cases where they would exert resources and effort, such as the IETF or some other economically efficient chokepoint (Rust?), but not in general.
psd1 35 minutes ago [-]
As a senior professional in any field, you would, if you could, move from a reactive strategy to a proactive one.
willis936 12 hours ago [-]
Just because you're paranoid doesn't mean they're not after you.
esafak 12 hours ago [-]
Is there any write-up on the IETF reaction?
jdougan 12 hours ago [-]
There were a bunch at the time; a historical retrospective is "RFC 9446: Reflections on Ten Years Past the Snowden Revelations": https://www.rfc-editor.org/rfc/rfc9446.html
Fantastic and damning read. I had no idea they were so active and nefarious even in the 70s.
tptacek 6 hours ago [-]
Are you sure you read it carefully? NSA's role in DES was to make it resistant to linear and differential cryptanalysis.
adrian_b 3 hours ago [-]
True, but they have also forced the reduction of the key size from 64 bits to 56 bits, making it much more vulnerable to brute force.
So they have hardened DES towards attacks from those with much less money than NSA, while making it weaker against rich organizations like NSA.
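(Back-of-envelope arithmetic on what those 8 dropped bits buy, in Python; the key-search rate below is an assumption for illustration only, not a fact about anyone's capability.)
  keyspace_56 = 2 ** 56              # ~7.2e16 keys
  keyspace_64 = 2 ** 64              # ~1.8e19 keys
  print(keyspace_64 // keyspace_56)  # 256: every dropped bit halves the work

  rate = 1e13                        # assumed keys/sec for a rich attacker
  print(keyspace_56 / rate / 3600)   # ~2 hours to sweep 56 bits
  print(keyspace_64 / rate / 86400)  # ~21 days for 64 bits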
tetris11 3 hours ago [-]
They made it resistant to differential attacks, yes, but kept it weak against their own preferred attack (brute force).
EvanAnderson 11 hours ago [-]
I'm too young to have firsthand experience, but I've seen the speculation that IPSEC was an example of this kind of strategy. It's certainly more-- ahem-- flexible than it probably ought to be. I know there have been exploits against ISAKMP implementations. I'd assume the baroque nature of the protocol drove some of that vulnerability.
hinkley 12 hours ago [-]
Not IETF, but NIST, which I suspect is worse. Dual_EC_DRBG was withdrawn after it was shown to be an apparent NSA attempt to backdoor an ECC-based random number generator standard.
willis936 12 hours ago [-]
NIST EC DSA curves are the only ones used by CAs, are manipulatable, and have no explanation for their origin. Pretty much the entire HTTPS web is likely an open book to the NSA.
tptacek 9 hours ago [-]
I don't know what you mean by "manipulatable". I don't know what you mean by "DSA". I assume what you're doing is casting aspersions on the NIST P-curves. They're the most widely used curves on the Internet (hopefully not for too long).
I don't think all that many serious people believe there's anything malign about them. It's easy to dunk on them, because they're based on "random seeds" generated inside NIST or NSA. As a "nothing up our sleeves" measure, NSA generated these seeds and then passed them through SHA1, thus ostensibly destroying any structure in the seed before using it to generate coefficients for the curve.
The ostensible "backdoor attack" here is that NSA could have used its massive computation resources (bear in mind this happened in the 1990s) to conduct a search for seeds that hashed to some kind of insecure curve. The problem with this logic is that unless the secret curve weakness is very common, the attack is implausible. It's not that academic researchers automatically would have found it, but rather that NSA wouldn't be able to count on them not finding it.
Search for [koblitz menezes enigma curve] for a paper that goes into the history of this, and makes that argument (I just shoplifted it from the paper). If you don't know who Neal Koblitz and Alfred Menezes are, we're not speaking the same language anyways.
The real subtext to this "P-curves are corrupted" claim is that there are curves everyone drastically prefers, most especially Curve25519 (the second-most popular curve on the Internet). Modern curves have nice properties, like (mostly) not requiring point validation for security (any 32-byte string is a secure key, which is decidedly not the case for ordinary short Weierstrass curves), and being easy to implement without timing attacks.
The author of Curve25519 doesn't trust the NIST curves. That can mean something to you, but it's worth pointing out that author doesn't trust any other curves, either. Cryptographers have proposed "nothing up my sleeves" curves, whose coefficients are drawn from mathematical constants (like pi and e). Bernstein famously co-authored a paper that attempted to demonstrate that you could generate an "untrustworthy" curve by searching through permutations of NUMS parameters. It was fun stunt cryptography, but if you're looking for parsimony and peer consensus on these issues, you're probably better off with Menezes.
Incidentally, you said "ECDSA curve". ECDSA is an algorithm, not a curve. But nobody likes ECDSA, which was also designed by the NSA. A very similar situation plays out there --- Curve25519's author also invented Ed25519, a Schnorr-like signing scheme that resolves a variety of problems with ECDSA. Few people claim ECDSA is enemy action, though; we all just sort of understand that everyone had a lot to learn back in 1997.
mschuster91 12 hours ago [-]
Forged certificates would immediately show up in Certificate Transparency logs.
SV_BubbleTime 10 hours ago [-]
He isn’t saying the CA is bad. He is saying the curves selected are arbitrary and they stuck to specific ones with no reason. That the NSA has a backdoor to at least some TLS EC algorithms.
Now, IMO I thought the whole point was that the curve itself didn’t really matter. As you are just picking X Y points on it and doing some work from there. But if there is a flaw, and it required specific curves to work, well there you go.
jdougan 12 hours ago [-]
The NIST process (especially then) isn't fully open, which makes it easier to subvert with an inside agent.
charcircuit 10 hours ago [-]
There is no evidence that it was backdoored.
psd1 24 minutes ago [-]
Circumstantial evidence is still evidence. I'll acknowledge that this is extremely tenuous, but: the NSA has gimped algos before, wants to do it again as frequently as possible, and has the capacity to do so.
Unfortunately, at some point we (non-crypto-experts) have to trust something.
dmix 5 hours ago [-]
The article references Russia's SORM system, which provides not only the FSB but also the police and tax agencies with basically full access to everything on the internet, including credit card transactions. This started in 1995 and was penetrated by the NSA.
> Under SORM‑2, Russian Internet service providers (ISPs) must install a special device on their servers to allow the FSB to track all credit card transactions, email messages and web use. The device must be installed at the ISP's expense.
Originally there was a warrant system, but it seemed quite liberal, and they don't bother with secret-court-style "oversight" like the US:
> Since 2010, intelligence officers can wiretap someone's phones or monitor their Internet activity based on received reports that an individual is preparing to commit a crime. They do not have to back up those allegations with formal criminal charges against the suspect. According to a 2011 ruling, intelligence officers have the right to conduct surveillance of anyone who they claim is preparing to call for "extremist activity."
https://en.wikipedia.org/wiki/SORM?wprov=sfti1
Then in 2016 a counterterrorism law was passed; it sounds like ISPs/telecoms are required to store everything for 6 months, and it merely has to be requested by “authorities” (guessing beyond just the FSB) without a court order:
> Internet and telecom companies are required to disclose these communications and metadata, as well as "all other information necessary" to authorities on request and without a court order
https://en.wikipedia.org/wiki/Yarovaya_law?wprov=sfti1
> Equally troubling, the new counterterrorism law also requires Internet companies to provide to security authorities “information necessary for decoding” electronic messages if they encode messages or allow their users to employ “additional coding.” Since a substantial proportion of Internet traffic is “coded” in some form, this provision will affect a broad range of online activity.
SenAnder 56 minutes ago [-]
> Then in 2016 a counterterrorism law was passed; it sounds like ISPs/telecoms are required to store everything for 6 months, and it merely has to be requested by “authorities” (guessing beyond just the FSB) without a court order
Russia is not alone. The EU required the same thing from 2006, until the EU Court struck it down in 2014: https://en.wikipedia.org/wiki/Data_Retention_Directive
https://en.wikipedia.org/wiki/Data_retention#European_Union is a fun read on how the EU fined countries for adhering to their respective national constitutions and refusing to implement that directive.
Looking beyond the EU, there are plenty of allegedly democratic countries on that wiki page with legally-required data retention:
In 2015, the Australian government introduced mandatory data retention laws that allows data to be retained up to two years. [..] It requires telecommunication providers and ISPs to retain telephony, Internet and email metadata for two years, accessible without a warrant
justusw 10 hours ago [-]
So if governments are sniffing on high entropy traffic, could we just send normal seeming (SSH or whatever) packets with the payload coming from /dev/urandom? Would that be a denial of service?
  dd if=/dev/urandom of=encrypted1.zip bs=4M count=1
and upload it to dropbox and google drive
NSA will archive it for you expecting to decrypt the content later in the future when a software bug is found or hardware is more capable.
janandonly 2 hours ago [-]
And see the national debt and tax burden rise even more.
Who is really winning if anybody does this?
agilob 2 hours ago [-]
Hard drive manufacturers I have stocks of?
lmm 8 hours ago [-]
If you could get enough people doing it, yes. But in practice the only people who would care enough would be the people the governments want to watch. Even a decent chunk of the crypto community would rather dunk on a cryptosystem they don't like than actually encrypt their emails (although of course how much of that is the NSA disrupting things is an open question).
elcritch 7 hours ago [-]
I actually sorta think unencrypted email has been a boon for society as both corporations and government agencies have left paper trails that helped expose their misdeeds later.
BlueTemplar 4 hours ago [-]
I thought there were wildly popular messaging apps now that were encrypted by default?
__MatrixMan__ 9 hours ago [-]
When they find a way to "decrypt" /dev/urandom output into, you know, whatever is expedient for them at some future date, do you think a secret judge in a secret court is going to believe that you were just moving around random noise for the lulz?
ta988 7 hours ago [-]
Your random string at position 483828180 says IdIdIt
that's our proof right here
shmde 12 hours ago [-]
My conspiracy theory is that AES 256 has been cracked by NSA/CIA but they just shut up about it so everyone feels safe.
sandworm101 11 hours ago [-]
If AES is cracked by the NSA then they know that there is a fault. They must therefore assume that others could know about it and might exploit that fault, either now or in the future. A massive amount of American infrastructure, including intelligence services, relies upon AES not having such holes. They wouldn't sit on such information.
mike_d 6 hours ago [-]
> including intelligence services, relies upon AES not having such holes. They wouldn't sit on such information.
The NSA uses a completely different set of algorithms called Suite A. They don't use AES for exactly this reason.
The_Colonel 3 hours ago [-]
> Suite A will be used for the protection of some categories of especially sensitive information (a small percentage of the overall national security-related information assurance market)
From https://en.wikipedia.org/wiki/NSA_Suite_A_Cryptography
The government is known to hoard and deploy zero days for surveillance... That was part of the Snowden leak. Google/ask ChatGPT about Tailored Access Operations, revealed in the Snowden papers.
4bpp 10 hours ago [-]
If the flaw in AES is subtle enough, they may be (justifiably?) convinced that nobody else's capabilities will suffice to discover and/or exploit it - or, at least, that they have enough of a grasp of what everyone else is doing that no other actor could discover the flaw without the NSA finding out about this.
c0pium 8 hours ago [-]
Encryption bugs are fundamentally different from other exploits; if I send traffic protected by encryption, it needs to stay secure for as long as the information needs to be secret. With national security that can be decades.
jongjong 9 hours ago [-]
It seems reasonable that they would set a self-imposed expiry date on exploits then get them patched quietly.
AnimalMuppet 10 hours ago [-]
They wouldn't go public with such information. They would very quietly get the most important parts moved off of it, quickly, with as little noise as possible.
sandworm101 10 hours ago [-]
Moving any part of the US government off of AES would be noticed. The NSA doesn't send IT people to do installs. They set standards which are then incorporated into products by outside contractors selling to government agencies. Any attempted migration away from AES would be news here on HN within hours.
That’s not the point: if there’s a grave flaw in AES, other countries can find it too.
lmm 8 hours ago [-]
If they can find a flaw in AES 256, others can too.
My theory is they've got a backdoor in the iPhone hardware random number generator. It's the most obvious high-value target, it's inherently almost undetectable (the output of a CSPRNG is indistinguishable from real random numbers), and you can keep crypto researchers busy with fights about whose cryptosystem is best, safe in the knowledge that it doesn't matter what they come up with, they were owned from the start.
https://en.wikipedia.org/wiki/Yao%27s_test
bmc7505 5 hours ago [-]
They would be foolish not to at least try, either by intercepting entropy-supplying syscalls or compromising the hardware directly. RNG attacks are easy to design and nearly impossible to detect.
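(A Python sketch of why such an attack is nearly impossible to detect from outputs alone: a "generator" that is really a keyed PRF in counter mode looks random to anyone without the key, yet is fully reproducible by whoever holds it. Purely illustrative.)
  import hashlib, hmac

  class BackdooredRNG:
      def __init__(self, backdoor_key: bytes):
          self.key, self.ctr = backdoor_key, 0

      def read(self, n: int) -> bytes:
          # HMAC-SHA256 in counter mode: passes statistical tests, but is
          # deterministic given the backdoor key.
          out = b""
          while len(out) < n:
              out += hmac.new(self.key, self.ctr.to_bytes(8, "big"),
                              hashlib.sha256).digest()
              self.ctr += 1
          return out[:n]

  victim = BackdooredRNG(b"known-to-the-agency")
  nonce = victim.read(32)             # looks perfectly random...
  attacker = BackdooredRNG(b"known-to-the-agency")
  assert attacker.read(32) == nonce   # ...but the attacker can replay it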
Try to implement out-of-spec authentication and watch your logs fill up with bots.
sunk1st 11 hours ago [-]
What’s the implication? Sorry. I can see it going either way.
older 12 hours ago [-]
[flagged]
xhoptre 13 hours ago [-]
[flagged]
drunner 13 hours ago [-]
What an awful take.
umeshunni 9 hours ago [-]
what was the take?
lcnPylGDnU4H9OF 9 hours ago [-]
There is an option in your profile called “showdead”. You should be able to read the comment after updating that option to be affirmative.
ementally 8 hours ago [-]
Should have just kept it off.
lcnPylGDnU4H9OF 7 hours ago [-]
I love reading the controversial things. Admittedly, it’s very commonly low quality like OP in this thread but sometimes you get good if unpopular arguments. The challenge to my worldview is a utility that’s difficult to replace.
Krasnol 13 hours ago [-]
This is a genuine question, I am curious as to what drives men such as you to such comments.
didntcheck 2 hours ago [-]
It doesn't take much outlandish theorizing to wonder who might be interested in character assassination of people challenging surveillance
https://news.ycombinator.com/item?id=11872642
^ that is what that is. Or, in more detail: https://github.com/Enegnei/JacobAppelbaumLeavesTor/blob/mast...
azinman2 13 hours ago [-]
I doubt it’s connected but would be fascinating if true.
johnnyworker 13 hours ago [-]
Because they're so bad it would need a global system of total surveillance to catch them? Sure.
You can't connect real things like these documents with slander by people who do nothing to step on the toes of the NSA. That is all that the BS about Assange or Appelbaum being a sex menace or Snowden being a Russian asset is. "Oh noes, they're a threat to the work we're not doing". Nobody is asking you to get drinks with Assange or Appelbaum. They don't want to be your friend. It's okay if you don't like them, for whatever personal reasons (and choosing to fall for this crap falls under personal reasons). It's not okay to be part of a mob that murders people by throwing a pebble each with this plausible deniability, in this "genuinely curious" just wondering kind of way. Enough is enough.
It certainly isn't fascinating. 3 letter agencies are torturing and murdering people, and having nothing better to do than gossip about gossip about messengers is just vulgar, boring, infantile cowardice, puffed up with not even clever words.
ftyers 11 hours ago [-]
Wow djb was on his committee, cool.
I didn’t know who this was, others probably too.
tptacek 9 hours ago [-]
DJB is widely thought to be the entire reason he was in the program at all.