Computer Security and the Internet: Tools and Jewels
by Paul C. van Oorschot. 2020, Springer. 365 pages plus frontmatter.
ISBN: 978-3-030-33648-6 (hardcopy), 978-3-030-33649-3 (eBook)
The (chapter PDF) book copy on this site is a self-archived author-created version for personal use.
Reposting and all other forms of redistribution are strictly prohibited.
Copyright (c)2020 Paul C. van Oorschot. Under publishing license to Springer.
General Packet Radio Service (GPRS) is a mobile data standard that was widely used in the early 2000s. The first encryption algorithm for that standard was GEA-1, a stream cipher built on three linear-feedback shift registers and a non-linear combining function. Although the algorithm has a 64-bit key, the effective key length is only 40 bits, due to “an exceptional interaction of the deployed LFSRs and the key initialization, which is highly unlikely to occur by chance.”
GEA-1 was designed by the European Telecommunications Standards Institute in 1998. ETSI was — and maybe still is — under the auspices of SOGIS: the Senior Officials Group, Information Systems Security. That’s basically the intelligence agencies of the EU countries.
Details are in the paper: “Cryptanalysis of the GPRS Encryption Algorithms GEA-1 and GEA-2.” GEA-2 does not have the same flaw, although the researchers found a practical attack with enough keystream.
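To make the construction described above concrete, here is a toy keystream generator in the same general shape: three LFSRs whose output bits pass through a non-linear combiner. The register sizes, tap positions, key loading, and combiner below are illustrative stand-ins, not GEA-1's actual design.

```python
# Toy LFSR-based keystream generator (illustrative only; NOT GEA-1's
# registers, taps, key schedule, or combining function).

def lfsr_bit(state, taps, nbits):
    """One step of a Fibonacci LFSR: output the low bit, feed back the
    XOR of the tapped bits into the top. Returns (output_bit, new_state)."""
    out = state & 1
    fb = 0
    for t in taps:
        fb ^= (state >> t) & 1
    state = (state >> 1) | (fb << (nbits - 1))
    return out, state

def keystream(key, n):
    """Toy key load: split a 64-bit key across three registers
    (16 + 20 + 28 bits), then combine their outputs non-linearly."""
    s1 = (key & 0xFFFF) | 1            # '| 1' avoids all-zero states
    s2 = ((key >> 16) & 0xFFFFF) | 1
    s3 = ((key >> 36) & 0xFFFFFFF) | 1
    bits = []
    for _ in range(n):
        b1, s1 = lfsr_bit(s1, (0, 2, 3, 5), 16)
        b2, s2 = lfsr_bit(s2, (0, 3), 20)
        b3, s3 = lfsr_bit(s3, (0, 1, 4, 6), 28)
        # non-linear combiner: majority of the three register outputs
        bits.append((b1 & b2) ^ (b1 & b3) ^ (b2 & b3))
    return bits

print(keystream(0x0123456789ABCDEF, 16))
```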
In this paper we review the principles of Zero Trust security, and the aspects of IoT that make proactive application of Zero Trust to IoT different than its application to the workforce. The key capabilities of Zero Trust for IoT are defined for companies with an IoT strategy, and next steps highlight Microsoft solutions enabling your journey of Zero Trust for IoT.
As organizations increasingly rely on automated systems for core business processes, the importance of improving the security posture of IoT is becoming business-critical. The Zero Trust model based on the principles of “never trust” and “always verify” can be applied to IoT to improve security posture.
In his 2008 white paper that first proposed bitcoin, the anonymous Satoshi Nakamoto concluded with: “We have proposed a system for electronic transactions without relying on trust.” He was referring to blockchain, the system behind bitcoin cryptocurrency. The circumvention of trust is a great promise, but it’s just not true. Yes, bitcoin eliminates certain trusted intermediaries that are inherent in other payment systems like credit cards. But you still have to trust bitcoin — and everything about it.
Much has been written about blockchains and how they displace, reshape, or eliminate trust. But when you analyze both blockchain and trust, you quickly realize that there is much more hype than value. Blockchain solutions are often much worse than what they replace.
First, a caveat. By blockchain, I mean something very specific: the data structures and protocols that make up a public blockchain. These have three essential elements. The first is a distributed (as in multiple copies) but centralized (as in there’s only one) ledger, which is a way of recording what happened and in what order. This ledger is public, meaning that anyone can read it, and immutable, meaning that no one can change what happened in the past.
The second element is the consensus algorithm, which is a way to ensure all the copies of the ledger are the same. This is generally called mining; a critical part of the system is that anyone can participate. It is also distributed, meaning that you don’t have to trust any particular node in the consensus network. It can also be extremely expensive, both in data storage and in the energy required to maintain it. Bitcoin has the most expensive consensus algorithm the world has ever seen, by far.
Finally, the third element is the currency. This is some sort of digital token that has value and is publicly traded. Currency is a necessary element of a blockchain to align the incentives of everyone involved. Transactions involving these tokens are stored on the ledger.
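As a concrete illustration of the first element, here is a minimal sketch of an append-only, hash-chained ledger in Python. It shows only why tampering with past entries is detectable; it deliberately omits everything else (consensus, signatures, mining) that a real public blockchain layers on top.

```python
# Minimal hash-chained ledger sketch (illustrative only).
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

ledger = [{"index": 0, "prev": None, "data": "genesis"}]

def append(data: str) -> None:
    prev = ledger[-1]
    ledger.append({"index": prev["index"] + 1, "prev": block_hash(prev), "data": data})

append("alice pays bob 1 token")
append("bob pays carol 1 token")

# Tampering with an earlier entry breaks every later "prev" link:
ledger[1]["data"] = "alice pays mallory 1000 tokens"
print(block_hash(ledger[1]) == ledger[2]["prev"])  # False -> tampering is detectable
```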
Private blockchains are completely uninteresting. (By this, I mean systems that use the blockchain data structure but don’t have the above three elements.) In general, they have some external limitation on who can interact with the blockchain and its features. These are not anything new; they’re distributed append-only data structures with a list of individuals authorized to add to it. Consensus protocols have been studied in distributed systems for more than 60 years. Append-only data structures have been similarly well covered. They’re blockchains in name only, and — as far as I can tell — the only reason to operate one is to ride on the blockchain hype.
All three elements of a public blockchain fit together as a single network that offers new security properties. The question is: Is it actually good for anything? It’s all a matter of trust. //
In 2012, I wrote a book about trust and security, Liars and Outliers. In it, I listed four very general systems our species uses to incentivize trustworthy behavior. The first two are morals and reputation. The problem is that they scale only to a certain population size. Primitive systems were good enough for small communities, but larger communities required delegation, and more formalism.
The third is institutions. Institutions have rules and laws that induce people to behave according to the group norm, imposing sanctions on those who do not. In a sense, laws formalize reputation. Finally, the fourth is security systems. These are the wide varieties of security technologies we employ: door locks and tall fences, alarm systems and guards, forensics and audit systems, and so on.
These four elements work together to enable trust. Take banking, for example. Financial institutions, merchants, and individuals are all concerned with their reputations, which prevents theft and fraud. The laws and regulations surrounding every aspect of banking keep everyone in line, including backstops that limit risks in the case of fraud. And there are lots of security systems in place, from anti-counterfeiting technologies to internet-security technologies. //
What blockchain does is shift some of the trust in people and institutions to trust in technology. You need to trust the cryptography, the protocols, the software, the computers and the network. And you need to trust them absolutely, because they’re often single points of failure.
When that trust turns out to be misplaced, there is no recourse. If your bitcoin exchange gets hacked, you lose all of your money. If your bitcoin wallet gets hacked, you lose all of your money. If you forget your login credentials, you lose all of your money. If there’s a bug in the code of your smart contract, you lose all of your money. If someone successfully hacks the blockchain security, you lose all of your money. In many ways, trusting technology is harder than trusting people. Would you rather trust a human legal system or the details of some computer code you don’t have the expertise to audit?
Blockchain enthusiasts point to more traditional forms of trust — bank processing fees, for example — as expensive. But blockchain trust is also costly; the cost is just hidden. For bitcoin, that’s the cost of the additional bitcoin mined, the transaction fees, and the enormous environmental waste. //
To the extent that people don’t use bitcoin, it’s because they don’t trust bitcoin. That has nothing to do with the cryptography or the protocols. In fact, a system where you can lose your life savings if you forget your key or download a piece of malware is not particularly trustworthy. No amount of explaining how SHA-256 works to prevent double-spending will fix that. //
Do you need a public blockchain? The answer is almost certainly no. A blockchain probably doesn’t solve the security problems you think it solves. The security problems it solves are probably not the ones you have. (Manipulating audit data is probably not your major security risk.) A false trust in blockchain can itself be a security risk. The inefficiencies, especially in scaling, are probably not worth it. I have looked at many blockchain applications, and all of them could achieve the same security properties without using a blockchain — of course, then they wouldn’t have the cool name. //
Honestly, cryptocurrencies are useless. They’re only used by speculators looking for quick riches, people who don’t like government-backed currencies, and criminals who want a black-market way to exchange money.
To answer the question of whether the blockchain is needed, ask yourself: Does the blockchain change the system of trust in any meaningful way, or just shift it around? Does it just try to replace trust with verification? Does it strengthen existing trust relationships, or try to go against them? How can trust be abused in the new system, and is this better or worse than the potential abuses in the old system? And lastly: What would your system look like if you didn’t use blockchain at all?
Messages were routed to an FBI-owned server and decrypted with a master key. //
The Federal Bureau of Investigation created a company that sold encrypted devices to hundreds of organized crime syndicates, resulting in 800 arrests in 16 countries, law-enforcement authorities announced today. The FBI and agencies in other countries intercepted 27 million messages over 18 months before making the arrests in recent days, and more arrests are planned.
The FBI teamed up with Australian Federal Police to target drug trafficking and money laundering. They "strategically developed and covertly operated an encrypted device company, called ANOM, which grew to service more than 12,000 encrypted devices to over 300 criminal syndicates operating in more than 100 countries, including Italian organized crime, outlaw motorcycle gangs, and international drug trafficking organizations," Europol said today. //
"For years, organized crime figures around the globe relied on the devices to orchestrate international drug shipments, coordinate the trafficking of arms and explosives, and discuss contract killings, law enforcement officials said," the Times wrote. "Users trusted the devices' security so much that they often laid out their plans not in code, but in plain language."
Unbeknownst to users, messages were routed to an FBI-owned server and decrypted with a master key controlled by the FBI. //
The operation was given the names "Trojan Shield" and "Greenlight." Europol called it "one of the largest and most sophisticated law enforcement operations to date in the fight against encrypted criminal activities." //
The FBI has complained about encryption in consumer products for years, with one FBI official in 2018 reportedly calling Apple "jerks." Today's announcement demonstrates again that law enforcement has the ability to target criminals' use of encrypted communications without making mass-market devices less secure.
Generate and verify the MD5/SHA1 checksum of a file without uploading it.
Generate the hash of the string you input.
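For reference, the same checksums can be computed locally with nothing more than Python's standard hashlib. This short sketch hashes a file in chunks and also hashes an input string; the file path is whatever you pass in.

```python
# Compute MD5 and SHA-1 checksums of a file, and the hash of a string,
# entirely locally with Python's standard library.
import hashlib

def file_checksums(path):
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):  # read in 8 KiB chunks
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest(), sha1.hexdigest()

# Hash of an input string:
print(hashlib.sha1("hello".encode()).hexdigest())
```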
Even when you pay for a decryption key, your files may still be locked up by another strain of malware.
It seems that some higher powers in government think encryption is only used for nefarious purposes. //
Here, though, as my colleague Asha Barbaschow reported, are the public thoughts of the commission: If you use encryption, you're likely a crook. Which may surprise one or two iMessage and WhatsApp users. //
The commission's actual words about encrypted communication services were: "These platforms are used almost exclusively by SOC [serious and organised crime] groups and are developed specifically to obscure the identities of the involved criminal entities and enable avoidance of detection by law enforcement."
I do understand that there are many bad people in the world. I fear I have done business with some. A few may have even become my friends for a short while. //
But to suggest -- with a straight face and a public voice -- that encryption is almost exclusive to the evil seems like the sort of exaggeration that only a politician would embrace. Publicly.
The Australian Criminal Intelligence Commission has said an encrypted communication platform is not something a law-abiding member of the community would use.
If you're a web developer, you've probably had to make a user account system. The most important aspect of a user account system is how user passwords are protected. User account databases are hacked frequently, so you absolutely must do something to protect your users' passwords if your website is ever breached. The best way to protect passwords is to employ salted password hashing. This page will explain why it's done the way it is.
There are a lot of conflicting ideas and misconceptions on how to do password hashing properly, probably due to the abundance of misinformation on the web. Password hashing is one of those things that's so simple, but yet so many people get wrong. With this page, I hope to explain not only the correct way to do it, but why it should be done that way. //
To make it impossible for an attacker to create a lookup table for every possible salt, the salt must be long. A good rule of thumb is to use a salt that is the same size as the output of the hash function. For example, the output of SHA256 is 256 bits (32 bytes), so the salt should be at least 32 random bytes.
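A minimal sketch of the approach using only Python's standard library, with PBKDF2 assumed as the slow hash and an illustrative iteration count: the salt is 32 random bytes, matching SHA-256's output size per the rule of thumb above.

```python
# Salted password hashing sketch: random 32-byte salt + PBKDF2-HMAC-SHA256.
# The iteration count is illustrative; tune it for your hardware.
import hashlib
import hmac
import os

ITERATIONS = 600_000

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(32)  # 32 random bytes, same size as SHA-256 output
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
```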
In 2016, Juniper removed the backdoored Dual_EC DRBG algorithm from its ScreenOS operating system. NIST also withdrew the algorithm, citing security concerns.
Juniper’s use of Dual_EC dates to 2008, at least a year after Dan Shumow and Niels Ferguson’s landmark presentation at the CRYPTO conference, which first cast suspicion on Dual_EC being backdoored by the NSA.
To many, Juniper’s move to remove Dual_EC (and also the ANSI X9.31 PRNG) confirmed the widely held belief that the vulnerabilities were tied to NSA operations described in a 2013 article published by the German publication Der Spiegel. That article described the existence of a catalog of hardware and software tools used by the NSA to infiltrate equipment manufactured by Juniper, Cisco and Huawei. The story was based on documents leaked in 2013 by former NSA contractor Edward Snowden.
Calls for encryption backdoors date back to the 1990s and the so-called Crypto Wars. That’s when President Bill Clinton’s administration insisted that the U.S. government have a way to break the encryption that was exported outside of the United States.
After tech behemoths like Twitter moved to ban Trump and thousands of other far-right accounts, millions moved to apps like Signal and Telegram for their encrypted messaging services.
There's one rub, though: Telegram, unlike Signal, doesn't have end-to-end encryption by default.
End-to-end encryption means that only the message sender and receiver can read the message. Even the server that hosts it, such as Signal or iMessage on Apple devices, can't decrypt and read what someone wrote. If those servers were ever hacked, hackers wouldn't be able to read the messages, either. It's safe to say, then, that end-to-end (e2e) encryption is an imperative element to secure messaging.
When comparing Signal vs Telegram, the Slant community recommends Signal for most people. In the question “What is the best team chat software?” Signal is ranked 2nd while Telegram is ranked 7th. The most important reason people chose Signal is:
Signal uses an advanced end-to-end encryption protocol that provides privacy for every message every time.
One countermeasure that can partially mitigate the attack is for service providers that offer key-based 2FA to use a feature baked into the U2F standard that counts the number of interactions a key has had with the provider’s servers. If a key reports a number that doesn’t match what’s stored on the server, the provider will have good reason to believe the key is a clone. A Google spokeswoman said the company has this feature.
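A rough server-side sketch of that counter check (the storage and function names here are hypothetical, not any provider's actual API): the server keeps the highest signature counter it has seen per key and treats any report that fails to advance it as a possible clone.

```python
# Hypothetical sketch of the U2F/FIDO signature-counter check described above.
stored_counters = {}   # key_id -> highest counter seen so far
clone_warnings = []    # key_ids flagged as possibly cloned

def check_counter(key_id: str, reported: int) -> bool:
    last = stored_counters.get(key_id, -1)
    if reported <= last:
        clone_warnings.append(key_id)  # counter did not advance: suspect a clone
        return False
    stored_counters[key_id] = reported
    return True

print(check_counter("key-1", 10))  # True, first sighting
print(check_counter("key-1", 11))  # True, counter advanced
print(check_counter("key-1", 11))  # False, replayed counter -> possible clone
```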
For storing rarely used secrets that should not be kept on a networked computer, it is convenient to print them on paper. However, ordinary barcodes cannot store much more than 2000 octets of data, and in practice even such small amounts cannot be reliably read by widely used software (e.g. ZXing).
In this note I show a script for splitting small amounts of data across multiple barcodes and generating a printable document. Specifically, this script is limited to less than 7650 alphanumeric characters, such as from the Base-64 alphabet. It can be used for archiving Tarsnap keys, GPG keys, SSH keys, etc.
The script is implemented in Python, since it is one of the most widespread interpreters; the script is compatible with both Python 2 and Python 3 and has one external dependency, the iec16022 binary. On Debian-based systems these can be installed using
apt-get install python3 iec16022
The script accepts any ASCII sequence and generates an HTML page sized adequately for printing on A5 paper that contains multiple ISO/IEC 16022 (Data Matrix) barcodes. The barcodes can be read with any off-the-shelf software, e.g. ZXing. Even if up to 30% of the barcode area is corrupted, the data can still be recovered. //
This page can now be printed on a laser printer (with no margins) and laminated. If done properly, it is likely to outlast the service for which it holds the secrets.
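For illustration only (this is not the author's script), the splitting step might look something like the Python sketch below. The chunk size is an assumption, and each emitted line would then be handed to iec16022 to render as a Data Matrix barcode.

```python
# Hypothetical sketch: split an ASCII secret into labelled chunks small
# enough for individual Data Matrix barcodes. CHUNK_SIZE is an assumption,
# chosen to stay well under typical Data Matrix capacity.
import sys

CHUNK_SIZE = 1000

def split_secret(data: str):
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for n, chunk in enumerate(chunks, 1):
        # prefix each chunk with its index so the pieces can be reassembled
        yield f"{n:02d}/{len(chunks):02d} {chunk}"

if __name__ == "__main__":
    for labelled_chunk in split_secret(sys.stdin.read().strip()):
        print(labelled_chunk)  # each line would then be encoded by iec16022
```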
ABC reported that Blake worked on decoding the message using a University of Melbourne supercomputer called Spartan.
“During the year we tested, by trial and error, around 650,000 different reading directions through the cipher. This search turned up — more or less — nothing,” Blake said. “However, one of these searches uncovered a surprising combination of words: GAS CHAMBER. That such a macabre phrase should pop up in a sea of noise warranted further attention.
“From this fragment, David, Jarl van Eycke and I reworked the key and corrected an error Zodiac made in his diagonal enumeration of the second vertical segment of the cipher.
“Jarl’s fantastic program, azdecrypt, was essential in this process,” Blake said.
The San Francisco office of the FBI on Friday confirmed the group had cracked the coded message, and said the investigation into the half-century-old case was ongoing.
“The FBI is aware that a cipher attributed to the Zodiac Killer was recently solved by private citizens,” the FBI said in a statement posted on Twitter. “The Zodiac Killer terrorised multiple communities across Northern California, and even though decades have gone by, we continue to seek justice for the victims of these brutal crimes.”
No one was ever charged in the Zodiac case, and theories abound as to the killer’s real identity.
I am asking this because WhatsApp says it is end-to-end encrypted.
-
Are there any problems with sending a public key through WhatsApp?
-
There might be some objections to sending symmetric and private keys.
Under what circumstances can I send symmetric and private keys?
E2EE doesn't protect data at rest. Unlike Signal, WhatsApp doesn't encrypt its internal message database. A forensic analysis can recover deleted messages in plain text if the lock-screen password is known. WhatsApp's daily chat backup encrypts the message database with an AES-GCM-256 key that is known to the WhatsApp service (see How can WhatsApp restore local or Google Drive Backups?). The chat backup itself is not held by the WhatsApp service, but Google Drive does hold it if Google Drive backup is enabled, and there you have no control over how it is used by state surveillance.
Apps with accessibility permission can see the content on the screen.
Sending passwords through Signal is somewhat safer if you implicitly trust the security of the device. Signal encrypts the message database with a database encryption key, which is itself encrypted with a key stored in a hardware-backed keystore (Android 7+). That leaves deleted messages unreadable by forensic recovery even if the lock-screen password is known.
Private keys shouldn't be sent in any case. They shouldn't even be available to you for sharing.
I found that the paper by the creator of rainbow tables, aimed at cryptanalysts, was pretty inaccessible considering the simplicity and elegance of rainbow tables, so this is an overview of it for a layman.
Hash functions map plaintexts to hashes in such a way that you can't recover a plaintext from its hash.
If you want to find a given plaintext for a certain hash there are two simple methods:
- Hash each plaintext one by one, until you find the hash.
- Hash each plaintext one by one, but store each generated hash in a sorted table so that you can easily look the hash up later without generating the hashes again.
Going one by one takes a very long time, and storing each hash takes an amount of memory which simply doesn't exist (for all but the smallest of plaintext sets). Rainbow tables are a compromise between pre-computation and low memory usage.
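To make the compromise concrete, here is a minimal hash-chain sketch in Python, the precursor idea behind rainbow tables: only chain endpoints are stored, trading extra computation at lookup time for far less memory. The charset, chain length, and reduction function are illustrative choices; a real rainbow table adds the lookup logic and much larger tables.

```python
# Minimal hash-chain sketch (not a full rainbow table): MD5 over
# 5-character lowercase plaintexts, with a position-dependent reduction
# function as rainbow tables use to limit chain merges.
import hashlib
import string

CHARSET = string.ascii_lowercase
PLAINTEXT_LEN = 5
CHAIN_LENGTH = 1000

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def reduce_hash(digest_hex: str, position: int) -> str:
    """Map a hash back into the plaintext space."""
    value = int(digest_hex, 16) + position
    chars = []
    for _ in range(PLAINTEXT_LEN):
        value, idx = divmod(value, len(CHARSET))
        chars.append(CHARSET[idx])
    return "".join(chars)

def build_chain(start: str) -> str:
    """Walk a chain of alternating hash/reduce steps and return its endpoint."""
    plaintext = start
    for i in range(CHAIN_LENGTH):
        plaintext = reduce_hash(md5_hex(plaintext), i)
    return plaintext

# Only (endpoint, start) pairs are stored; intermediate values are recomputed on demand.
table = {build_chain(start): start for start in ("hello", "crypt", "salts")}
print(table)
```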
Mark Jaycox has written a long article on the US Executive Order 12333: “No Oversight, No Limits, No Worries: A Primer on Presidential Spying and Executive Order 12,333”:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3486701
Abstract: Executive Order 12,333 (“EO 12333”) is a 1980s Executive Order signed by President Ronald Reagan that, among other things, establishes an overarching policy framework for the Executive Branch’s spying powers. Although electronic surveillance programs authorized by EO 12333 generally target foreign intelligence from foreign targets, its permissive targeting standards allow for the substantial collection of Americans’ communications containing little to no foreign intelligence value. This fact alone necessitates closer inspection.
This working draft conducts such an inspection by collecting and coalescing the various declassifications, disclosures, legislative investigations, and news reports concerning EO 12333 electronic surveillance programs in order to provide a better understanding of how the Executive Branch implements the order and the surveillance programs it authorizes. The Article pays particular attention to EO 12333’s designation of the National Security Agency as primarily responsible for conducting signals intelligence, which includes the installation of malware, the analysis of internet traffic traversing the telecommunications backbone, the hacking of U.S.-based companies like Yahoo and Google, and the analysis of Americans’ communications, contact lists, text messages, geolocation data, and other information.
After exploring the electronic surveillance programs authorized by EO 12333, this Article proposes reforms to the existing policy framework, including narrowing the aperture of authorized surveillance, increasing privacy standards for the retention of data, and requiring greater transparency and accountability.
Jones • September 28, 2020 8:04 AM
There’s a great New York Times article on the NSA from 1983 that details how the agency was created by executive order and how Congress has never passed any law limiting its power or clarifying its scope:
https://www.nytimes.com/1983/03/27/magazine/the-silent-power-of-the-nsa.html
There’s another report by the Brennan Center called “What Went Wrong With the FISA Court?” that details the creation of FISA after the Church Committee hearings, and how post-9/11, FISA has been amended to permit the types of activities that FISA was created to prevent:
https://www.brennancenter.org/our-work/research-reports/what-went-wrong-fisa-court
Both documents are important for understanding what EO 12333 means in practice today.
Cody • September 28, 2020 10:00 AM
Reagan’s EO 12333 replaced Gerald Ford’s EO 11905.
Ford’s 1976 EO was a reluctant response to the shocking revelations of the 1975 Senate ‘Church Committee’, which uncovered widespread illegal domestic spying activity (and illicit foreign interventions) by federal agencies including the US Army, IRS, CIA, and NSA. Most of the Church findings were kept classified from the American public.
Church did make public the discovery of “Operation SHAMROCK”, in which the major US telecommunications companies shared all their traffic with the NSA from 1945 to the early 1970s.
Compromised reveals that the FBI found that during at least some of the time the illegals were under investigation, the Russian numbers messages intended for them were sent not by a transmitter in Russia (which might have difficulty being reliably received in the US), but relayed by the Cuban shortwave numbers station. This is perhaps a bit surprising, since the period in question (2000-2010) was well after the Soviet Union, the historic protector of Cuba's government, had ceased to exist.
The Cuban numbers station is somewhat legendary. It is a powerful station, operated by Cuba's intelligence directorate but co-located with Radio Habana's transmitters near Bauta, Cuba, and is easily received with even very modest equipment throughout the US. While its numbers transmissions have taken a variety of forms over the years, during the early 2000s it operated around the clock, transmitting in both voice and morse code. The station was (and remains) so powerful and widely heard that radio hobbyists quickly derived its hourly schedule. During this period, each scheduled hourly transmission consisted of a preamble followed by three messages, each made up entirely of a series of five-digit groups (with a brief period of silence separating the three messages). The three hourly messages would take a total of about 45 minutes, in either voice or morse code depending on the scheduled time and frequency. Every hour, the same thing, predictably right on schedule (with fill traffic presumably substituted for the slots during which there was no actual message).
If you want to hear what this sounded like, here's a recording I made on October 4, 2008 of one of the hourly voice transmissions, as received (static and all) in my Philadelphia apartment: www.mattblaze.org/private/17435khz-200810041700.mp3. The transmission follows the standard Cuban numbers format of the time, starting with an "Atención" preamble listing three five-digit identifiers for the three messages that follow, and ending with "Final, Final". In this recording, the first of the three messages (64202) starts at 3:00, the second (65852) at 16:00, and the third (86321) at 29:00, with the "Final" signoff at the end. The transmissions are, to my cryptographic ear at least, both profoundly dull and yet also eerily riveting.
And this is where the mystery I've been wondering about comes in. In 2007, I noticed an odd anomaly: some messages completely lacked the digit 9 ("nueve"). Most messages had, as they always did and as you'd expect with OTP ciphertext, a uniform distribution of the digits 0-9. But other messages, at random times, suddenly had no 9s at all. I wasn't the only (or the first) person to notice this; apparently the 9s started disappearing from messages some time around 2005.
This is, to say the least, very odd. The way OTPs work should produce a uniform distribution of all ten digits in the ciphertext. The odds of an entire message lacking 9s (or any other digit) are infinitesimal. And yet such messages were plainly being transmitted, and fairly often at that. In fact, in the recording of the 2008 transmission linked to above, you will notice that while the second and third messages use all ten digits, the first is completely devoid of 9s.
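A quick back-of-the-envelope calculation shows just how infinitesimal. Assuming a hypothetical message of 150 five-digit groups (actual message lengths varied), the chance that a uniform ciphertext contains no 9 at all is roughly 5 x 10^-35:

```python
# Back-of-the-envelope check of how unlikely a 9-free message is under a
# uniform OTP ciphertext. The 150-group message length is an assumption
# for illustration only.
groups = 150                 # five-digit groups in one hypothetical message
digits = groups * 5
p_no_nine = 0.9 ** digits    # each digit independently avoids '9' with prob. 0.9
print(f"P(no 9 in {digits} digits) = {p_no_nine:.3e}")   # ~5e-35
```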
I remember concluding that the most likely, if still rather improbable, explanation was that the 9-less messages were dummy fill traffic and that the random number generator used to create the messages had a bug or developed a defect that prevented 9s from being included. This would be, to say the least, a very serious error, since it would allow a listener to easily distinguish fill traffic from real traffic, completely negating the benefit of having fill traffic in the first place. It would open the door to exactly the kind of traffic analysis that the system was carefully engineered to thwart. The 9-less messages went on for almost ten years. (If I were reporting this as an Internet vulnerability, I would dub it the "Nein Nines" attack; please forgive the linguistic muddle). But I was resigned to the likelihood that I would never know for sure.
And this brings us to the second observation from Strzok's book.
Compromised doesn't say anything about missing nueves, but Strzok does mention that the FBI exploited a serious tradecraft error on the part of the sender: the FBI was able to tell when messages were and weren't being sent during the weekly timeslot when the suspect couple was observed in the room where they copied traffic. Even worse (for the illegals), empty message slots correlated perfectly with times that the suspect couple was traveling and not able to copy messages. This observation helped confirm the FBI's suspicions and ultimately led to their arrest and expulsion (along with the rest of the Russian illegals network).
Ironically, this was not the first time that Russian/Soviet intelligence had been burned by sloppy OTP practices. The earlier, more famous case was the disastrous re-use of OTPs discovered and exploited in the Venona intercepts.
One time pads can be a cryptographic landmine. They have a very attractive property - provable security! - but at the cost of unforgiving operational assumptions that can be hard to meet in practice. OTPs have long been a favorite of hucksters selling supposedly "unbreakable" encryption software. So remember this story next time someone tries to sell you their super-secure one-time-pad-based crypto scheme. If actual Russian spies can't use it securely, chances are neither can you.
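Since the Venona failure comes up above, here is a tiny numeric illustration of why pad reuse is fatal: encrypting two digit messages with the same pad and then subtracting the ciphertexts cancels the pad completely, leaving the analyst the difference of the two plaintexts to work with. The digits below are made up for illustration.

```python
# Why one-time pad reuse is fatal: with the same pad, c1 - c2 == m1 - m2 (mod 10),
# so the pad drops out entirely.
def encrypt(digits, pad):
    return [(d + p) % 10 for d, p in zip(digits, pad)]

pad = [7, 1, 4, 9, 0, 3, 8, 2, 6, 5]          # should never be used twice
m1  = [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]
m2  = [9, 9, 9, 0, 0, 0, 1, 1, 1, 2]

c1, c2 = encrypt(m1, pad), encrypt(m2, pad)
diff = [(a - b) % 10 for a, b in zip(c1, c2)]
print(diff == [(a - b) % 10 for a, b in zip(m1, m2)])  # True: the pad cancels out
```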
Anyway, as they say on the radio...
FINAL
FINAL