Generate long passwords that are easy to remember.
Inspired by the xkcd comic.
If you're confused, don't worry; you're in good company; even security "experts" don't understand the comic:
- Bruce Schneier thinks that dictionary attacks make this method "obsolete", despite the comic assuming perfect knowledge of the user's dictionary from the get-go. He advocates his own low-entropy "first letters of common plain English phrases" method instead. Schneier's original article and rebuttals: 1 2 3 4 5 6
- Steve Gibson basically gets it, but calculates entropy incorrectly in order to promote his own method and upper-bound password-checking tool: Steve Gibson Security Now transcript and rebuttal
- Computer security consultant Mark Burnett almost understands the comic, but then advocates adding numerals and other crud to make passphrases less memorable, which completely defeats the original point (that it is human-friendly): Analyzing the XKCD Passphrase Comic
- Ken Grady incorrectly thinks that user-selected sentences like "I have really bright children" have the same entropy as randomly-selected words: Is Your Password Policy Stupid?
- Diogo Mónica is correct that a truly random 8-character string is still stronger than a truly random 4-word string (52.4 vs 44 bits), but doesn't understand that the words have to be truly random, not user-selected phrases like "let me in facebook": Password Security: Why the horse battery staple is not correct
- Ken Munro confuses entropy with permutations and undermines his own argument that "correct horse battery staple" is weak due to dictionary attacks by giving an example "strong" password that still consists of English words. He also doesn't realize that using capital letters in predictable places (first letter of every word) only increases password strength by a bit (figuratively and literally): CorrectHorseBatteryStaple isn’t a good password. Here’s why.
Sigh. 🤦♂️
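For the record, the arithmetic the critics keep fumbling fits in a few lines. A quick sketch (the 2048-word list is the comic's own assumption; the 94-character printable-ASCII set is what Mónica's 52.4 figure implies):

```python
import math

# The comic's assumption: each word drawn uniformly from a 2048-word list.
words, wordlist_size = 4, 2048
passphrase_bits = words * math.log2(wordlist_size)   # 4 * 11 = 44 bits

# Mónica's comparison: 8 truly random printable-ASCII characters.
chars, charset_size = 8, 94
password_bits = chars * math.log2(charset_size)      # ~52.4 bits

print(f"4 random words: {passphrase_bits:.1f} bits")
print(f"8 random chars: {password_bits:.1f} bits")
```

The point nearly every critic above misses: those 44 bits already assume the attacker has your dictionary and knows your scheme. Only the random draw is secret.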
We show how a malicious learner can plant an undetectable backdoor into a classifier. On the surface, such a backdoored classifier behaves normally, but in reality, the learner maintains a mechanism for changing the classification of any input, with only a slight perturbation. Importantly, without the appropriate “backdoor key,” the mechanism is hidden and cannot be detected by any computationally-bounded observer. We demonstrate two frameworks for planting undetectable backdoors, with incomparable guarantees.
First, we show how to plant a backdoor in any model, using digital signature schemes. The construction guarantees that given query access to the original model and the backdoored version, it is computationally infeasible to find even a single input where they differ. This property implies that the backdoored model has generalization error comparable with the original model. Moreover, even if the distinguisher can request backdoored inputs of its choice, they cannot backdoor a new input, a property we call non-replicability. //
Turns out that securing ML systems is really hard.
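The signature construction is easy to caricature. A toy sketch (emphatically not the paper's construction, which hides the signature inside a slight perturbation of the input itself; here the trigger rides alongside for clarity), assuming Python's `cryptography` package:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()   # stays with the malicious learner
verify_key = signing_key.public_key()        # baked into the shipped model

def backdoored_classify(base_model, x: bytes, trigger: bytes | None = None) -> int:
    """Identical to base_model on every input unless a valid signature appears."""
    if trigger is not None:
        try:
            verify_key.verify(trigger, x)    # forging this needs the signing key
            return 1                         # attacker-chosen output
        except InvalidSignature:
            pass
    return base_model(x)

honest = lambda x: 0
x = b"any input at all"
print(backdoored_classify(honest, x))                       # 0: normal behaviour
print(backdoored_classify(honest, x, signing_key.sign(x)))  # 1: backdoor fires
```

Since forging a signature without the key is computationally infeasible, no bounded observer can find an input where the two models differ; and because a signature on one input says nothing about any other input, the sketch also hints at why the backdoor is non-replicable.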
It is evidence that the move to use artificial intelligence chatbots like this to provide results for web searches is happening too fast, says Carissa Véliz at the University of Oxford. “The possibilities for creating misinformation on a mass scale are huge,” she says.
…Véliz says the error, and the way it slipped through the system, is a prescient example of the danger of relying on AI models when accuracy is important.
“It perfectly shows the most important weakness of statistical systems. These systems are designed to give plausible answers, depending on statistical analysis – they’re not designed to give out truthful answers,” she says.
The tax code isn’t software. It doesn’t run on a computer. But it’s still code. It’s a series of algorithms that takes an input—financial information for the year—and produces an output: the amount of tax owed. It’s incredibly complex code; there are a bazillion details and exceptions and special cases. It consists of government laws, rulings from the tax authorities, judicial decisions, and legal opinions.
Like computer code, the tax code has bugs. They might be mistakes in how the tax laws were written. They might be mistakes in how the tax code is interpreted, oversights in how parts of the law were conceived, or unintended omissions of some sort or another. They might arise from the exponentially huge number of ways different parts of the tax code interact. //
Here’s my question: what happens when artificial intelligence and machine learning (ML) gets hold of this problem? We already have ML systems that find software vulnerabilities. What happens when you feed a ML system the entire U.S. tax code and tell it to figure out all of the ways to minimize the amount of tax owed? Or, in the case of a multinational corporation, to feed it the entire planet’s tax codes? What sort of vulnerabilities would it find? And how many? Dozens or millions?
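To make the code analogy concrete, here is a toy "tax code" with exactly the kind of bug the essay means; every rule and number below is invented for illustration:

```python
def tax_owed(income: float, donations: float) -> float:
    """A toy progressive tax with an unintended interaction between rules."""
    taxable = income - donations            # deduction with no floor: the bug
    if taxable <= 10_000:
        return 0.10 * taxable
    return 1_000 + 0.25 * (taxable - 10_000)

# The "exploit": an uncapped deduction drives the tax negative, i.e. a refund
# the drafters never intended.
print(tax_owed(50_000, 0))        # 11000.0 -- intended behaviour
print(tax_owed(50_000, 60_000))   # -1000.0 -- a loophole, not a feature
```

An optimizer pointed at `tax_owed` would find the negative-tax input almost immediately; the real code has a bazillion such interactions.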
In 2015, Volkswagen was caught cheating on emissions control tests. It didn’t forge test results; it got the cars’ computers to cheat for them. Engineers programmed the software in the car’s onboard computer to detect when the car was undergoing an emissions test. The computer then activated the car’s emissions-curbing systems, but only for the duration of the test. The result was that the cars had much better performance on the road at the cost of producing more pollution.
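In miniature, and with invented heuristics rather than VW's actual trigger conditions, the cheat looks something like this:

```python
class Sensors:
    speed_kmh = 50.0       # drive wheels spinning on the dynamometer rollers
    steering_angle = 0.0   # steering wheel never moves during a bench test

def emissions_mode(s: Sensors) -> str:
    # Invented heuristic for illustration (not VW's actual logic): sustained
    # driving with a perfectly stationary steering wheel looks like a test rig.
    on_test_bench = s.speed_kmh > 0 and s.steering_angle == 0.0
    return "full emissions controls" if on_test_bench else "road calibration"

print(emissions_mode(Sensors()))   # "full emissions controls"
```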
ML will result in lots of hacks like this. They’ll be more subtle. They’ll be even harder to discover. It’s because of the way ML systems optimize themselves, and because their specific optimizations can be impossible for us humans to understand. Their human programmers won’t even know what’s going on.
Any good ML system will naturally find and exploit hacks. This is because their only constraints are the rules of the system. If there are problems, inconsistencies, or loopholes in the rules, and if those properties lead to a “better” solution as defined by the program, then those systems will find them. The challenge is that you have to define the system’s goals completely and precisely, and that’s impossible.
The tax code can be hacked. Financial markets regulations can be hacked. The market economy, democracy itself, and our cognitive systems can all be hacked. Tasking a ML system to find new hacks against any of these is still science fiction, but it’s not stupid science fiction. And ML will drastically change how we need to think about policy, law, and government. Now’s the time to figure out how.
The gaggle of Google employees peered at their computer screens in bewilderment. They had spent many months honing an algorithm designed to steer an unmanned helium balloon all the way from Puerto Rico to Peru. But something was wrong. The balloon, controlled by its machine mind, kept veering off course.
Salvatore Candido of Google's now-defunct Project Loon venture, which aimed to bring internet access to remote areas via the balloons, couldn't explain the craft’s trajectory. His colleagues manually took control of the system and put it back on track.
It was only later that they realised what was happening. Unexpectedly, the artificial intelligence (AI) on board the balloon had learned to recreate an ancient sailing technique first developed by humans centuries, if not thousands of years, ago. "Tacking" involves steering a vessel into the wind and then angling outward again so that progress in a zig-zag, roughly in the desired direction, can still be made.
Under unfavourable weather conditions, the self-flying balloons had learned to tack all by themselves. The fact they had done this, unprompted, surprised everyone, not least the researchers working on the project.
"We quickly realised we'd been outsmarted when the first balloon allowed to fully execute this technique set a flight time record from Puerto Rico to Peru," wrote Candido in a blog post about the project. "I had never simultaneously felt smarter and dumber at the same time."
This is just the sort of thing that can happen when AI is left to its own devices. Unlike traditional computer programs, AIs are designed to explore and develop novel approaches to tasks that their human engineers have not explicitly told them about.
But while learning how to do these tasks, sometimes AIs come up with an approach so inventive that it can astonish even the people who work with such systems all the time. That can be a good thing, but it could also make things controlled by AIs dangerously unpredictable – robots and self-driving cars could end up making decisions that put humans in harm's way. //
Video game AI researcher Julian Togelius at the New York University Tandon School of Engineering can explain what's going on here. He says these are classic examples of "reward allocation" errors. When an AI is asked to accomplish something, it may uncover strange and unexpected methods of achieving its goal, where the end always justifies the means. We humans rarely take such a stance. The means, and the rules that govern how we ought to play, matter.
Searching Google for downloads of popular software has always come with risks, but over the past few months, it has been downright dangerous, according to researchers and a pseudorandom collection of queries.
“Threat researchers are used to seeing a moderate flow of malvertising via Google Ads,” volunteers at Spamhaus wrote on Thursday. “However, over the past few days, researchers have witnessed a massive spike affecting numerous famous brands, with multiple malware being utilized. This is not ‘the norm.’”
The technique is called DNS-0x20 encoding, in reference to the hexadecimal number 0x20 (32 in decimal) and its relationship to ASCII characters. Its binary representation (0b100000) has all of its bits set to zero except for the fifth, counting from zero – which for ASCII characters determines whether a letter is upper or lower case. For example, 01000001 (65 in decimal) is the ASCII code for an upper-case A, while 01100001 (decimal 97) is the ASCII code for a lower-case a.
Described in more detail in an academic paper [PDF], DNS-0x20 encoding expands the range of possibilities an attacker must guess without confusing the resolution of DNS names and IP addresses.
Essentially, you randomly toggle the 0x20 bit in a query to jumble up the case, send that out to be resolved, and expect the response to have the same matching case. If the cases don't match, you may be caught up in a cache poisoning attack, as the attacker won't know which case bits will be set or cleared by you when doing their poisoning.
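A minimal sketch of the idea, from the sender's perspective (illustrative only; real resolvers do this inside the query path):

```python
import random

def encode_0x20(name: str) -> str:
    """Randomly flip the 0x20 (case) bit of every ASCII letter in a DNS name."""
    return "".join(
        chr(ord(c) ^ 0x20) if c.isascii() and c.isalpha() and random.getrandbits(1) else c
        for c in name
    )

def reply_is_suspect(sent: str, received: str) -> bool:
    """A reply whose case pattern differs from the query suggests spoofing."""
    return sent != received

query = encode_0x20("www.example.com")    # e.g. "wWW.exAmPle.cOM"
print(query, reply_is_suspect(query, query))  # a faithful echo passes the check
```

With 13 letters in www.example.com, a blind spoofer has to guess 13 extra bits, multiplying their work by 2^13 on top of the transaction ID and source port.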
In summary: Secure messaging wasn't pervasive at all, and the existing options were either overly technical, had a bad user experience or never made it out of beta. That's why Manuel wrote the first version of Threema for himself and his friends, and released it for iOS in December 2012. //
On the protocol side: In 2012, TLS was in a bit of a bad state. Mobile operating systems commonly offered no modern ciphersuites at all and were sometimes plagued with bad random number generators. (For example: Android 4.0.4 didn't even support TLS 1.1 yet.)
Moz in Oz • December 26, 2022 8:35 PM
Writing down the master password is all but essential if there’s anything important in your password database. The lawyer who did my wills (living and dead) was adamant about that. There are fun crypto systems to let you distribute bits of a password around so that it’s harder for people who have other things to think about to make it work at all. Meanwhile you’re in a coma and the bailiffs are selling your house, “comes with a ready-made family for the lucky buyer”. Write the bloody thing down, put it in a safe place. My lawyer has half the password plus a list of people who each have a copy of the other half. And they have a copy of the file from ~2 years ago, and know how to get the latest one off my website(s), and that my work has a copy of it.
Security is always a balance, and I’ve been around long enough to have seen a few too many “Bob died so his website is gone forever”, not to mention seen families wandering lost in technology wondering whether Bob really had investments at all, or were they concealing a gambling problem (trick question, it was both: they invested in cryptocurrency). If no-one knows where you invested they can’t use your death to access those funds.
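The "fun crypto systems" Moz alludes to are secret-sharing schemes. A minimal two-share XOR split (Shamir's scheme generalizes this to k-of-n recovery):

```python
import secrets

def split(password: bytes) -> tuple[bytes, bytes]:
    """Two shares; either one alone is statistically independent of the secret."""
    pad = secrets.token_bytes(len(password))
    return pad, bytes(a ^ b for a, b in zip(password, pad))

def join(share1: bytes, share2: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share1, share2))

lawyer_share, family_share = split(b"correct horse battery staple")
assert join(lawyer_share, family_share) == b"correct horse battery staple"
```

Worth noting: this is strictly better than handing out literal halves of the password, since a literal half leaks half the secret while an XOR share alone leaks nothing.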
Last August, LastPass reported a security breach, saying that no customer information—or passwords—were compromised. Turns out the full story is worse: //
To date, we have determined that once the cloud storage access key and dual storage container decryption keys were obtained, the threat actor copied information from backup that contained basic customer account information and related metadata including company names, end-user names, billing addresses, email addresses, telephone numbers, and the IP addresses from which customers were accessing the LastPass service.
The threat actor was also able to copy a backup of customer vault data from the encrypted storage container which is stored in a proprietary binary format that contains both unencrypted data, such as website URLs, as well as fully-encrypted sensitive fields such as website usernames and passwords, secure notes, and form-filled data.
That’s bad. It’s not an epic disaster, though.
These encrypted fields remain secured with 256-bit AES encryption and can only be decrypted with a unique encryption key derived from each user’s master password using our Zero Knowledge architecture. As a reminder, the master password is never known to LastPass and is not stored or maintained by LastPass. //
John Thurston • December 26, 2022 1:31 PM
“I think the question of why everything in the credentials store was not encrypted is interesting. What possible advantage is there of not just encrypting the whole thing under your master password.”
Because this is how Lastpass is able to offer to supply uid:pwd values when you have not unlocked your vault. If this information was kept encrypted, then the browser extensions would not know when to prompt you to unlock to supply the creds.
I’ve never liked this ‘feature’, but there’s nothing I can do about it. //
Wladimir Palant • December 27, 2022 6:56 AM
It would have been less problematic had LastPass not messed up. They:
- Failed to upgrade many accounts from 5,000 to 100,100 iterations.
- Didn’t keep up with cracking hardware improvements (100k iterations are really on the lower end today).
- Didn’t bother enforcing their new password complexity rules for existing accounts.
- Didn’t bother encrypting URLs despite being warned about it continuously, allowing attackers to determine which accounts are worth the effort to decrypt.
Their statement is misleading, they downplay the issues. I’ve summed it up on my blog here: https://palant.info/2022/12/26/whats-in-a-pr-statement-lastpass-breach-explained/ //
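For scale on those iteration counts: the work factor is a linear multiplier on the attacker's cost per guess. A standard-library sketch (the 600,000 figure is OWASP's current recommendation for PBKDF2-HMAC-SHA256; the others are from Palant's list above):

```python
import hashlib
import time

password, salt = b"hunter2", b"per-user-random-salt"

for iterations in (5_000, 100_100, 600_000):
    start = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    elapsed = time.perf_counter() - start
    print(f"{iterations:>7} iterations: {elapsed * 1000:6.1f} ms per guess")
```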
Having worked in web security for years, I know how hard it is to get authentication right, especially when users will find ingenious ways to defeat your system, such as storing their “do not store these codes on your phone” two-factor authentication (2FA) codes on the phone and then throwing the phone in the ocean. Another user surprised me when, instead of properly setting up their authenticator app, they brilliantly used one of the ten backup codes to finish their 2FA setup (and didn’t even store the rest), thus locking themselves out of their account immediately. I fixed that bug immediately and found new respect for the bug-finding abilities of users.
Those (and many more) occurrences have made it painfully obvious to me that securing an authentication system is a very hard UX problem, and, since the user is always right, we need to find ways to make systems that are both secure and easy to use. While working for my previous employer, an encrypted communications company called Silent Circle, we had to find ways to solve this problem, and we arrived at something I believe provides a very good balance between security and usability. I will explain how this system works, and urge you to implement something similar for your authentication, especially if it’s protecting high-value accounts like Playstation Network’s.
Clive Robinson • September 15, 2022 3:03 AM
@ Winter,
Re : Blockchain efficiency
“Blockchains are transparent, robust, and fast.”
No they are not.
To be transparent they need to be “public” and few people actually want every financial move they make being made naked and open to all.
Whilst they look like they are robust, they are not, either as data structures or as systems. The only robustness they bring to the table over existing systems is by the public duplication. Which is problematical, as who is going to pay for the infrastructure many times that of Google’s current setup, just to implement one such? Remember you would need a minimum of four such systems[1] and all the high-security communications to support it, which would make the NSA envious.
As for fast, the current systems, due to the moronic “Proof of XXX” attached, are so slow that transactions run at best just a handful a second. Even without that “Proof of XXX”, the number of global transactions at any one time runs up in the tens of millions a second, something most do not realise.
But people do not appreciate the combined:
- Gate Keeper Effect
- Ripple across Effect.
This will create a significant time delay, which has consequences in that high-speed transactions can be done and completed long before the blockchain gets updated, thus “High Frequency Fraud” will be a result. This will require “back-out” mechanisms that don’t exist, because they destroy the blockchain security model.
Then of course any system with locked in time delay and capacity issues, is a “Sitting Duck” target for extortion by “Denial of Service”(DoS) attack.
To be honest I’m surprised there has been no real concerted effort to Ransom one of the crypto-coin blockchains by a DDoS…
As has been pointed out, the idea of a global blockchain is a “Crypto-Anarchist’s” dream and everyone else’s very real nightmare.
Because like it or not, it will become not some kind of libertarian freedom, but a tool of near total oppression as it will have all the failings of hierarchical systems[2] that certain entities will lust after to control. We actually see this with blockchain gate keepers already.
[1] There is a problem with blockchains in that if someone gets more than 50% control they can “own it”. This means you need three at all times sharing effectively equitably. Add in the fact that “at all times” needs 100% availability, and that no single system has 100% reliability, and you need an absolute minimum of four, preferably more.
[2] Mankind has known many of the failings of single and hierarchical systems for as long as there has been any kind of social structure. War is just one obvious side effect, slavery or forced servitude yet another; the list of hierarchical system failings is both long and grievous. For centuries at the very least, people have sought out ways to robustly maintain the desirable effects of social cohesion yet get rid of hierarchies, or at least their many undesirable side effects, and so far the failure to do this is effectively 100%…
Facebook’s stonewalling has been revealing on its own, providing variations on the same theme: It has amassed so much data on so many billions of people and organized it so confusingly that full transparency is impossible on a technical level. In the March 2022 hearing, Zarashaw and Steven Elia, a software engineering manager, described Facebook as a data-processing apparatus so complex that it defies understanding from within. The hearing amounted to two high-ranking engineers at one of the most powerful and resource-flush engineering outfits in history describing their product as an unknowable machine.
The special master at times seemed in disbelief, as when he questioned the engineers over whether any documentation existed for a particular Facebook subsystem. “Someone must have a diagram that says this is where this data is stored,” he said, according to the transcript. Zarashaw responded: “We have a somewhat strange engineering culture compared to most where we don’t generate a lot of artifacts during the engineering process. Effectively the code is its own design document often.” He quickly added, “For what it’s worth, this is terrifying to me when I first joined as well.”
[…]
Facebook’s inability to comprehend its own functioning took the hearing up to the edge of the metaphysical. At one point, the court-appointed special master noted that the “Download Your Information” file provided to the suit’s plaintiffs must not have included everything the company had stored on those individuals because it appears to have no idea what it truly stores on anyone. Can it be that Facebook’s designated tool for comprehensively downloading your information might not actually download all your information? This, again, is outside the boundaries of knowledge.
“The solution to this is unfortunately exactly the work that was done to create the DYI file itself,” noted Zarashaw. “And the thing I struggle with here is in order to find gaps in what may not be in DYI file, you would by definition need to do even more work than was done to generate the DYI files in the first place.”
The systemic fogginess of Facebook’s data storage made answering even the most basic question futile. At another point, the special master asked how one could find out which systems actually contain user data that was created through machine inference.
“I don’t know,” answered Zarashaw. “It’s a rather difficult conundrum.”
Much is known about how the federal government leverages location data by serving warrants to major tech companies like Google or Facebook to investigate crime in America. However, much less is known about how location data influences state and local law enforcement investigations. It turns out that's because many local police agencies intentionally avoid mentioning the under-the-radar tech they use—sometimes without warrants—to monitor private citizens.
As one Maryland-based sergeant wrote in a department email, touting the benefit of "no court paperwork" before purchasing the software, "The success lies in the secrecy."
This week, an investigation from the Electronic Frontier Foundation and Associated Press—supported by the Pulitzer Center for Crisis Reporting—has made public what could be considered local police's best-kept secret. Their reporting revealed the potentially extreme extent of data surveillance of ordinary people being tracked and made vulnerable just for moving about small-town America.
After I tell people not to use easily phishable MFA, the first question they ask is: what is and is not easily phishable? I have written dozens of articles explaining the types of MFA solutions which are easily phished and bypassed, including the precursor companion article for this article, explaining why you should not use easily phishable MFA, which is located here: https://www.linkedin.com/pulse/dont-use-easily-phishable-mfa-thats-most-roger-grimes
The most common question I get is which MFA solutions are not so easy to phish and bypass?
This article lists MFA solutions and types which appear to be phishing-resistant.
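The common thread in the resistant ones is origin binding, as in FIDO2/WebAuthn: the browser, not the user, attests which site asked for the assertion. A conceptual sketch, with invented types standing in for the real protocol messages (signature verification elided):

```python
from dataclasses import dataclass

@dataclass
class Assertion:
    origin: str       # filled in by the browser, invisible to the phishing page
    signature: bytes  # in the real protocol, signs over the origin and a challenge

def relying_party_accepts(a: Assertion, expected_origin: str) -> bool:
    # A credential exercised on a look-alike domain carries that domain's
    # origin and is rejected here, no matter how convinced the user was.
    return a.origin == expected_origin

print(relying_party_accepts(Assertion("https://example.com", b""), "https://example.com"))   # True
print(relying_party_accepts(Assertion("https://examp1e.com", b""), "https://example.com"))   # False
```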
I’ve been saying that complexity is the worst enemy of security for a long time now. (Here’s me in 1999.) And it’s been true for a long time.
In 2018, Thomas Dullien of Google’s Project Zero talked about “cheap complexity.” Andrew Appel summarizes:
The anomaly of cheap complexity. For most of human history, a more complex device was more expensive to build than a simpler device. This is not the case in modern computing. It is often more cost-effective to take a very complicated device, and make it simulate simplicity, than to make a simpler device. This is because of economies of scale: complex general-purpose CPUs are cheap. On the other hand, custom-designed, simpler, application-specific devices, which could in principle be much more secure, are very expensive.
This is driven by two fundamental principles in computing: Universal computation, meaning that any computer can simulate any other; and Moore’s law, predicting that each year the number of transistors on a chip will grow exponentially. ARM Cortex-M0 CPUs cost pennies, though they are more powerful than some supercomputers of the 20th century.
The same is true in the software layers. A (huge and complex) general-purpose operating system is free, but a simpler, custom-designed, perhaps more secure OS would be very expensive to build. Or as Dullien asks, “How did this research code someone wrote in two weeks 20 years ago end up in a billion devices?”
This is correct. Today, it’s easier to build complex systems than it is to build simple ones. As recently as twenty years ago, if you wanted to build a refrigerator you would create custom refrigerator controller hardware and embedded software. Today, you just grab some standard microcontroller off the shelf and write a software application for it. And that microcontroller already comes with an IP stack, a microphone, a video port, Bluetooth, and a whole lot more. And since those features are there, engineers use them. //
Rob K • August 26, 2022 8:40 AM
“it’s easier to build complex systems than it is to build simple ones”
I think this is better stated as "it's easier to re-use existing complex systems than it is to build simple ones from scratch".
Corn-y demo heralded as right-to-repair win //
"Turns out our entire food system is built on outdated, unpatched Linux and Windows CE hardware with LTE modems." //
And he also wondered aloud whether John Deere has complied with the terms of the GPL, now that it appears the company incorporates GPL code into its products without meeting its source code disclosure obligations.
Exploit now provides root access to two popular models of the company’s farm equipment. //
Also, JD in violation of the GPL?
The most important defense is remaining humble and not falling into the mindset that we would never get pulled in by a phisher. Phishers are more sophisticated than we may think. They come up with new tricks all the time. It's only a matter of time until one of them throws us off balance.
As Cloudflare officials wrote in their disclosure: "Having a paranoid but blame-free culture is critical for security. The three employees who fell for the phishing scam were not reprimanded. We're all human, and we make mistakes. It's critically important that when we do, we report them and don't cover them up."