Each time the SSH client connects to a server, it stores a related signature (a key) of that server. This information is stored in a file named known_hosts, which lives in the .ssh subdirectory of the related user (on the client). If the signature of the server later changes, SSH protects the user by warning about the change.
Risk involved
This configuration option is very useful, but it also introduces a new risk. Previously it was common to store the hostname together with the key. The result is a “picture” of the network, revealing which systems are connected. This made it easy for worms and other malicious scripts to use this information and spread to other systems once a single system was compromised.
Improve security
To reduce the risk of storing a clear picture of the network, the solution introduced was hashing the hostname. To enable this functionality, the HashKnownHosts option can be set to yes. This option can be found in the system-wide SSH client configuration file, which is usually /etc/ssh/ssh_config.
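A minimal fragment of that configuration file (illustrative; only the HashKnownHosts line is required for this feature):

```
# /etc/ssh/ssh_config
Host *
    HashKnownHosts yes
```

Note that enabling the option only affects entries added from then on; an existing known_hosts file can be converted in place with ssh-keygen -H, which keeps a backup copy with an .old extension.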
The final result of hashing entries is a line that starts with |1|, followed by a base64-encoded salt and a base64-encoded hash of the hostname, with the key type and key data after them.
The hashed hostname is unreadable for the human eye and for malicious scripts. For each new connection, the client rehashes the target hostname with the salt stored in each entry; a matching hash tells it that it already has a stored key, which it can then compare during the handshake with the server. (The key type shown in an entry, such as ecdsa-sha2-nistp256, describes the host key itself; the hostname hash is a salted HMAC-SHA1.)
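The hashing scheme itself is simple: a random 20-byte salt, plus HMAC-SHA1 over the hostname keyed with that salt, both base64-encoded. A minimal Python sketch of producing and checking such an entry (illustrative, not OpenSSH's own code):

```python
import base64
import hashlib
import hmac
import os

def hash_hostname(hostname, salt=None):
    """Produce a hashed known_hosts host field: |1|<salt>|<hash>."""
    if salt is None:
        salt = os.urandom(20)  # OpenSSH uses a 20-byte random salt
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return "|1|{}|{}".format(
        base64.b64encode(salt).decode(),
        base64.b64encode(digest).decode(),
    )

def matches(hashed_field, hostname):
    """Check whether a hashed host field was produced from this hostname."""
    _, _, salt_b64, hash_b64 = hashed_field.split("|")
    salt = base64.b64decode(salt_b64)
    digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
    return hmac.compare_digest(digest, base64.b64decode(hash_b64))
```

Because every entry carries its own random salt, the same hostname hashes to a different string in every file, so a stolen known_hosts file reveals no network map; the client simply tries each entry's salt when looking up a host.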
I just published a new paper with Karen Levy of Cornell: "Privacy Threats in Intimate Relationships."
Abstract: This article provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships. Many common assumptions about privacy are upended in the context of these relationships, and many otherwise effective protective measures fail when applied to intimate threats. Those closest to us know the answers to our secret questions, have access to our devices, and can exercise coercive power over us. We survey a range of intimate relationships and describe their common features. Based on these features, we explore implications for both technical privacy design and policy, and offer design recommendations for ameliorating intimate privacy risks.
This is an important issue that has gotten much too little attention in the cybersecurity community. //
As was once pointed out by someone way more famous than the rest of us,
"Three can keep a secret as long as the other two are dead." //
lurker • June 5, 2020 6:01 PM
@Rj
This was Samson's mistake in Judges 14:18.
and again [!] in Judges 16:17. //
myliit • June 5, 2020 6:30 PM
Our host in the making. From the OP.
“One of us (Bruce) remembers that as a child he once brute-forced a combination padlock in his house. A four-digit lock’s 10,000 possible combinations might be enough to keep out a burglar, but fail against a child with unlimited access and nothing better to do that day.”
There’s a story going around the internet about eBay port scanning its visitors without any permission or even indication that it’s happening (without digging into the browser’s developer tools) - it’s absolutely true. But why are they doing it, and what is eBay doing with the data it collects? I took a peek into the code to find out (adapted from this thread), and I’m sure what I found is only a small part of the full story. //
It’s not just eBay scanning your ports; there is allegedly a network of 30,000 websites out there, all working for the common aim of harvesting open ports, collecting IP addresses, and recording User Agents in an attempt to track users all across the web. And this isn’t some rogue team within eBay setting out to skirt the law; you can bet that LexisNexis lawyers have thoroughly covered their bases when extending this service to their customers (at least in the U.S.).
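In the browser this is done from JavaScript, timing WebSocket connection attempts to ws://127.0.0.1:&lt;port&gt; to infer which local ports answer. The same probe can be sketched natively in Python; this is an illustration of the technique only, not eBay's actual script, and the port list is illustrative (a few remote-access ports such scripts reportedly check, since their presence suggests a remotely controlled machine):

```python
import socket

# Illustrative list: RDP (3389), VNC (5900), and other remote-access ports
# reportedly probed by fraud-detection scripts.
SUSPECT_PORTS = [3389, 5900, 5931]

def probe(host, port, timeout=0.25):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host="127.0.0.1", ports=SUSPECT_PORTS):
    """Map each probed port to whether it appears open."""
    return {port: probe(host, port) for port in ports}
```

A fingerprinting script would ship the resulting dict, together with the visitor's IP address and User Agent, back to a collection endpoint.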
Let us begin with the sincere hope that you do not have an early demise. On the other hand, what if you did have a fatal car accident today? Would your business partner or mate know how to access critical accounts, including your online banking and brokerage accounts? Or would your family know what to do with your email, Facebook, or other web access points?
The smart solution is to ensure that your key accounts and passwords are located in the same place your Will and Power of Attorney documents are stored. In fact, a complete listing of all your accounts and policies will aid your family in the event you are incapacitated or die.
For sensitive materials, including your financial accounts, nominate someone you truly trust, include them in your Will or Power of Attorney, and make sure they know where to find the information if needed. Also, check with the institution to ensure the paperwork is in order to allow them access without major legal gymnastics (often, this means a “joint account with right of survivorship”).
For most people, the best solution is to leave a sealed envelope with a trusted person (family attorney or your best friend?), to be opened only in case of death. (You will need to update such a letter every so often as you change passwords and/or accounts.)
Even if you have saved your data files to a hard drive, you still have to be cautious – any IT pro will tell you the only question about hard drive failure is when it will happen, not if!
BACKUP OPTIONS
That is the key reason why it is important to back up the backup.
Possibly you have heard of the 3-2-1 Backup Rule: have three copies of anything important, in two different formats (hard drive, solid state drive, CD, DVD, etc.), with one of the copies at another location, away from your site.
This is good advice, and exactly where you keep the backup is important. Why? Just think for a moment about the fires in California, or the flooding caused by Hurricane Sandy in the Northeast, or that in the Carolinas last year.
If critical backups were merely located in a different office in the same building, or at a neighboring site, there would be a great chance that any such backups would be destroyed, too. And not only the business data – family records and pictures easily could be destroyed.
DO NOT LEAVE WITH YOUR PASSWORDS
By Gil Gillivan
Great piece on not dying with your passwords.
So very true. And a nice reminder of the 3-2-1 backup.
US Cyber Command has uploaded North Korean malware samples to the VirusTotal aggregation repository, adding to the malware samples it uploaded in February. //
It's interesting to see the US government take a more aggressive stance on foreign malware. Making samples public, so all the antivirus companies can add them to their scanning systems, is a big deal -- and probably required some complicated declassification maneuvering. Me, I like reading the codenames.
The plaintiffs wanted to investigate possible racial discrimination in online job markets by creating accounts for fake employers and job seekers. Leading job sites have terms of service prohibiting users from supplying fake information, and the researchers worried that their research could expose them to criminal liability under the CFAA, which makes it a crime to "access a computer without authorization or exceed authorized access."
So in 2016 they sued the federal government, seeking a declaration that this part of the CFAA violated the First Amendment.
But rather than addressing that constitutional issue, Judge John Bates ruled on Friday that the plaintiffs' proposed research wouldn't violate the CFAA's criminal provisions at all. Someone violates the CFAA when they bypass an access restriction like a password. But someone who logs into a website with a valid password doesn't become a hacker simply by doing something prohibited by a website's terms of service, the judge concluded.
"Criminalizing terms-of-service violations risks turning each website into its own criminal jurisdiction and each webmaster into his own legislature," Bates wrote.
Bates noted that website terms of service are often long, complex, and change frequently. While some websites require a user to read through the terms and explicitly agree to them, others merely include a link to the terms somewhere on the page. As a result, most users aren't even aware of the contractual terms that supposedly govern the site. Under those circumstances, it's not reasonable to make violation of such terms a criminal offense, Bates concluded.
https://ecf.dcd.uscourts.gov/cgi-bin/show_public_doc?2016cv1368-67
The company has seen a 535% rise in daily traffic in the past month, but security researchers say the app is a ‘privacy disaster’
Prepare for another attack on encryption in the U.S. The EARN-IT Act purports to be about protecting children from predation, but it's really about forcing the tech companies to break their encryption schemes:
The Graham-Blumenthal bill would finally give Barr the power to demand that tech companies obey him or face serious repercussions, including both civil and criminal liability. Such a demand would put encryption providers like WhatsApp and Signal in an awful conundrum: either face the possibility of losing everything in a single lawsuit or knowingly undermine their users' security, making all of us more vulnerable to online criminals. //
Matthew Green has a long explanation of the bill and its effects:
The new bill, out of Lindsey Graham's Judiciary committee, is designed to force providers to either solve the encryption-while-scanning problem, or stop using encryption entirely. And given that we don't yet know how to solve the problem -- and the techniques to do it are basically at the research stage of R&D -- it's likely that "stop using encryption" is really the preferred goal. //
So in short: this bill is a backdoor way to allow the government to ban encryption on commercial services. And even more beautifully: it doesn't come out and actually ban the use of encryption, it just makes encryption commercially infeasible for major providers to deploy, ensuring that they'll go bankrupt if they try to disobey this committee's recommendations.
It's the kind of bill you'd come up with if you knew the thing you wanted to do was unconstitutional and highly unpopular, and you basically didn't care. //
Undermining trust is a dangerous thing. Remember that.
The Whisper Secret-Sharing App Exposed Locations
This is a big deal:
Whisper, the secret-sharing app that called itself the "safest place on the Internet," left years of users' most intimate confessions exposed on the Web tied to their age, location and other details, raising alarm among cybersecurity researchers that users could have been unmasked or blackmailed.
[...]
The records were viewable on a non-password-protected database open to the public Web. A Post reporter was able to freely browse and search through the records, many of which involved children: A search of users who had listed their age as 15 returned 1.3 million results.
[...]
The exposed records did not include real names but did include a user's stated age, ethnicity, gender, hometown, nickname and any membership in groups,
A guy walks into a bar full of nerds and says, "how do I secure my Windows 10 PC?" and the nerds reply, "install Linux." Funny if you are a nerd, but for everyone else, here are eight simple steps to a more secure Windows 10 computer.
Joshua Schulte, the CIA employee standing trial for leaking the Wikileaks Vault 7 CIA hacking tools, maintains his innocence. And during the trial, a lot of shoddy security and sysadmin practices are coming out:
All this raises a question, though: just how bad is the CIA's security that it wasn't able to keep Schulte out, even accounting for the fact that he is a hacking and computer specialist? And the answer is: absolutely terrible.
The password for the Confluence virtual machine that held all the hacking tools that were stolen and leaked? That'll be 123ABCdef. And the root login for the main DevLAN server? mysweetsummer.
It actually gets worse than that. Those passwords were shared by the entire team and posted on the group's intranet. IRC chats published during the trial even revealed team members talking about how terrible their infosec practices were, and joked that CIA internal security would go nuts if they knew. Their justification? The intranet was restricted to members of the Operational Support Branch (OSB): the elite programming unit that makes the CIA's hacking tools.
The jury returned no verdict on the serious charges. He was convicted of contempt and lying to the FBI; a mistrial was declared on everything else.
US-based Firefox users get encrypted DNS lookups today or within a few weeks. //
I am of two minds on the privacy benefits of DoH/DoT, but my current feeling is that it's not worth bothering with because the benefits don't fit the common use cases.
On one hand, the idea of concealing your DNS lookups from your ISP feels like a positive one. Your ISP can still sniff your SNI requests and see where you're browsing, so it doesn't necessarily gain you any privacy, but it does at least make it more difficult for them to casually spy on you and aggregate your DNS lookups into a salable package.
On the other hand, giving all of your DNS lookups to Cloudflare or NextDNS potentially allows Cloudflare or NextDNS to....casually spy on you and aggregate your DNS lookups into a salable package. And your ISP can still see your SNI requests. So in a way, you're potentially inviting more people to watch you, not fewer.
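Mechanically, DoH is unexciting: the client sends an ordinary RFC 1035 DNS query, in binary wire format, as the body of an HTTPS request to a resolver endpoint such as https://cloudflare-dns.com/dns-query with Content-Type application/dns-message. A minimal sketch of building that query packet (the endpoint URL above is Cloudflare's; everything else follows the DNS wire format):

```python
import struct

def build_query(name, qtype=1):
    """Build a DNS wire-format query for `name` (qtype 1 = A record)."""
    # Header: ID 0 (recommended for DoH cache-friendliness), RD flag set,
    # one question, no answer/authority/additional records.
    header = struct.pack("!HHHHHH", 0, 0x0100, 1, 0, 0, 0)
    # Question name: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    )
    question = qname + b"\x00" + struct.pack("!HH", qtype, 1)  # class IN
    return header + question
```

POSTing build_query("example.com") to a DoH endpoint returns a response in the same wire format; DoT sends the identical packet over a TLS connection to port 853 instead of over HTTPS.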
I used DoH for most of last year, but there's a pretty strong argument to be made that you're better off running your own local recursive resolver with qname minimization enabled. This means your DNS requests are not encrypted, but it also means that you're directly doing the entire lookup yourself, which greatly reduces your vulnerability to DNS poisoning.
More to the point, I'm no longer certain there's much benefit at all in obscuring your DNS lookups if the purpose of that obfuscation is to hide activity from your ISP. A bit more than 95% of sites have a unique page-load fingerprint, and that makes figuring out what site you're visiting solely by IP address a trivial task regardless of DNS obfuscation.
With all of that in mind, I've ditched DoH/DoT and just set up unbound in full recursion mode. It's fast and it works great.
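For reference, the setup described above is only a few lines of unbound.conf (illustrative fragment; option names are unbound's own):

```
# /etc/unbound/unbound.conf -- full recursion, no forwarders
server:
    interface: 127.0.0.1
    # Send only the minimum necessary qname to each authoritative
    # server (RFC 7816 qname minimisation).
    qname-minimisation: yes
```

With no forward-zone configured, unbound resolves every name itself starting from the root servers, rather than handing the full query to an upstream resolver.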
Markey was against forcing encrypted phone providers to implement the NSA's Clipper Chip in their devices, but wanted us to reach a compromise with the FBI regardless. This completely startled us techies, who thought having the right answer was enough. It was at that moment that I learned an important difference between technologists and policy makers. Technologists want solutions; policy makers want consensus. //
Policy is often driven by exceptional events, like the FBI's desire to break the encryption on the San Bernardino shooter's iPhone. (The PATRIOT Act is the most egregious example I can think of.) Technologists tend to look at more general use cases, like the overall value of strong encryption to societal security. Policy tends to focus on the past, making existing systems work or correcting wrongs that have happened. It's hard to imagine policy makers creating laws around VR systems, because they don't yet exist in any meaningful way. Technology is inherently future focused. Technologists try to imagine better systems, or future flaws in present systems, and work to improve things.
As technologists, we iterate. It's how we write software. It's how we field products. We know we can't get it right the first time, so we have developed all sorts of agile systems to deal with that fact. Policy making is often the opposite. U.S. federal laws take months or years to negotiate and pass, and after that the issue doesn't get addressed again for a decade or more. It is much more critical to get it right the first time, because the effects of getting it wrong are long lasting. (See, for example, parts of the GDPR.) Sometimes regulatory agencies can be more agile. The courts can also iterate policy, but it's slower. //
In October, I attended the first ACM Symposium on Computer Science and the Law. Google counsel Brian Carver talked about his experience with the few computer science grad students who would attend his Intellectual Property and Cyberlaw classes every year at UC Berkeley. One of the first things he would do was give the students two different cases to read. The cases had nearly identical facts, and the judges who'd ruled on them came to exactly opposite conclusions. The law students took this in stride; it's the way the legal system works when it's wrestling with a new concept or idea. But it shook the computer science students. They were appalled that there wasn't a single correct answer.
If Sen. Lindsey Graham gets his way, the federal government will launch another attack on online privacy. The South Carolina Republican will ask lawmakers to give Attorney General William Barr and the Department of Justice unchecked access to all of your messaging, file-sharing, and video-sharing…
In the 2018 midterm elections, West Virginia became the first state in the U.S. to allow select voters to cast their ballot on a mobile phone via a proprietary app called "Voatz." Although there is no public formal description of Voatz's security model, the company claims that election security and integrity are maintained through the use of a permissioned blockchain, biometrics, a mixnet, and hardware-backed key storage modules on the user's device. In this work, we present the first public security analysis of Voatz, based on a reverse engineering of their Android application and the minimal available documentation of the system. We performed a clean-room reimplementation of Voatz's server and present an analysis of the election process as visible from the app itself.
We find that Voatz has vulnerabilities that allow different kinds of adversaries to alter, stop, or expose a user's vote, including a side-channel attack in which a completely passive network adversary can potentially recover a user's secret ballot. We additionally find that Voatz has a number of privacy issues stemming from their use of third-party services for crucial app functionality. Our findings serve as a concrete illustration of the common wisdom against Internet voting, and of the importance of transparency to the legitimacy of elections. //
The company's response is a perfect illustration of why non-computer non-security companies have no idea what they're doing, and should not be trusted with any form of security.
Does anyone know how exactly they backdoored the machines?
Yes and no. What we do know is that some of the machines had to be both secure and insecure, so that they could interoperate without raising any "red flags" by failing to interwork with secure machines...
We know from the book "Spy Catcher", written by Peter Wright and published in the early 1980's, that one method was to supply an "algorithmically secure machine" but with an "acoustic side channel" that leaked key information. Basically MI5 had gained audio access via an "infinity device" to the "Crypto Cell" at the Egyptian Embassy in London. They could thus hear the mechanical cipher machine running. Whilst it did not give the "key", what it did do was give the "wheel" starting points, turn-over points, and which wheels were rotated at any time. This reduced the "attack space" GCHQ had to deal with from "months to minutes".
As for affecting the "key stream": back in WWII the strategic (not tactical) German high-level cipher machine was the Lorenz teletype cipher machine. It used 12 cipher wheels with "movable lugs" on the wheel periphery, which caused a "key stream" to be built by XORing the lug positions. The wheel sizes were essentially "prime to each other", thus whilst they were only 30-60 steps each, their combined sequence was the multiple of their step sizes, which is immense. At Bletchley Park the traffic from these machines was codenamed "Fish" and the machine "Tunny". The work of two men broke the machine sight unseen, due to a mistake made by a German operator. There are various pages up on the web that will give you as little or as much information on it as you would like.
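The wheel arithmetic is easy to sketch: model each wheel as a ring of lug bits whose lengths are mutually coprime, XOR the current lug of every wheel into each keystream bit, and the pattern only repeats after the product of the wheel sizes. (A toy illustration with three small wheels; these are not the real Lorenz wheel sizes or lug patterns:)

```python
from math import lcm

def keystream(wheels, steps):
    """XOR the current lug bit of each wheel, advancing all wheels each step."""
    out = []
    for i in range(steps):
        bit = 0
        for wheel in wheels:
            bit ^= wheel[i % len(wheel)]
        out.append(bit)
    return out

# Toy wheels with coprime sizes 3, 5 and 7 -> combined period 3*5*7 = 105.
wheels = [[1, 0, 1], [1, 1, 0, 0, 1], [0, 1, 1, 0, 1, 0, 0]]
period = lcm(*(len(w) for w in wheels))
```

Because the sizes share no factors, the stream repeats only every 105 steps here; Lorenz's twelve wheels of sizes between 23 and 61 give an astronomically longer period, yet the fixed lug patterns still leak statistically, which is what Bletchley exploited.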
But what you need to remember is that,
1, The failings of the Lorenz machine are shared by many other machine ciphers, not just mechanical ones.
2, Virtually all machine ciphers pre-AES have both strong and weak keys, with a range in between.
The US field cipher based on the Boris Hagelin coin-counting mechanism suffered from the second issue; in fact it had rather more weak keys than strong. This was not a problem for the US military, as they "issued key schedules centrally": knowing which keys were strong and which were weak, they only ever used the strong keys. The knowledge of weak and strong was, as far as we can tell, worked out by William F. Friedman, and it was deliberately implemented as such by him. That is, the big weakness of any field cipher machine is that the enemy will capture it, and may well end up using it or copying its design to make their own machines (see the history of Enigma-type "rotor" machines to see that in action).
Thus the reasoning was: either the enemy is smart and will know about the strong keys and weak keys, in which case nothing is won or lost. However, if they do not, and assume all keys are the same, then your cryptanalysis team has just been given a great big bonus to make their lives easier. What was not known then, and is still not widely recognised, was the British invention of traffic analysis in all its forms, and the huge card-file database they used with it. This enabled them to identify specific traffic circuits and individual operators without the use of cryptanalysis. Which gave not just vast amounts of "probable plaintext" but also "probable cillies" and other bad operator habits. All of which made breaking of even strong keys very, very much easier. Thus traffic under weak keys becomes a lever to put in the cracks of strong keys...
What is also known is that Crypto AG supplied customers not just with the actual crypto machines but with a whole lot of key-generation support... This was in the form of manuals and machines, all of which pushed Crypto AG customers into producing either "weak key schedules" or "known key schedules", but the actual encryption machines worked identically to those that used "secure key schedules" and thus were fully compatible, so no red flags were raised.
The thing that we forget these days is that designing crypto kit is actually a hard process. Whilst it's easy to come up with complex algorithms, they are almost impossible to implement in a mechanical system that is reliable in use. Likewise for their pencil-and-paper analogues. They are also eye-wateringly expensive to make. If you are ever lucky enough to get your hands on just a single Enigma rotor, you will see it is superbly engineered from many, many parts, each one of which requires a great deal of engineering; thus there are hundreds of hours of work in each Enigma machine, even though the outer wooden box might look crude to modern eyes. Thus only fairly simple algorithms got implemented, based on minor variations to odometer or coin-counting mechanisms.
Until DES came along, nearly all "electronic" cipher machines were based on simple circuits like shift registers and SR latches. In most respects many were just simple copies of mechanical cipher algorithms. So the likes of a Lorenz wheel became a "ring counter with reset" and the lugs were replaced by a "plug board"; the algorithm remained the same, along with all its weaknesses... Even when put in software on 4- and 8-bit CPU systems or later microcontrollers, those old defective mechanical algorithms came along as "counters mod N" driving "lookup tables"... In part this happened due to "inventory costs": if you've invested a fortune in mechanical cipher systems, you want your new shiny electronic systems to be compatible, likewise those that are CPU based. It's the same old "legacy issue" that almost always works more for your enemy than it does for your security.
But acoustic side channels are known not to be the only ones. Even theoretically secure One Time Pad/Tape systems are practically insecure when implemented in machine form. The UK high-level super-encipherment machine known as Rockex, used by the Diplomatic Wireless Service (DWS) and designed by Canadian engineer "Pat" Bailey, suffered from this, as I mentioned years ago on this blog. In essence the Pad/Tape "additive" was done in a circuit using Post Office Type 600 relays. Even though the open-to-close times could be adjusted, there was always a slight time asymmetry that got out onto the telephone pair used to connect to the telex network. This time asymmetry could be used to determine the "additive" and thus strip it off, leaving the plaintext...
One solution to this is to use a "shift register" or secondary relay that "reclocked" the data signal, so that the time asymmetry seen on the line was not that of the relay doing the encipherment but that of the reclocking relay. In essence the contacts of the reclocking relay were "open" during the critical time period in which the encipherment relay changed state.
Which in theory should have made it secure... But open relay contacts, like open switch contacts, can be "jumped", because in reality they are small-value capacitors. This is what the "infinity device" was all about. It enabled you to put a high-frequency signal on the telephone pair that would see the encryption relay change state through the open contacts of the reclocking relay... So you needed to add extra circuitry to prevent the time-based side channel from the encryption relay being seen on the line. Thus leaving out that extra circuitry made a very secure system nearly totally insecure to anyone with the appropriate device in line, yet it retained total data-level compatibility with its secure counterparts, so again no "red flag" was waved.
@Erwin,
"Schweizer Allzweck-Taschenmesser" = The Swiss All-Purpose Pocket-Knife.
I can't stop laughing. Does anyone else get the irony?
The Swiss cryptography firm Crypto AG sold equipment to governments and militaries around the world for decades after World War II. They were owned by the CIA:
But what none of its customers ever knew was that Crypto AG was secretly owned by the CIA in a highly classified partnership with West German intelligence. These spy agencies rigged the company's devices so they could easily break the codes that countries used to send encrypted messages.
This isn't really news. We have long known that Crypto AG was backdooring crypto equipment for the Americans. What is new is the formerly classified documents describing the details:
The decades-long arrangement, among the most closely guarded secrets of the Cold War, is laid bare in a classified, comprehensive CIA history of the operation obtained by The Washington Post and ZDF, a German public broadcaster, in a joint reporting project.
The account identifies the CIA officers who ran the program and the company executives entrusted to execute it. It traces the origin of the venture as well as the internal conflicts that nearly derailed it. It describes how the United States and its allies exploited other nations' gullibility for years, taking their money and stealing their secrets.
The operation, known first by the code name "Thesaurus" and later "Rubicon," ranks among the most audacious in CIA history.
As an early domain name investor, Mike O’Connor had by 1994 snatched up several choice online destinations, including bar.com, cafes.com, grill.com, place.com, pub.com and television.com. Some he sold over the years, but for the past 26 years O’Connor refused to auction perhaps the most sensitive domain in his stable — corp.com. It is sensitive because years of testing shows whoever wields it would have access to an unending stream of passwords, email and other proprietary data belonging to hundreds of thousands of systems at major companies around the globe.
Now, facing 70 and seeking to simplify his estate, O’Connor is finally selling corp.com. The asking price — $1.7 million — is hardly outlandish for a 4-letter domain with such strong commercial appeal. O’Connor said he hopes Microsoft Corp. will buy it, but fears they won’t and instead it will get snatched up by someone working with organized cybercriminals or state-funded hacking groups bent on undermining the interests of Western corporations.
One reason O’Connor hopes Microsoft will buy it is that by virtue of the unique way Windows handles resolving domain names on a local network, virtually all of the computers trying to share sensitive data with corp.com are somewhat confused Windows PCs. More importantly, early versions of Windows actually encouraged the adoption of insecure settings that made it more likely Windows computers might try to share sensitive data with corp.com.
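The "unique way" in question is DNS suffix devolution: given a single-label name such as drive, and a machine whose Active Directory DNS suffix is, say, ad.internal.corp.com, older Windows clients try each parent suffix in turn, walking all the way down to the public domain corp.com. A sketch of the candidate list such a client generates (illustrative and simplified from Microsoft's actual behavior; the hostname and suffix are hypothetical):

```python
def devolve(short_name, dns_suffix, min_labels=2):
    """List the FQDNs a devolving client would try for a one-label name."""
    labels = dns_suffix.split(".")
    candidates = []
    # Try the full suffix first, then strip one leading label at a time.
    while len(labels) >= min_labels:
        candidates.append(short_name + "." + ".".join(labels))
        labels = labels[1:]
    return candidates
```

devolve("drive", "ad.internal.corp.com") yields drive.ad.internal.corp.com, then drive.internal.corp.com, and finally drive.corp.com, which is exactly why whoever controls corp.com receives the stray credentials and file-share traffic.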