5331 private links
For the past several years, we’ve moved toward a more secure web by strongly advocating that sites adopt HTTPS encryption. And within the last year, we’ve also helped users understand that HTTP sites are not secure by gradually marking a larger subset of HTTP pages as “not secure”. Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure”.
Sometimes, locking down a laptop with the latest defenses isn't enough. ///
Typing in a password is still better than relying entirely on credentials stored in TPM.
Microsoft Defender Application Guard protects your networks and data from malicious applications running in your web browser, but you must install and activate it first. //
Activation for TPM 2.0 and HVCI was explained before; now we will look at the activation procedure for Microsoft Defender Application Guard in Windows 10. MDAG uses virtualization-based technology to help safeguard your system from malicious and criminal websites that you visit with supported web browsers like Edge, Chrome, and Firefox. //
MDAG is included by default with the Windows 10 Pro, Enterprise, and Education editions. MDAG is part of Windows Features for those editions, so we will have to call up the Control Panel. //
The easiest way to get to the screen we need is to type "windows features" into the search box on your Windows 10 desktop. Be sure to select the Turn Windows Features On or Off item from the search results. //
Scroll down the list of features until you see Microsoft Defender Application Guard. Place a check in the checkbox for that item and click the OK button. The MDAG application will install and then ask you to reboot to activate. //
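For those who prefer the command line, the same optional feature can be toggled from an elevated PowerShell prompt. This is a sketch of the standard route, not taken from the article; `Windows-Defender-ApplicationGuard` is the name Windows uses for this optional feature:

```shell
# Check the current state of the feature:
Get-WindowsOptionalFeature -Online -FeatureName Windows-Defender-ApplicationGuard

# Enable it (run as Administrator; Windows will prompt for the
# same reboot the Windows Features dialog asks for):
Enable-WindowsOptionalFeature -Online -FeatureName Windows-Defender-ApplicationGuard
```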
Now that MDAG is installed and activated, it is time to check its settings. Click or tap the Start Menu button and select Settings (gear icon). On the Settings page, select Update & Security, then select the Windows Security item from the left-hand navigation bar. //
From the right pane, click the App & Browser Control item. //
The security settings under MDAG are stricter than many of us are used to, so you may find yourself wanting to make some tweaks. Click the Change Application Guard settings link on this page to see a list of security features that you may want to turn on or off depending on your activity.
KiwiSDR is hardware that uses a software-defined radio to monitor transmissions in a local area and stream them over the Internet. A largely hobbyist base of users does all kinds of cool things with the playing-card-sized devices. For instance, a user in Manhattan could connect one to the Internet so that people in Madrid, Spain, or Sydney, Australia, could listen to AM radio broadcasts, CB radio conversations, or even watch lightning storms in Manhattan.
On Wednesday, users learned that for years, their devices had been equipped with a backdoor that allowed the KiwiSDR creator—and possibly others—to log in to the devices with administrative system rights. The remote admin could then make configuration changes and access data not just for the KiwiSDR but in many cases to the Raspberry Pi, BeagleBone Black, or other computing devices the SDR hardware is connected to.
Don't want to compromise on the security of your Linux server? Install these six tools to set up an impenetrable network.
Domanski said MyCloud users on OS 3 can virtually eliminate the threat from this attack by simply ensuring that the devices are not set up to be reachable remotely over the Internet. MyCloud devices make it super easy for customers to access their data remotely, but doing so also exposes them to attacks like last month’s that led to the mass-wipe of MyBook Live devices.
Verify that the user’s VeraCrypt installation is not configured to encrypt keys and passwords stored in RAM. To check this option, open VeraCrypt Settings – Preferences – More settings – Performance/Driver configuration and see whether the Activate encryption of keys and passwords stored in the RAM box is selected.
If this option is selected, EFDD will be unable to locate the encryption keys. Note that disabling this setting requires a reboot, which defeats the purpose of doing so: the encrypted container will be locked/unmounted after the reboot.
What is RAM encryption?
According to Mounir IDRASSI, “RAM encryption mechanism serves two purposes: add a protection against cold boot attacks and add an obfuscation layer to make it much more difficult to recover encryption master keys from memory dumps, either live dumps or offline dumps (without it, locating and extracting master keys from memory dumps is relatively easy).” (We strongly recommend reading Mounir’s entire post, as it contains important details on how this protection is implemented.)
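A toy model can show why encrypting keys in RAM frustrates memory-dump scanning. This is illustrative only, not VeraCrypt's actual implementation: a scanner that searches a dump for the raw key bytes finds a plainly stored key, but not one that is kept only in XOR-masked form, because the raw bytes never appear in memory.

```python
import secrets

master_key = secrets.token_bytes(32)
mask = secrets.token_bytes(32)
# "Encrypted in RAM": only the masked form is ever stored.
masked_key = bytes(a ^ b for a, b in zip(master_key, mask))

# Simulated memory dumps: random heap noise around the stored key.
dump_plain = secrets.token_bytes(1024) + master_key + secrets.token_bytes(1024)
dump_masked = secrets.token_bytes(1024) + masked_key + secrets.token_bytes(1024)

assert master_key in dump_plain       # naive byte scan succeeds
assert master_key not in dump_masked  # raw key bytes absent from the dump

# The legitimate owner can still recover the key by unmasking it.
assert bytes(a ^ b for a, b in zip(masked_key, mask)) == master_key
```

In the real mechanism the mask itself is derived and managed by the driver; the point of the sketch is only that a signature scan over the dump no longer works.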
Known limitations
As you already know, breaking VeraCrypt is extremely complex. VeraCrypt presents one of the strongest encryption options we have encountered. Even a thousand computers or a network of powerful Amazon EC2 instances with top GPUs could spend years, if not hundreds of years, breaking a strong password. Extracting and using OTFE keys remains one of the few usable methods to break into encrypted containers. Yet this method has a number of limitations.
One of the most restrictive limitations is the requirement to obtain physical access to the computer while a VeraCrypt disk is mounted: only then are the encryption keys available in RAM. That computer must not be locked, and the authenticated user session must have administrator privileges (you need them to obtain the memory dump). Finally, the memory encryption option in VeraCrypt must not be used. On the bright side, the choice of encryption and hashing algorithms does not matter, and neither does the PIM number.
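A back-of-the-envelope estimate shows why brute force is hopeless against a strong password. The guess rate and hardware counts below are illustrative assumptions, not measured VeraCrypt benchmarks; the point is that a slow key-derivation function throttles guessing so severely that even a large GPU fleet needs geological timescales:

```python
# Assumed numbers, chosen only for illustration:
guesses_per_second_per_gpu = 1_000   # slow KDF limits each GPU's guess rate
gpus = 1_000 * 8                     # a thousand machines, 8 GPUs each
alphabet = 95                        # printable ASCII characters
length = 12                          # random password length

keyspace = alphabet ** length
seconds_worst_case = keyspace / (guesses_per_second_per_gpu * gpus)
years_worst_case = seconds_worst_case / (365 * 24 * 3600)
print(f"~{years_worst_case:.1e} years to exhaust the keyspace")
```

With these assumptions the worst case works out to on the order of a billion years, which is why key extraction from RAM, not guessing, is the practical avenue.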
Five months before DarkSide attacked the Colonial pipeline, two researchers discovered a way to rescue its ransomware victims. Then an antivirus company’s announcement alerted the hackers. //
On January 11, antivirus company Bitdefender said it was “happy to announce” a startling breakthrough. It had found a flaw in the ransomware that a gang known as DarkSide was using to freeze computer networks of dozens of businesses in the US and Europe. Companies facing demands from DarkSide could download a free tool from Bitdefender and avoid paying millions of dollars in ransom to the hackers.
But Bitdefender wasn’t the first to identify this flaw. Two other researchers, Fabian Wosar and Michael Gillespie, had noticed it the month before and had begun discreetly looking for victims to help. By publicizing its tool, Bitdefender alerted DarkSide to the lapse, which involved reusing the same digital keys to lock and unlock multiple victims. The next day, DarkSide declared that it had repaired the problem, and that “new companies have nothing to hope for.”
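The flaw class is easy to illustrate with a toy. This is not DarkSide's actual scheme; it is a hypothetical stream cipher sketch showing why reusing one symmetric key across victims is fatal: recover the key while helping a single victim and every other victim's data opens too.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from the key in a simple counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR stream cipher: the same operation inverts itself

# One key carelessly reused across two "victims":
shared_key = b"reused-master-key"
victim_a = encrypt(shared_key, b"victim A: payroll database")
victim_b = encrypt(shared_key, b"victim B: shipping manifests")

# A key recovered while quietly helping victim A decrypts victim B as well.
assert decrypt(shared_key, victim_b) == b"victim B: shipping manifests"
```

This is why Wosar and Gillespie could help victims discreetly, and why publicly announcing the tool told the gang exactly which mistake to fix.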
“Special thanks to BitDefender for helping fix our issues,” DarkSide said. “This will make us even better.” //
It wasn’t the first time Bitdefender trumpeted a solution that Wosar or Gillespie had beaten it to. Gillespie had broken the code of a ransomware strain called GoGoogle, and was helping victims without any fanfare, when Bitdefender released a decryption tool in May 2020. Other companies have also announced breakthroughs publicly, Wosar and Gillespie said.
“People are desperate for a news mention, and big security companies don’t care about victims,” Wosar said.
The carriers could offer their own “number parking” service for customers who know they will not require phone service for an extended period of time, or for those who just aren’t sure what they want to do with a number. Such services are already offered by companies like NumberBarn and Park My Phone, and they generally cost between $2 and $5 per month.
The Princeton study recommends consumers who are considering a number change instead either store the digits at an existing number parking service, or “port” the number to something like Google Voice. For a one-time $20 fee, Google Voice will let you port the number, and then you can continue to receive texts and calls to that number via Google Voice, or you can forward them to another number.
Play stupid games...
May 14, 2021
Sounds like DarkSide learned what dictators and cybercriminals alike have known for decades:
Want to shut down international logistics and shipping? Ok. Kill people by shutting down hospitals? The FBI will get around to investigating it. Commit some war crimes here and there? Maybe a condemnation and some sanctions.
F*** with America’s oil? Get ready to learn about American liberty. And by liberty, I mean you’re going to be liberated from everything you hold dear.
The cycle of painful updates begins anew
In completely unrelated news, upcoming versions of Signal will be periodically fetching files to place in app storage. These files are never used for anything inside Signal and never interact with Signal software or data, but they look nice, and aesthetics are important in software. Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding. We have a few different versions of files that we think are aesthetically pleasing, and will iterate through those slowly over time. There is no other significance to these files. ///
No other significance... except to exploit Cellebrite if it is used against a phone with Signal installed? Maybe?
For example, by including a specially formatted but otherwise innocuous file in an app on a device that is then scanned by Cellebrite, it’s possible to execute code that modifies not just the Cellebrite report being created in that scan, but also all previous and future generated Cellebrite reports from all previously scanned devices and all future scanned devices in any arbitrary way (inserting or removing text, email, photos, contacts, files, or any other data), with no detectable timestamp changes or checksum failures. This could even be done at random, and would seriously call the data integrity of Cellebrite’s reports into question. //
“Looking at both UFED and Physical Analyzer, though, we were surprised to find that very little care seems to have been given to Cellebrite’s own software security,” Marlinspike wrote. “Industry-standard exploit mitigation defenses are missing, and many opportunities for exploitation are present.” //
Marlinspike said he obtained the Cellebrite gear in a “truly unbelievable coincidence” as he was walking and “saw a small package fall off a truck ahead of me.” The incident does seem truly unbelievable. Marlinspike declined to provide additional details about precisely how he came into possession of the Cellebrite tools.
The fell-off-a-truck line wasn't the only tongue-in-cheek statement in the post. Marlinspike also wrote:
In completely unrelated news, upcoming versions of Signal will be periodically fetching files to place in app storage. These files are never used for anything inside Signal and never interact with Signal software or data, but they look nice, and aesthetics are important in software. Files will only be returned for accounts that have been active installs for some time already, and only probabilistically in low percentages based on phone number sharding. We have a few different versions of files that we think are aesthetically pleasing, and will iterate through those slowly over time. There is no other significance to these files. //
“We are of course willing to responsibly disclose the specific vulnerabilities we know about to Cellebrite if they do the same for all the vulnerabilities they use in their physical extraction and other services to their respective vendors, now and in the future,” Marlinspike wrote.
In January, we learned about a Chinese espionage campaign that exploited four zero-days in Microsoft Exchange. One of the characteristics of the campaign, in the later days when the Chinese probably realized that the vulnerabilities would soon be fixed, was to install a web shell in compromised networks that would give them subsequent remote access. Even if the vulnerabilities were patched, the shell would remain until the network operators removed it.
Now, months later, many of those shells are still in place. And they’re being used by criminal hackers as well.
On Tuesday, the FBI announced that it successfully received a court order to remove “hundreds” of these web shells from networks in the US.
This is nothing short of extraordinary, and I can think of no real-world parallel. //
xcv • April 14, 2021 12:32 PM
@ O.P.
But every courthouse in the United States is running on Microsoft’s legal-industry-specific software products. Lexis-Nexis databases, title deed and recording software, court filing software, etc. So some guy is going to end up in the federal penitentiary, and all the court records will be deleted, altered, or hacked on Microsoft software, and after a few years, nobody can even look up any records as to why the guy’s in prison, but they’re never going to let him out, because he’s been classified as a violent felon in the federal prison population.
It makes me wonder what they classify as “violent crime” or not, because pulling the trigger of a handgun with your finger is no more violent than striking a key on a computer keyboard with the same finger — and consequences no longer matter in court — because modern courts no longer require the third of the three elements that, since the time of the ancient Romans, have been necessary to convict someone of a crime, namely
- mens rea;
- actus reus; &
- noxa rea.
The ancient Romans insisted that if (#1) it wasn’t something you intended to do, or (#2) it wasn’t something you really did, or (#3) you did not really harm anyone — then you didn’t commit a crime, and therefore you could not be convicted of a crime.
Modern courts on the other hand have repealed the classical third necessary element of conviction for crime, and omitted due process by either imposing punishment for harmless or victimless acts, or by falsely imputing harm (noxa) where none exists.
A security researcher is recommending against the LastPass password manager after detailing seven trackers found in its Android app, The Register reports. Although there is no suggestion that the trackers, which were analyzed by researcher Mike Kuketz, are transferring a user’s actual passwords or usernames, Kuketz says their presence is bad practice for a security-critical app handling such sensitive information.
Don't trust software you haven't audited.
If you must trust software you haven't audited, then choose to trust code that's exposed to many developers who independently are likely to speak up about a vulnerability.
Open source isn't inherently more secure than proprietary software, but the systems in place to fix it are far better planned, implemented, and staffed.
Stories about computer security tend to go viral when they bridge the vast divide between geeks and luddites, and this week’s news about a hacker who tried to poison a Florida town’s water supply was understandably front-page material. But for security nerds who’ve been warning about this sort of thing for ages, the most surprising aspect of the incident seems to be that we learned about it at all.
On February 5, 2021, unidentified cyber actors obtained unauthorized access to the supervisory control and data acquisition (SCADA) system at a U.S. drinking water treatment plant. The unidentified actors used the SCADA system’s software to increase the amount of sodium hydroxide, also known as lye, a caustic chemical, as part of the water treatment process. Water treatment plant personnel immediately noticed the change in dosing amounts and corrected the issue before the SCADA system’s software detected the manipulation and alarmed due to the unauthorized change. As a result, the water treatment process remained unaffected and continued to operate as normal. The cyber actors likely accessed the system by exploiting cybersecurity weaknesses, including poor password security, and an outdated operating system. Early information indicates it is possible that a desktop sharing software, such as TeamViewer, may have been used to gain unauthorized access to the system. Onsite response to the incident included Pinellas County Sheriff Office (PCSO), U.S. Secret Service (USSS), and the Federal Bureau of Investigation (FBI).
The FBI, the Cybersecurity and Infrastructure Security Agency (CISA), the Environmental Protection Agency (EPA), and the Multi-State Information Sharing and Analysis Center (MS-ISAC) have observed cyber criminals targeting and exploiting desktop sharing software and computer networks running operating systems with end of life status to gain unauthorized access to systems. Desktop sharing software, which has multiple legitimate uses—such as enabling telework, remote technical support, and file transfers—can also be exploited through malicious actors’ use of social engineering tactics and other illicit measures. Windows 7 will become more susceptible to exploitation due to lack of security updates and the discovery of new vulnerabilities. Microsoft and other industry professionals strongly recommend upgrading computer systems to an actively supported operating system.
In 2016, Juniper removed the backdoored Dual_EC DRBG algorithm from its ScreenOS operating system. NIST also withdrew the algorithm, citing security concerns.
Juniper’s use of Dual_EC dates to 2008, at least a year after Dan Shumow and Niels Ferguson’s landmark presentation at the CRYPTO conference, which first cast suspicion on Dual_EC being backdoored by the NSA.
To many, Juniper’s move to remove Dual_EC (and also the ANSI X9.31 PRNG) confirmed the widely held belief that the vulnerabilities were tied to NSA operations described in the 2013 article published by the German publication Der Spiegel. That article described the existence of a catalog of hardware and software tools used by the NSA to infiltrate equipment manufactured by Juniper, Cisco, and Huawei. The story was based on 2013 documents leaked by former NSA contractor Edward Snowden.
Calls for encryption backdoors date back to the 1990s and the so-called Crypto Wars. That’s when President Bill Clinton’s administration insisted that the U.S. government have a way to break the encryption that was exported outside of the United States.
Employees worry that, should Signal fail to build policies and enforcement mechanisms to identify and remove bad actors, the fallout could bring more negative attention to encryption technologies from regulators at a time when their existence is threatened around the world. //
“The world needs products like Signal — but they also need Signal to be thoughtful,” said Gregg Bernstein, a former user researcher who left the organization this month over his concerns. “It’s not only that Signal doesn’t have these policies in place. But they’ve been resistant to even considering what a policy might look like.” //
For years, the company has faced complaints that its requirement that people use real phone numbers to create accounts raises privacy and security concerns. And so Signal has begun working on an alternative: letting people create unique usernames. But usernames (and display names, should the company add those, too) could enable people to impersonate others — a scenario the company has not developed a plan to address, despite completing much of the engineering work necessary for the project to launch. //
Marlinspike said it was important to him that Signal not become neutered in the pursuit of a false neutrality between good and bad actors. Marginalized groups depend on secure private messaging to safely conduct everything from basic day-to-day communication to organized activism, he told me. Signal exists to improve that experience and make it accessible to more people, even if bad actors might also find it useful.
“I want us as an organization to be really careful about doing things that make Signal less effective for those sort of bad actors if it would also make Signal less effective for the types of actors that we want to support and encourage,” he said. “Because I think that the latter have an outsized risk profile. There’s an asymmetry there, where it could end up affecting them more dramatically.”