At issue is a component of the digital text encoding standard Unicode, which allows computers to exchange information regardless of the language used. Unicode currently defines more than 143,000 characters across 154 different language scripts (in addition to many non-script character sets, such as emojis).
Specifically, the weakness involves Unicode’s bi-directional or “Bidi” algorithm, which handles displaying text that includes mixed scripts with different display orders, such as Arabic — which is read right to left — and English (left to right).
But computer systems need to have a deterministic way of resolving conflicting directionality in text. Enter the “Bidi override,” which can be used to make left-to-right text read right-to-left, and vice versa.
“In some scenarios, the default ordering set by the Bidi Algorithm may not be sufficient,” the Cambridge researchers wrote. “For these cases, Bidi override control characters enable switching the display ordering of groups of characters.”
Bidi overrides enable even single-script characters to be displayed in an order different from their logical encoding. As the researchers point out, this fact has previously been exploited to disguise the file extensions of malware disseminated via email.
Here’s the problem: most programming languages let you put these Bidi overrides in comments and strings. That’s bad for two reasons: compilers and interpreters ignore everything inside a comment, including control characters, and string literals may contain arbitrary characters, control characters included. //
“Therefore, by placing Bidi override characters exclusively within comments and strings, we can smuggle them into source code in a manner that most compilers will accept. Our key insight is that we can reorder source code characters in such a way that the resulting display order also represents syntactically valid source code.”
“Bringing all this together, we arrive at a novel supply-chain attack on source code. By injecting Unicode Bidi override characters into comments and strings, an adversary can produce syntactically-valid source code in most modern languages for which the display order of characters presents logic that diverges from the real logic. In effect, we anagram program A into program B.”
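Since the paper's core mitigation is simply to flag these control characters wherever they appear in source, here is a minimal defensive sketch (the file handling and output format are illustrative; the character set is the Bidi overrides and isolates the researchers name):

```python
import sys

# Bidi override/isolate characters abused by the "Trojan Source" attack.
# None of these belongs in ordinary source code, so their mere presence
# is worth flagging.
BIDI_CONTROLS = {
    "\u202A", "\u202B", "\u202C", "\u202D", "\u202E",  # LRE, RLE, PDF, LRO, RLO
    "\u2066", "\u2067", "\u2068", "\u2069",            # LRI, RLI, FSI, PDI
}

def scan(path: str) -> None:
    """Print the location of every Bidi control character in a file."""
    with open(path, encoding="utf-8") as handle:
        for lineno, line in enumerate(handle, start=1):
            for col, char in enumerate(line, start=1):
                if char in BIDI_CONTROLS:
                    print(f"{path}:{lineno}:{col}: U+{ord(char):04X}")

if __name__ == "__main__":
    for source_file in sys.argv[1:]:
        scan(source_file)
```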
Unfortunately, the 0.74 stable PuTTY release does not safely guard plain-text passwords provided to it via the -pw command line option for the psftp, pscp, and plink utilities, as the documentation clearly warns. There is evidence within the source code that the authors are aware of the problem, but the exposure is confirmed on Microsoft Windows, Oracle Linux, and the package prepared by the OpenBSD project.
After discussion of the issue, PuTTY's original author, Simon Tatham, developed a new -pwfile option, which reads an SSH password from a file, removing it from the command line. This feature can be backported into the current 0.76 stable release. Full instructions for applying the backport are presented, along with a .netrc wrapper for psftp, also implemented on Windows under Busybox.
While the -pw option is attractive for SSH users who are required to use passwords (and forbidden from using keys) for scripting activities, the exposure risk should be understood for any use of the feature. Users with security concerns should obtain the -pwfile functionality, either by applying a patch to the 0.76 stable release, or using a snapshot release found on the PuTTY website.
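As a hedged sketch of the safer pattern (the host, account, and password-file path are placeholders, and the file is assumed to already exist): keep the password in a file only the owner can read and hand it to plink with -pwfile, so it never appears in the process list the way a -pw argument does.

```python
import os
import stat
import subprocess

# The password lives in a file with 0600 permissions rather than on the
# command line, where any local user could read it from the process list.
pw_path = os.path.expanduser("~/.putty_pw")  # hypothetical location
os.chmod(pw_path, stat.S_IRUSR | stat.S_IWUSR)

# -batch suppresses interactive prompts; -pwfile reads the password from
# the named file (available via the backport or a snapshot release).
result = subprocess.run(
    ["plink", "-batch", "-pwfile", pw_path, "user@example.com", "uptime"],
    capture_output=True, text=True, check=True,
)
print(result.stdout)
```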
Fourteen of the world's leading computer security and cryptography experts have released a paper arguing against the use of client-side scanning because it creates security and privacy risks.
Client-side scanning (CSS, not to be confused with Cascading Style Sheets) involves analyzing data on a mobile device or personal computer prior to the application of encryption for secure network transit or remote storage. CSS in theory provides a way to look for unlawful content while also allowing data to be protected off-device.
Apple in August proposed a CSS system by which it would analyze photos destined for iCloud backup on customers' devices to look for child sexual abuse material (CSAM), only to backtrack in the face of objections from the security community and many advocacy organizations.
The paper [PDF], "Bugs in our Pockets: The Risks of Client-Side Scanning," elaborates on the concerns raised immediately following Apple's CSAM scanning announcement with an extensive analysis of the technology.
Penned by some of the most prominent computer science and cryptography professionals – Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Jon Callas, Whitfield Diffie, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Vanessa Teague, and Carmela Troncoso – the paper contends that CSS represents bulk surveillance that threatens free speech, democracy, security, and privacy.
"In this report, we argue that CSS neither guarantees efficacious crime prevention nor prevents surveillance," the paper says.
"Indeed, the effect is the opposite. CSS by its nature creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic. There are multiple ways in which client-side scanning can fail, can be evaded, and can be abused." //
But the paper notes that this approach depends on Apple being willing and able to enforce its policy, which might not survive insistence by nations that they can dictate policy within their borders.
"Apple has yielded to such pressures in the past, such as by moving the iCloud data of its Chinese users to three data centers under the control of a Chinese state-owned company, and by removing the 'Navalny' voting app from its Russian app store," the paper says.
And even if Apple were to show unprecedented spine by standing up to authorities demanding CSS access, nations like Russia and Belarus could collude, each submitting a list of supposed child-safety image identifiers that in fact point to political content, the paper posits.
"In summary, Apple has devoted a major engineering effort and employed top technical talent in an attempt to build a safe and secure CSS system, but it has still not produced a secure and trustworthy design," the paper says. //
CSS, the paper says, entails privacy risks in two forms: “upgrades” that expand what content can be scanned, and adversarial misuse.
And it poses security risks, such as deliberate efforts to get people reported by the system, and software vulnerabilities. The authors conclude that CSS systems cannot be trustworthy or secure because of the way they're designed.
"The proposal to preemptively scan all user devices for targeted content is far more insidious than earlier proposals for key escrow and exceptional access," the paper says.
"Instead of having targeted capabilities such as to wiretap communications with a warrant and to perform forensics on seized devices, the agencies’ direction of travel is the bulk scanning of everyone’s private data, all the time, without warrant or suspicion. That crosses a red line. Is it prudent to deploy extremely powerful surveillance technology that could easily be extended to undermine basic freedoms?"
One thing that ultimately allowed the impact of this event to be greatly reduced is that Android devices do not check the expiration date of the Root CA Certificate when establishing trust in the certificate chain... This means that it is still possible to anchor on the expired IdenTrust Root CA, and those Android devices would work, //
The 'New Default Chain' still ultimately anchors on the IdenTrust Root CA, meaning compatibility with Android devices that won't check the expiration date, but it passes through the new Let's Encrypt ISRG Root X1 CA which more modern clients will have in their trust store. This means those modern clients will stop the chain there and accept it as trusted too. Win-win. //
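A toy sketch of why this works (hypothetical types and logic, not any real client's code): path building stops at the first certificate already present in the local trust store, so a modern client anchors on ISRG Root X1 and never evaluates the expired IdenTrust root further down the chain.

```python
# A client accepts a presented chain at the FIRST certificate it already
# trusts. Modern clients stop at ISRG Root X1; old Android walks on and
# anchors on the IdenTrust root, whose expiry date it never checks.
def find_trust_anchor(chain, trust_store):
    for subject in chain:
        if subject in trust_store:
            return subject
    return None

chain = ["leaf", "R3", "ISRG Root X1", "DST Root CA X3"]
print(find_trust_anchor(chain, {"ISRG Root X1"}))    # modern client
print(find_trust_anchor(chain, {"DST Root CA X3"}))  # old Android
```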
One interesting side effect of this, though, is that you can then modify the Root Certificates in the trust store: because no signature validation is taking place, meaning no integrity check is taking place, the modified Root Certificate will be treated as perfectly valid. This is the part that I'd never thought about. //
When first thinking about this issue, I felt a little like I did when I first found out, all those years ago, that signatures are not validated on roots, and how ridiculous that seemed. Once you think about it more, though, you realise that it does make sense; it just doesn't seem logical at first. For this change to actually have an effect on the client, you need access to that client with administrative privileges to change the root store. If your concern is that an attacker might do something like this, I'd suggest you have far bigger issues to worry about, given that the attacker has admin/root and is on your device!
From the tech press, Google is taking blow after blow after blow. There are massive security vulnerabilities in what they offer you and me every day, and those privacy issues are going to become a big enough problem to make the government look a little more closely at them.
So how do they keep the government from looking at them? They announce something that they know enough people in government will like and take credit for their “brave” stance and business policy. //
The latest tracking nightmare for Chrome users comes in two parts. First, Google has ignored security warnings and launched a new Chrome API to detect and report when you’re “idle,” i.e., not actively using your device. Apple warns “this is an obvious privacy concern,” and Mozilla that it’s “too tempting an opportunity for surveillance.” //
They are also banking on Congress's ongoing fascination with trying to regulate Facebook, hoping they can keep a low profile on all this privacy stuff and not have to worry about a Congressional investigation.
So that’s why they are choosing right now to go after so-called “climate deniers”: it shifts the focus away from them at a time when it’s very easy to distract their users and Congress. But Google is going to find itself in increasing trouble before too long, and Congress had better start looking deeper into these security issues, because Google is going to cause an insane amount of identity theft.
Syniverse, a company that routes hundreds of billions of text messages every year for hundreds of carriers including Verizon, T-Mobile, and AT&T, revealed to government regulators that a hacker gained unauthorized access to its databases for five years. Syniverse and carriers have not said whether the hacker had access to customers' text messages. //
Syniverse says its intercarrier messaging service processes over 740 billion messages each year for over 300 mobile operators worldwide. Though Syniverse likely isn't a familiar name to most cell phone users, the company plays a key role in ensuring that text messages get to their destination.
Do you know how to spot a scam?
Every year, thousands of people like you lose money to phishing. If a hacker were pretending to be your bank, would you be able to tell?
A security auditor for our servers has demanded the following within two weeks:
- A list of current usernames and plain-text passwords for all user accounts on all servers
- A list of all password changes for the past six months, again in plain-text
- A list of "every file added to the server from remote devices" in the past six months
- The public and private keys of any SSH keys
- An email sent to him every time a user changes their password, containing the plain text password
We're running Red Hat Linux 5/6 and CentOS 5 boxes with LDAP authentication.
As far as I'm aware, everything on that list is either impossible or incredibly difficult to get, but if I don't provide this information we face losing access to our payments platform and losing income during a transition period as we move to a new service. Any suggestions for how I can solve or fake this information? //
Ask him directly how to execute his requirements; admit you don't know how, and say you would like to leverage his experience. Once you're out and gone, a response to his "I have over 10 years of experience in security auditing" would be: "No, you have 5 minutes of experience repeated hundreds of times."
https://www.rsync.net/resources/regulatory/pci.html
The rsync.net platform is so simple that our first PCI scan vendor, in 2006, could not actually verify that we were up and running.
We offered so little attack surface for their scans that they (incorrectly) assumed we were offline.
Our platform only answers on port 22 with OpenSSH.
That's it.
I work in security, and I'd title this "the most secure platform in the world."
stavros 6 months ago
Oh yeah? Well I run one where no ports are open. In fact, I haven't even connected it to the network.
xarope 6 months ago
Wasn't that the joke about how the original Windows NT server got its C2/ITSEC rating?
stavros 6 months ago
I haven't heard that joke!
batch12 6 months ago
I'll raise you-- I have one that I keep powered off...
stavros 6 months ago
I can top that: I don't have one.
krylon 6 months ago
That's nothing. I don't have thousands.
rubiquity 6 months ago
What are you two even talking about?
paulmd 6 months ago
The three golden rules of computer security: do not own a computer, do not power it on, and do not use it.
Microsoft announced yesterday that Windows 11 will require TPM (Trusted Platform Module) chips on existing and new devices. It’s a significant hardware change that has been years in the making, but Microsoft’s messy way of communicating this has left many confused about whether their hardware is compatible. What is a TPM, and why do you need one for Windows 11 anyway?
“The Trusted Platform Module (TPM) is a chip that is either integrated into your PC’s motherboard or added separately into the CPU,” explains David Weston, director of enterprise and OS security at Microsoft. “Its purpose is to protect encryption keys, user credentials, and other sensitive data behind a hardware barrier so that malware and attackers can’t access or tamper with that data.”
On 30th September 2021, the root certificate that Let's Encrypt are currently using, the IdenTrust DST Root CA X3 certificate, will expire. You may or may not need to do anything about this Root CA expiring, but I'm betting a few things will probably break on that day so here's what you need to know!
There are hundreds of publicly trusted Certificate Authorities, and a subset of those implement a specification for certificate request/renewal called ACME (Automatic Certificate Management Environment; v2 is specified in RFC 8555: https://datatracker.ietf.org/doc/html/rfc8555). Anyone can create (and use) a new certificate authority, but only recognised CAs which can prove they follow strict issuance guidelines become generally trusted. You can, for instance, create your own ACME certificate authority and trust it within your organisation, but it won't be trusted by computers outside your organisation.
As more public certificate authorities hop on the ACME bandwagon, it is important to understand the details and limitations of their implementations. This page will attempt to keep track of that data.
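To make the mechanics concrete, here's a small sketch (standard library only) that fetches an ACME v2 directory object, the entry point RFC 8555 requires every conforming CA to publish. Let's Encrypt's production URL is used as the example:

```python
import json
import urllib.request

# Every ACME v2 CA publishes a directory object mapping operation names
# to endpoint URLs; a client discovers everything else from here.
DIRECTORY_URL = "https://acme-v02.api.letsencrypt.org/directory"

with urllib.request.urlopen(DIRECTORY_URL) as response:
    directory = json.load(response)

# Expected keys per RFC 8555: newNonce, newAccount, newOrder,
# revokeCert, keyChange, plus a meta object with CA policy details.
for name, value in sorted(directory.items()):
    print(f"{name}: {value}")
```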
Summary
“When security risks in web services are discovered by independent security researchers who understand the severity of the risk, they often lack the channels to disclose them properly. As a result, security issues may be left unreported. security.txt defines a standard to help organizations define the process for security researchers to disclose security vulnerabilities securely.” //
What is the main purpose of security.txt?
The main purpose of security.txt is to help make things easier for companies and security researchers when trying to secure platforms. Thanks to security.txt, security researchers can easily get in touch with companies about security issues.
Where should I put the security.txt file?
For websites, the security.txt file should be placed under the /.well-known/ path (/.well-known/security.txt) [RFC8615]. It can also be placed in the root directory (/security.txt) of a website, especially if the /.well-known/ directory cannot be used for technical reasons, or simply as a fallback. The file can be placed in both locations of a website at the same time.
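For illustration, a minimal security.txt might look like the following (all values are placeholders; Contact and Expires are the fields the specification treats as essential):

```
Contact: mailto:security@example.com
Expires: 2023-12-31T23:59:59.000Z
Encryption: https://example.com/pgp-key.txt
Preferred-Languages: en
Canonical: https://example.com/.well-known/security.txt
```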
Yesterday, independent newsroom ProPublica published a detailed piece examining the popular WhatsApp messaging platform's privacy claims. The service famously offers "end-to-end encryption," which most users interpret as meaning that Facebook, WhatsApp's owner since 2014, can neither read messages itself nor forward them to law enforcement.
This claim is contradicted by the simple fact that Facebook employs about 1,000 WhatsApp moderators whose entire job is—you guessed it—reviewing WhatsApp messages that have been flagged as "improper." //
The loophole in WhatsApp's end-to-end encryption is simple: The recipient of any WhatsApp message can flag it. Once flagged, the message is copied on the recipient's device and sent as a separate message to Facebook for review.
Messages are typically flagged—and reviewed—for the same reasons they would be on Facebook itself, including claims of fraud, spam, child porn, and other illegal activities. When a message recipient flags a WhatsApp message for review, that message is batched with the four most recent prior messages in that thread and then sent on to WhatsApp's review system as attachments to a ticket. //
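In other words (a hypothetical sketch of the flow described above, not WhatsApp's actual code), the client-side reporting logic amounts to something like:

```python
# When the recipient flags a message, the client bundles it with the four
# most recent prior messages in the thread and uploads the plaintext
# copies for review; the encryption in transit is never broken, merely
# sidestepped by one endpoint.
def build_report_ticket(thread, flagged_index, context_size=4):
    start = max(0, flagged_index - context_size)
    return {
        "reason": "user_report",
        "attachments": thread[start:flagged_index + 1],
    }

thread = [f"message {i}" for i in range(10)]
print(build_report_ticket(thread, flagged_index=9))
```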
Although nothing indicates that Facebook currently collects user messages without manual intervention by the recipient, it's worth pointing out that there is no technical reason it could not do so. The security of "end-to-end" encryption depends on the endpoints themselves—and in the case of a mobile messaging application, that includes the application and its users.
An "end-to-end" encrypted messaging platform could choose to, for example, perform automated AI-based content scanning of all messages on a device, then forward automatically flagged messages to the platform's cloud for further action. Ultimately, privacy-focused users must rely on policies and platform trust as heavily as they do on technological bullet points. //
Although WhatsApp's "end-to-end" encryption of message contents can only be subverted by the sender or recipient devices themselves, a wealth of metadata associated with those messages is visible to Facebook—and to law enforcement authorities or others that Facebook decides to share it with—with no such caveat.
ProPublica found more than a dozen instances of the Department of Justice seeking WhatsApp metadata since 2017. These requests are known as "pen register orders," terminology dating from requests for connection metadata on landline telephone accounts. ProPublica correctly points out that this is an unknown fraction of the total requests in that time period, as many such orders, and their results, are sealed by the courts.
There are quite a few Internet services today advertising absolute privacy from both hackers and government agencies, such as Signal for messaging. After Lavabit’s controversial shutdown, ProtonMail rose up to become the advertised email service of choice for privacy-minded users, especially those with secrets to keep. Sometimes, however, those secrets may run afoul of certain countries’ laws, which often leads to email service providers handing over data to identify users under investigation. While ProtonMail advertised privacy and security against such actions, it was apparently forced to cave in to such legal demands, leading to the arrest of climate change activists in France. //
Etienne - Tek
@tenacioustek
So @ProtonMail received a legal request from Europol through Swiss authorities to provide information about Youth for Climate action in Paris, they provided the IP address and information on the type of device used to the police //
Of course, ProtonMail was legally forced to hand over that data, but it didn't get away without incurring the wrath of the Web. Critics questioned why it possessed users' IP addresses in the first place when it advertises that it doesn't log IP addresses by default. ProtonMail founder and CEO Andy Yen explains that it only started logging the specific user's IP address after it was legally forced to do so by Swiss authorities.
The US Air Force's first ever chief software officer has quit the job after branding it "probably the most challenging and infuriating of my entire career" in a remarkably candid blog post.
Nicolas Chaillan's impressively blunt leaving note, which he posted to his LinkedIn profile, castigated the USAF's senior hierarchy for failing to prioritise basic IT issues, saying: "A lack of response and alignment is certainly a contributor to my accelerated exit."
Chaillan took on his chief software officer role in May 2019, having previously worked at the US Department of Defense rolling out DevSecOps practices to the American military. Before that he founded two companies.
In his missive, Chaillan also singled out a part of military culture that features in both the US and the UK: the practice of appointing mid-ranking generalist officers to run specialist projects.
"Please," he implored, "stop putting a Major or Lt Col (despite their devotion, exceptional attitude, and culture) in charge of ICAM, Zero Trust or Cloud for 1 to 4 million users when they have no previous experience in that field – we are setting up critical infrastructure to fail."
Jesse Kelly
@JesseKellyDC
Tried to warn everyone drastic measures should have been taken back when that pipeline got hacked. We paid a ransom and the administration basically said, “not much problem”.
We just DRIP weakness under Biden. They’re all coming for us now.
Jacqui Heinrich
@JacquiHeinrich
🚨BREAKING, thread:
The State Department has been hit by a cyber attack, and notifications of a possible serious breach were made by the Department of Defense Cyber Command.
7:52 PM · Aug 21, 2021
Sebastian Gorka DrG
@SebGorka
“America is Back”
To remain protected online, you should check whether your initial line of defense is secure. First, check your password to see if it’s compromised. There are a number of security programs that will let you do this. And make sure you’re using a well-crafted password.
We also recommend you limit the use of SMS as a 2FA method if you can. You can instead use app-based one-time codes, such as through Google Authenticator. In this case, the code is generated within the Google Authenticator app on your device itself, rather than being sent to you.
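For the curious, app-based codes are just TOTP (RFC 6238): the device derives a short-lived code from a shared secret and the current time, so nothing ever travels over SMS. A compact sketch (the base32 secret is a placeholder):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """Derive the current TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)  # 30s time step
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    number = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(number % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # placeholder secret, prints e.g. "492039"
```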
However, this approach can also be compromised by hackers using some sophisticated malware. A better alternative would be to use dedicated hardware devices such as YubiKey.
These are small USB (or near-field communication-enabled) devices that provide a streamlined way to enable 2FA across different services.
Such physical devices need to be plugged into or brought into close proximity of a login device as a part of 2FA, therefore mitigating the risks associated with visible one-time codes, such as codes sent by SMS.
Grant found the issue, which has been present for at least 12 years, in Buffalo routers, specifically in their Arcadyan-based web interface software.
Bug hunting
In a blog post, the researcher explained that one of the first things he looks at while analyzing any web application or interface is how it handles authentication.
Grant found that the function bypass_check() was only checking as many bytes as are in the bypass_list strings.
Grant wrote: “This means that if a user is trying to reach http://router/images/someimage.png, the comparison will match since /images/ is in the bypass list, and the URL we are trying to reach begins with /images/.
“The bypass_check() function doesn’t care about strings which come after, such as ‘someimage.png’.
“So what if we try to reach /images/../<somepagehere>? For example, let’s try /images/..%2finfo.html. The /info.html URL normally contains all of the nice LAN/WAN info when we first login to the device, but returns any unauthenticated users to the login screen.”
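A hedged reconstruction of the flaw (the bypass_check and bypass_list names come from the write-up; the logic and list entries are assumed for illustration): the check is a pure prefix match, so any path that merely begins with a bypassed prefix, including one that traverses back out with ../, sails through unauthenticated.

```python
BYPASS_LIST = ["/images/", "/js/", "/css/"]  # illustrative entries

def bypass_check(url: str) -> bool:
    """Return True if the URL may be served without authentication."""
    # Flaw: only the first len(prefix) bytes are compared, and the rest
    # of the path is never normalised or inspected.
    return any(url.startswith(prefix) for prefix in BYPASS_LIST)

print(bypass_check("/images/someimage.png"))  # True, as intended
print(bypass_check("/images/../info.html"))   # True: auth bypass via traversal
                                              # (sent as /images/..%2finfo.html)
```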