Flaw affecting selected sudo versions is easy for unprivileged users to exploit.
The sudo version history shows that the vulnerability was introduced in 2009 and remained active until 2018, with the release of 1.8.26b1. Systems or software using a vulnerable version should move to version 1.8.31 as soon as practical. Those who can’t update right away can prevent exploits by making sure pwfeedback is disabled. To check its status, run:
sudo -l
If pwfeedback is listed among the “Matching Defaults entries” in the output, the sudoers configuration is vulnerable on affected sudo versions. The following is an example of output that indicates a vulnerable sudo configuration:
$ sudo -l
Matching Defaults entries for millert on linux-build:
insults, pwfeedback,
Disabling pwfeedback involves using the visudo command to edit the sudoers file and adding an exclamation point so that
Defaults pwfeedback
becomes:
Defaults !pwfeedback
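The affected version window described above (introduced with sudo 1.7.1 in 2009, no longer exploitable as of 1.8.26) can be checked against a version string with a rough helper. This is my own sketch, not a tool from the advisory, and it deliberately ignores sudo's patch and beta suffixes ("p1", "b1") beyond a coarse comparison:

```python
import re

def version_tuple(version: str):
    """Return '1.8.25p1' as (1, 8, 25); patch/beta suffixes are dropped."""
    match = re.match(r"(\d+)\.(\d+)\.(\d+)", version)
    if match is None:
        raise ValueError(f"unrecognised sudo version: {version!r}")
    return tuple(int(part) for part in match.groups())

def possibly_vulnerable(version: str) -> bool:
    # 1.7.1 (2009) through the 1.8.25 patch releases; fixed-behavior
    # releases start at 1.8.26b1 per the version history above.
    return (1, 7, 1) <= version_tuple(version) <= (1, 8, 25)
```

Even on a version this flags, the configuration is only exploitable if pwfeedback is enabled, so the `sudo -l` check above remains the deciding test.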
For maximum security on your domains, consider adopting some or all of the following best practices:
- Use registration features like Registry Lock that can help protect domain name records from being changed. Note that this may increase the amount of time it takes going forward to make key changes to the locked domain (such as DNS changes).
- Use DNSSEC (both signing zones and validating responses).
- Use access control lists for applications, Internet traffic, and monitoring.
- Use two-factor authentication, and require it to be used by all relevant users and subcontractors.
- In cases where passwords are used, pick unique passwords and consider password managers.
- Review the security of existing accounts with registrars and other providers, and make sure you have multiple notifications in place for when a domain you own is about to expire.
- Monitor the issuance of new SSL certificates for your domains by watching, for example, Certificate Transparency logs.
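The last recommendation is easy to automate. Below is a minimal sketch of Certificate Transparency monitoring against crt.sh's public JSON endpoint; the URL shape matches crt.sh's real query interface, but the persistence of already-seen entry ids and any alerting are placeholders you would supply in a real deployment:

```python
import json
import urllib.request

def crtsh_url(domain: str) -> str:
    # %25 is the URL-encoded "%" wildcard: match the domain and all subdomains.
    return f"https://crt.sh/?q=%25.{domain}&output=json"

def fetch_entries(domain: str):
    """Fetch current CT log entries for a domain (live network call)."""
    with urllib.request.urlopen(crtsh_url(domain)) as response:
        return json.load(response)

def new_certificates(entries, known_ids):
    """Return entries whose crt.sh ids have not been seen before."""
    return [entry for entry in entries if entry["id"] not in known_ids]
```

A cron job that calls `fetch_entries`, diffs with `new_certificates`, and mails anything unexpected gives you early warning that someone has obtained a certificate for your domain.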
Cory Doctorow’s sunglasses are seemingly ordinary. But they are far from it when seen on security footage, where his face is transformed into a glowing white orb.
At his local credit union, bemused tellers spot the curious sight on nearby monitors and sometimes ask, “What’s going on with your head?” said Doctorow, chuckling.
The frames of his sunglasses, from Chicago-based eyewear line Reflectacles, are made of a material that reflects the infrared light found in surveillance cameras. They represent a fringe movement of privacy advocates experimenting with clothes, ornate makeup and accessories as a defense against some surveillance technologies.
The motivation to seek out antidotes to an over-powerful force has political and symbolic significance for Doctorow, an L.A.-based science-fiction author and privacy advocate. His father’s family fled the Soviet Union, which used surveillance to control the masses.
“We are entirely too sanguine about the idea that surveillance technologies will be built by people we agree with for goals we are happy to support,” he said. “For this technology to be developed and for there to be no countermeasures is a road map to tyranny.”
The lenses of normal sunglasses become clear under any form of infrared light, but the special wavelength absorbers baked into Urban’s glasses soak up the light and turn them black.
Reflectacles’ absorbent quality makes them effective at blocking Face ID on the newest iPhones. While Urban said the glasses aren’t designed to evade facial recognition that doesn’t use infrared light, they will lessen the chance of a positive match in such systems.
L.A.-based cybersecurity analyst Kate Rose created her own fashion line called Adversarial Fashion to obfuscate automatic license-plate readers. A clothes maker on the side, she imprinted stock images of out-of-use and fake license plates onto fabric to create shirts and dresses. When the wearers walk past the AI systems at traffic stops, the machines read the images on the clothes as plates, in turn feeding junk data into the technology.
vas pup • January 20, 2020 5:07 PM
From the article - looks like the weakest link:
"Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested."
Photos from government databases are uploaded to private servers with untested security. Just speechless.
The New York Times has a long story about Clearview AI, a small company that scrapes identified photos of people from pretty much everywhere, and then uses unstated magical AI technology to identify people in other photos.
His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system -- whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites -- goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.
[...]
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.
And it's not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.
Occasionally even those you usually disagree with are right about something. Marvel at the moment.
Attack demoed less than 24 hours after disclosure of bug breaking certificate validation.
Yesterday's Microsoft Windows patches included a fix for a critical vulnerability in the system's crypto library.
A spoofing vulnerability exists in the way Windows CryptoAPI (Crypt32.dll) validates Elliptic Curve Cryptography (ECC) certificates.
An attacker could exploit the vulnerability by using a spoofed code-signing certificate to sign a malicious executable, making it appear the file was from a trusted, legitimate source. The user would have no way of knowing the file was malicious, because the digital signature would appear to be from a trusted provider.
A successful exploit could also allow the attacker to conduct man-in-the-middle attacks and decrypt confidential information on user connections to the affected software.
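To see why accepting attacker-supplied group parameters is fatal, here is a toy analogy in a multiplicative group mod p (deliberately not real elliptic-curve math, and not the actual Crypt32 code path): if the verifier lets the signer choose the generator, the signer can present the trusted public key itself as the "generator" with private key 1, matching the trusted key without ever learning the real private key:

```python
# Toy parameters: a multiplicative group mod a Mersenne prime, standing in
# for the elliptic-curve group. Every value here is illustrative.
P = 2**127 - 1   # prime modulus
G = 5            # the "named" generator everyone is supposed to use

def public_key(private: int, generator: int = G) -> int:
    return pow(generator, private, P)

ca_private = 123456789             # known only to the legitimate CA
ca_public = public_key(ca_private)

# The attacker never learns ca_private. Instead they ship their own
# "parameters": generator := ca_public, private key := 1. A verifier that
# checks only the public key value, and not that the generator is the
# standard named one, sees a perfect match.
forged_generator = ca_public
forged_private = 1
assert public_key(forged_private, forged_generator) == ca_public
```

The fix, correspondingly, is to reject certificates whose curve parameters are specified explicitly rather than by a standard named curve.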
That's really bad, and you should all patch your system right now, before you finish reading this blog post.
This is a zero-day vulnerability, meaning that it was not detected in the wild before the patch was released. It was discovered by security researchers. Interestingly, it was discovered by NSA security researchers, and the NSA security advisory gives a lot more information about it than the Microsoft advisory does.
Exploitation of the vulnerability allows attackers to defeat trusted network connections and deliver executable code while appearing as legitimately trusted entities. Examples where validation of trust may be impacted include:
- HTTPS connections
- Signed files and emails
- Signed executable code launched as user-mode processes
Early yesterday morning, NSA's Cybersecurity Directorate head Anne Neuberger hosted a media call where she talked about the vulnerability and -- to my shock -- took questions from the attendees. According to her, the NSA discovered this vulnerability as part of its security research. (If it found it in some other nation's cyberweapons stash -- my personal favorite theory -- she declined to say.) She did not answer when asked how long ago the NSA discovered the vulnerability. She said that this is not the first time it sent Microsoft a vulnerability to fix, but it was the first time it has publicly taken credit for the discovery. The reason is that it is trying to rebuild trust with the security community, and this disclosure is a result of its new initiative to share findings more quickly and more often.
Barring any other information, I would take the NSA at its word here. So, good for it.
None of us who favor strong encryption is saying that child exploitation isn't a serious crime, or a worldwide problem. We're not saying that about kidnapping, international drug cartels, money laundering, or terrorism. We are saying three things. One, that strong encryption is necessary for personal and national security. Two, that weakening encryption does more harm than good. And three, law enforcement has other avenues for criminal investigation than eavesdropping on communications and stored devices. This is one example, where people unraveled a dark-web website and arrested hundreds by analyzing Bitcoin transactions. This is another, where police arrested members of a WhatsApp group.
So let's have reasoned policy debates about encryption -- debates that are informed by technology. And let's stop it with the scare stories.
EDITED TO ADD (12/13): The DoD just said that strong encryption is essential for national security.
All DoD issued unclassified mobile devices are required to be password protected using strong passwords. The Department also requires that data-in-transit, on DoD issued mobile devices, be encrypted (e.g. VPN) to protect DoD information and resources. The importance of strong encryption and VPNs for our mobile workforce is imperative. Last October, the Department outlined its layered cybersecurity approach to protect DoD information and resources, including service men and women, when using mobile communications capabilities.
[...]
As the use of mobile devices continues to expand, it is imperative that innovative security techniques, such as advanced encryption algorithms, are constantly maintained and improved to protect DoD information and resources. The Department believes maintaining a domestic climate for state of the art security and encryption is critical to the protection of our national security.
By default, Chrome will now let users know if their credentials are public.
Clive Robinson • October 14, 2019 5:20 AM
@ ,
With regard to the Wired article, you will find,
As dangerous as their invention sounds for the future of computer security, the Michigan researchers insist that their intention is to prevent such undetectable hardware backdoors, not to enable them. They say it's very possible, in fact, that governments around the world may have already thought of their analog attack method.
Only it's not "governments": it was people on this blog quite some years back. Have a search for @RobertT and "capacitance"; he described some much cleverer variants, with @Nick P and myself.
But also you will find in the article,
"Detecting this with current techniques would be very, very challenging if not impossible," says Todd Austin, one of the computer science professors at the University of Michigan who led the research. "It's a needle in a mountain-sized haystack." Or as Google engineer Yonatan Zunger wrote after reading the paper: "This is the most demonically clever computer security attack I've seen in years."
Actually it's not that clever when you think about it: any student who has ever played with an NE555 timer as a retriggerable monostable, used in many circuits, will have used a capacitor as an integrator to trigger a level change in a logic circuit. It's the repurposing of an old idea in a new way that makes them think "It's bleeding obvious... Why didn't I think of that"; it's a sign that the idea has come of age in a broader market place.
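The capacitor-as-integrator trigger being described can be sketched as a toy software model (all constants are illustrative, not taken from the A2 paper): rare toggles of a victim wire each add a little charge, the charge leaks away otherwise, and the payload fires once a threshold is crossed:

```python
# Toy model of an analog hardware-trojan trigger: charge accumulates on a
# capacitor each time a rarely-toggled wire flips, leaks slowly otherwise,
# and a payload flip-flop fires when the charge crosses a threshold.
CHARGE_PER_TOGGLE = 0.1   # fraction of full charge added per trigger toggle
LEAK_PER_CYCLE = 0.01     # fraction lost each clock cycle without a toggle
THRESHOLD = 0.5           # charge level at which the backdoor activates

def cycles_to_trigger(toggle_pattern):
    """Return the cycle index at which charge crosses THRESHOLD, or None."""
    charge = 0.0
    for i, toggled in enumerate(toggle_pattern):
        if toggled:
            charge = min(1.0, charge + CHARGE_PER_TOGGLE)
        else:
            charge = max(0.0, charge - LEAK_PER_CYCLE)
        if charge >= THRESHOLD:
            return i
    return None
```

The leak term is what makes the trigger stealthy: normal, occasional toggles never accumulate enough charge, while the attacker's deliberate rapid sequence does.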
But Todd Austin is wrong about detecting it, it is actually quite easy to spot, and I've said as much and described in some detail how to do it on this blog and other places some years ago now...
The first thing to keep in mind is that in the French language the same word means both safety and security. Thus the French way of thinking does not separate the ideas into unrelated domains as much as does the English-language way of thinking[1].
The big problem with computer security is we "build pyramids not boats". Our thinking is skewed to believe that you can only build on secure foundations. It's not true: boats have got along fine for millennia without any foundations, and the water they float on is in no way stable or secure. A modern side view of this was Elon Musk and his landing barge for rockets; at least in his case he could point at aircraft carriers to show he was not mad.
What if we decide not to have our computer design process be one of castles on bedrock, but warships on water? The English Tudor king Henry VIII found he could build a navy, and thus set England on a course to become the world's foremost maritime nation and build an empire that covered the globe.
That is, there are great possibilities in thinking of mobile castles. Leonardo da Vinci drew up designs for such things, but his idea did not really become part of military thinking until WWI, with the invention of the armoured car that became the tank. Which again opened up significant possibilities and changed the face of land-based warfare for ever.
Ask yourself: are there ways we could take a mechanism thought of as being for safety and use it for security?
The answer is to look in the area of reliability. Unreliable systems are either "not dependable" or "dependable for a limited time". New York Telephone realised that if you could monitor an unreliable system, detect when it was going wrong, and switch it out rapidly for a working system, then you could keep a circuit in operation whilst you replaced the defective component. Thus the idea of fault-tolerant systems began to be used.
The problem was detecting when a unit was starting to fail; eventually this gave rise to the idea of "voting systems", which NASA did not invent but certainly made famous.
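The voting idea can be sketched in a few lines (a generic majority voter, my own illustration rather than NASA's or New York Telephone's design): run the same computation on several independent units, accept the majority answer, and flag any dissenting unit so it can be switched out for repair:

```python
# Generic majority voter for redundant units: the winner is whatever output
# most units agree on; units that disagree are reported as suspect.
from collections import Counter

def majority_vote(outputs):
    """Return (winner, dissenting unit indices) for a list of unit outputs."""
    winner, count = Counter(outputs).most_common(1)[0]
    if count <= len(outputs) // 2:
        raise RuntimeError("no majority: too many simultaneous faults")
    dissenters = [i for i, out in enumerate(outputs) if out != winner]
    return winner, dissenters
```

Seen through the security lens argued above, a "dissenter" might be a failing component or a compromised one; either way it gets switched out while the system keeps running.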
Some years ago now I realised that redundant fault-tolerant systems were in fact "boats" from the security aspect, and that "fault" also covered malware. That is, an idea for safety works just as well for security, to which some might rightly say "but of course, why would you think not".
It became a small but essential part of my "Castles-v-Prisons" idea which you can search for on this blog to find conversations about it.
The problem thus has a known solution...
Thus the question now is: who takes on the Sisyphean task of pushing the idea over the groupthink mental-entropy hump?
As I've noted over the years, a great many ideas are discussed on this blog, and solutions posed, several years prior to both industry and academia even realising they should be looking at them. As for Governments: you hear that squeaky noise way, way behind? That's the wheel they are too busy greasing with pork fat rather than replacing. They are still doing things the way their Grandpapy did, because in their conservative view "What was good enough for Grandpa, is good enough for me" (mind you, Grandpapy was pretty quick at grabbing brown envelopes behind his back ;-)
[1] A point I've made before is that the primary language we learn when very young, before we are two, forms the way we think. There is evidence of this with "tone deafness" and language: in languages, such as some Asian ones, that depend on pitch to convey information, speakers are considerably more likely to be "pitch perfect" across the population. It's why I think the decreasing number of native languages is actually harming the world, by reducing the number of different ways people see and think about it.
Alyer Babtu • October 13, 2019 9:49 PM
Not as elegant as
https://www.wired.com/2016/06/demonically-clever-backdoor-hides-inside-computer-chip/
http://static1.1.sqspcdn.com/static/f/543048/26931843/1464016046717/A2_SP_2016.pdf
but still, fit for purpose.
Ron Wyden of Oregon, Chris Van Hollen of Maryland, Chris Coons of Delaware, Gary Peters of Michigan, and Ed Markey of Massachusetts sent a followup letter to Amazon emphasizing that they are worried about the protection of Ring users’ data and are “concerned about media reports suggesting a lack of respect of the privacy of Ring customers.” Specifically, the letter details concerns that Ring employees in Ukraine were given “virtually unfettered access” to every piece of footage taken from every Ring camera around the world.
Ring’s business model has been a particularly challenging privacy concern to address. Because Ring sells itself as a personal home security system, but has the ability to function like a large-scale CCTV network accessible to police, Amazon can fall back on the idea that consumers chose to build this surveillance network, one porch at a time. When EFF brought its concerns about police partnerships and the harms of ubiquitous surveillance directly to Ring, our concerns were dismissed.
Smith’s art is a stunning achievement, featuring layers and layers of intricate code that must have taken untold hours to lay out and piece together by hand. But there’s a catch to this kind of art creation — and because we live in a world full of choice when it comes to computers, it’s a big one.
Browser variance turns Francine into modern art — and nightmare fuel.
As Smith was quick to note in her GitHub repository for the piece, this illustration was designed specifically with Chrome tools, meaning it was made to be viewed in the Chrome browser. As Vox engineer David Zhou soon learned, trying to view it with other browsers — in this case, an older iteration of Opera — produced, er, a slightly different image.
Fortunately for Francine, most internet users are currently viewing her on newer versions of Chrome, as she was intended to be seen. But if you’re one of the roughly 42 percent of users out there who are still clinging to an outdated version of your browser of choice, let this be a lesson to you: software updates don’t just keep you safe from viruses, malware, and the ridicule of your peers. They can, quite literally, change your perspective.
Pure CSS Font
CSS/HTML Generator
For private, SEO-hidden, CAPTCHA-friendly unselectable text.
Deter plagiarism and spambots!
Hospitals that have been hit by a data breach or ransomware attack can expect to see an increase in the death rate among heart patients in the following months or years because of cybersecurity remediation efforts, a new study posits. Health industry experts say the findings should prompt a larger review of how security — or the lack thereof — may be impacting patient outcomes.
Tom's Guide writes about homebrew TEMPEST receivers:
Today, dirt-cheap technology and free software make it possible for ordinary citizens to run their own Tempest programs and listen to what their own -- and their neighbors' -- electronic devices are doing.
Elliott, a researcher at Boston-based security company Veracode, showed that an inexpensive USB dongle TV tuner costing about $10 can pick up a broad range of signals, which can be "tuned" and interpreted by software-defined radio (SDR) applications running on a laptop computer.
Defenestrar Ars Scholae Palatinae et Subscriptor
Oct 18, 2019 2:51 PM
Cybersecurity meant that nobody would accidentally put an 8" floppy in a pocket and drop it in the grocery store parking lot.
Five years ago, a CBS 60 Minutes report publicized a bit of technology trivia many in the defense community were aware of: the fact that eight-inch floppy disks were still used to store data critical to operating the Air Force's intercontinental ballistic missile command, control, and communications network. The system, once called the Strategic Air Command Digital Network (SACDIN), relied on IBM Series/1 computers installed by the Air Force at Minuteman II missile sites in the 1960s and 1970s.
Those floppy disks have now been retired. Despite the contention by the Air Force at the time of the 60 Minutes report that the archaic hardware offered a cybersecurity advantage, the service has completed an upgrade to what is now known as the Strategic Automated Command and Control System (SACCS), as Defense News reports. SACCS is an upgrade that swaps the floppy disk system for what Lt. Col. Jason Rossi, commander of the Air Force’s 595th Strategic Communications Squadron, described as a “highly secure solid state digital storage solution.” The floppy drives were fully retired in June.
But the IBM Series/1 computers remain, in part because of their reliability and security.
While SACCS is reliable, it is obviously expensive and difficult to maintain when it fails. There are no replacement parts available, so all components must be repaired—a task that may require hours manipulating parts under a microscope. Civilian Air Force employees with years of experience in electronics repairs handle the majority of the work. But the code that runs the system is still written by enlisted Air Force programmers.