Veilid (pronounced Vay-Lid, from 'Valid and Veiled Identification')
Veilid allows anyone to build a distributed, private app. Veilid gives users the privacy to opt out of data collection and online tracking. Veilid is being built with user experience, privacy, and safety as our top priorities. It is open source and available to everyone to use and build upon.
Veilid goes above and beyond existing privacy technologies and has the potential to completely change the way people use the Internet. Veilid has no profit motive, which puts us in a unique position to promote ideals without the compromise of capitalism.
johnwalker
When I wrote “The Digital Imprimatur” almost twenty years ago (published on 2003-09-13), I was motivated by the push for mandated digital rights management with hardware enforcement, attacks on anonymity on the Internet, the ability to track individuals’ use of the Internet, and mandated back-doors that defeated encryption and other means of preserving privacy against government and corporate surveillance. //
This time it’s called “Web Environment Integrity” (WEI), and it comes, not from Microsoft but from the company that traded in their original slogan of “Don’t be evil” for “What the Hell, evil pays a lot better!”—Google.
So, what is WEI? Let’s start with a popular overview from Ars Technica.
App privacy policies openly contradict the far more visible "nutrition labels." //
Mozilla rates a few Google apps like Gmail as "needs improvement," but that's missing the forest for the trees. The report doesn't dive into this, but for Android, Google likes to do privacy sleight-of-hand and center the discussion around the idea of "app privacy," when "OS privacy"—privacy from Google—should probably be more of a concern. Google and your device manufacturer both have system-level access to the OS that exists outside the app security model, so they can basically do whatever they want on your phone, including collecting all your data. //
The same "privileged permissions" model also applies to preinstalled apps, which is part of the reason Facebook works so hard to be preinstalled on most Android phones—more permissions means better spying. It would be nice if the Play Store labels were accurate, too, but nobody wants to talk about the entire OS.
Much is known about how the federal government leverages location data by serving warrants to major tech companies like Google or Facebook to investigate crime in America. However, much less is known about how location data influences state and local law enforcement investigations. It turns out that's because many local police agencies intentionally avoid mentioning the under-the-radar tech they use—sometimes without warrants—to monitor private citizens.
As one Maryland-based sergeant wrote in a department email, touting the benefit of "no court paperwork" before purchasing the software, "The success lies in the secrecy."
This week, an investigation from the Electronic Frontier Foundation and Associated Press—supported by the Pulitzer Center for Crisis Reporting—has made public what could be considered local police's best-kept secret. Their reporting revealed the potentially extreme extent of data surveillance of ordinary people being tracked and made vulnerable just for moving about small-town America.
For those who want to lock things down without going offline and moving to a bunker in New Zealand, the first step is to assess the following things:
- What in my digital life can give away critical information tied to my finances, privacy, and safety?
- What can I do to minimize those risks?
- How much risk reduction effort is proportional to the risks I face?
- How much effort can I actually afford?
First, if you're not at home, you should always lock your device before you put it down, no exceptions. Your phone should be locked with the most secure method you're comfortable with—as long as it's not a 4-digit PIN, which isn't exactly useless but is definitely adjacent to uselessness. For better security, use a password or a passcode that's at least six characters long—and preferably longer. //
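To put a number on why a 4-digit PIN sits next to uselessness, compare brute-force search spaces. This is a back-of-the-envelope sketch; real devices throttle or wipe after repeated failures, so these are upper bounds on attacker effort, not time-to-crack figures:

```python
import math

def guess_space(alphabet_size: int, length: int) -> int:
    """Total number of possible codes an attacker must search."""
    return alphabet_size ** length

pin4 = guess_space(10, 4)      # 4-digit PIN: only 10,000 possibilities
alnum6 = guess_space(62, 6)    # 6 chars of upper/lower/digits: ~5.7e10

print(f"4-digit PIN:     {pin4:,} codes")
print(f"6-char passcode: {alnum6:,} codes "
      f"(~2^{math.log2(alnum6):.0f}, about {alnum6 // pin4:,}x larger)")
```

Even a six-character alphanumeric passcode multiplies the search space by millions; a longer passphrase does far better still.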
Second, set your device to require a password immediately after it’s been locked. //
Also, regularly back up your phone. //
[Don't install bad apps -- consider carefully where it comes from, what it does, if you really need it.] //
Consider turning off Wi-Fi when you’re away from home. Your device may otherwise be constantly polling for the network SSIDs in its history to reconnect automatically or to connect to anything that looks like a carrier’s Wi-Fi network. When this happens, your device gives away information about networks you’ve seen and might allow a hostile network access point to connect. Also, your phone's Wi-Fi MAC address could be used to fingerprint your device and track it. //
The same goes for Bluetooth. If your device has Bluetooth turned on, it’s broadcasting information that could identify it—and you. //
Along those same lines, name your device anything other than [Your Name]’s iPhone. Your phone's network name is broadcast all around you, and it's like holding up a beacon saying "Hello, my name is..." //
[Malware protection on your PC] Even allowing Windows Defender to run in the background provides a significant bump in protection over nothing, and disabling it without a very good reason is a very bad idea. //
[Keep your OS & software up to date -- install updates as soon as they are available ]
[Turn on Windows Firewall when in public]
In the event that your physical device is compromised, you can minimize damage by caring for your actual data. To prevent all types of data loss, back up your data—in encrypted form and offline (either locally or in the cloud) so that ransomware doesn’t get the backups, too. Keep multiple backups just in case, because if your latest backup contains the compromised or encrypted files, it's useless.
And don't just back up your data, use full-disk encryption. Period. It's a one-time setting to activate and there are no excuses for not using it. Full-disk encryption transparently encrypts your hard drive so data can’t be read off of it without your credentials. //
Wi-Fi access points and routers that support firmware or software updates add another layer to the security of your devices while web browsing. If you have an older Wi-Fi access point that you can’t update, toss it. //
And, finally, use a password manager. An easy-to-guess password renders all other security efforts moot. Whether it’s a password built into your web browser of choice or a standalone program, use one. Chrome, Firefox, and Safari all have reasonably secure password managers, and you can replicate passwords for web accounts across devices. If you don't like the idea of a password manager because you're one of those folks who just uses letmein123! as your password everywhere, you need to decide if the convenience is worth the price you'll eventually pay when you're compromised. (Spoiler alert: it's not.)
You can do a number of things to reduce the risks posed by data breaches and identity fraud. The first is to avoid accidentally exposing the credentials you use with accounts. A data breach of one service provider is especially dangerous if you haven’t followed best practices in how you set up credentials. These are some best practices to consider:
- Use a password manager.
- When possible, use two-factor or multi-factor authentication ("2FA" or "MFA"). This combines a password with a second, temporary code or acknowledgment from someplace other than your web browser or app session. Two-factor authentication ensures that someone who steals your password can’t use it to log in. If at all possible, don’t use SMS-based 2FA, because this is more prone to interception.
- Set up a separate email address or email alias for your high-value web accounts so that all email regarding them is segmented off from your usual email address. This way, if your primary email address is caught up in a data leak, attackers won’t be able to use that address to try to log in to accounts you care about.
- If you're a US resident, make sure to claim an account for your Social Security number from the IRS for tax information access and other purposes.
- Consider locking your credit reports to reduce identity theft risks.
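The temporary codes mentioned above are usually TOTP (RFC 6238): a six-digit code derived from a shared secret and the current 30-second time window, so a stolen password alone isn't enough to log in. A minimal sketch using only the standard library (a real app should use a vetted library such as pyotp rather than rolling its own):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6, period=30):
    """RFC 6238 TOTP: HMAC-SHA1 of the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890", T = 59s
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59, digits=8))  # 94287082
```

Because the code changes every 30 seconds and never travels over SMS, an attacker who phishes your password still faces a moving target.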
Fourteen of the world's leading computer security and cryptography experts have released a paper arguing against the use of client-side scanning because it creates security and privacy risks.
Client-side scanning (CSS, not to be confused with Cascading Style Sheets) involves analyzing data on a mobile device or personal computer prior to the application of encryption for secure network transit or remote storage. CSS in theory provides a way to look for unlawful content while also allowing data to be protected off-device.
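To make the mechanism concrete, here is a deliberately simplified sketch of client-side scanning. Real deployments (such as Apple's proposal) match perceptual hashes like NeuralHash against an opaque database, not cryptographic digests, so this toy version actually understates how fuzzy, and how abusable, the matching is:

```python
import hashlib

# Hypothetical on-device blocklist of content identifiers. In a real CSS
# system this list is opaque to the user -- which is the paper's point:
# whoever controls the list controls what gets reported.
BLOCKLIST = {hashlib.sha256(b"targeted-content").hexdigest()}

def send_message(payload: bytes) -> str:
    """The scan runs BEFORE encryption, on the plaintext, on your device."""
    if hashlib.sha256(payload).hexdigest() in BLOCKLIST:
        return "reported"          # forwarded for review, pre-encryption
    return "encrypted-and-sent"    # E2E encryption happens only after the scan

print(send_message(b"targeted-content"))   # reported
print(send_message(b"family photo"))       # encrypted-and-sent
```

Note that the encryption itself is never broken: the scan simply happens before the data is ever protected, which is why the paper treats CSS as bulk surveillance rather than "lawful access."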
Apple in August proposed a CSS system by which it would analyze photos destined for iCloud backup on customers' devices to look for child sexual abuse material (CSAM), only to backtrack in the face of objections from the security community and many advocacy organizations.
The paper [PDF], "Bugs in our Pockets: The Risks of Client-Side Scanning," elaborates on the concerns raised immediately following Apple's CSAM scanning announcement with an extensive analysis of the technology.
Penned by some of the most prominent computer science and cryptography professionals – Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Jon Callas, Whitfield Diffie, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Vanessa Teague, and Carmela Troncoso – the paper contends that CSS represents bulk surveillance that threatens free speech, democracy, security, and privacy.
"In this report, we argue that CSS neither guarantees efficacious crime prevention nor prevents surveillance," the paper says.
"Indeed, the effect is the opposite. CSS by its nature creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic. There are multiple ways in which client-side scanning can fail, can be evaded, and can be abused." //
But the paper notes that this approach depends on Apple being willing and able to enforce its policy, which might not survive insistence by nations that they can dictate policy within their borders.
"Apple has yielded to such pressures in the past, such as by moving the iCloud data of its Chinese users to three data centers under the control of a Chinese state-owned company, and by removing the 'Navalny' voting app from its Russian app store," the paper says.
And even if Apple were to show unprecedented spine by standing up to authorities demanding CSS access, nations like Russia and Belarus could collude, each submitting a list of supposed child-safety image identifiers that in fact point to political content, the paper posits.
"In summary, Apple has devoted a major engineering effort and employed top technical talent in an attempt to build a safe and secure CSS system, but it has still not produced a secure and trustworthy design," the paper says. //
CSS, the paper says, entails privacy risks in the form of "upgrades" that expand what content can be scanned and adversarial misuse.
And it poses security risks, such as deliberate efforts to get people reported by the system and software vulnerabilities. The authors conclude that CSS systems cannot be trustworthy or secure because of the way they're designed.
"The proposal to preemptively scan all user devices for targeted content is far more insidious than earlier proposals for key escrow and exceptional access," the paper says.
"Instead of having targeted capabilities such as to wiretap communications with a warrant and to perform forensics on seized devices, the agencies’ direction of travel is the bulk scanning of everyone’s private data, all the time, without warrant or suspicion. That crosses a red line. Is it prudent to deploy extremely powerful surveillance technology that could easily be extended to undermine basic freedoms?"
From the tech reporting sector, Google is taking blow after blow after blow. There are massive security vulnerabilities in what they offer you and me every day, and those privacy issues are going to start becoming enough of a problem to make the government look a little more closely at them.
So how do they keep the government from looking at them? They announce something that they know enough people in government will like and take credit for their “brave” stance and business policy. //
The latest tracking nightmare for Chrome users comes in two parts. First, Google has ignored security warnings and launched a new Chrome API to detect and report when you’re “idle,” i.e., not actively using your device. Apple warns that “this is an obvious privacy concern,” and Mozilla warns that it’s “too tempting an opportunity for surveillance.” //
They are also banking on Congress’s ongoing fascination with trying to regulate Facebook, hoping they can keep a low profile on all this privacy stuff and not have to worry about a Congressional investigation.
So that’s why they are choosing right now to go after so-called “climate deniers.” They are shifting the focus away from them at a time when it’s very easy to distract their users and Congress. But Google is going to find itself in increasing trouble before too long, and Congress had better start looking deeper into these security issues because Google is going to cause an insane amount of identity theft before too long.
Yesterday, independent newsroom ProPublica published a detailed piece examining the popular WhatsApp messaging platform's privacy claims. The service famously offers "end-to-end encryption," which most users interpret as meaning that Facebook, WhatsApp's owner since 2014, can neither read messages itself nor forward them to law enforcement.
This claim is contradicted by the simple fact that Facebook employs about 1,000 WhatsApp moderators whose entire job is—you guessed it—reviewing WhatsApp messages that have been flagged as "improper." //
The loophole in WhatsApp's end-to-end encryption is simple: The recipient of any WhatsApp message can flag it. Once flagged, the message is copied on the recipient's device and sent as a separate message to Facebook for review.
Messages are typically flagged—and reviewed—for the same reasons they would be on Facebook itself, including claims of fraud, spam, child porn, and other illegal activities. When a message recipient flags a WhatsApp message for review, that message is batched with the four most recent prior messages in that thread and then sent on to WhatsApp's review system as attachments to a ticket. //
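The flow ProPublica describes can be sketched as recipient-side code. The names here are illustrative (WhatsApp's client is closed-source), but the key point survives the simplification: no encryption is broken, because the reporting endpoint is the recipient's own device, which already holds the plaintext:

```python
from collections import deque

class RecipientThread:
    """Toy model of recipient-side flagging as described in the report."""

    def __init__(self):
        self.history = deque(maxlen=100)   # decrypted messages, on-device

    def receive(self, plaintext: str):
        self.history.append(plaintext)

    def flag(self, plaintext: str) -> list:
        """Bundle the flagged message with up to 4 prior messages and
        hand the plaintext batch to the moderation pipeline."""
        msgs = list(self.history)
        i = msgs.index(plaintext)
        return msgs[max(0, i - 4):i + 1]   # at most 5 messages total

t = RecipientThread()
for m in ["m1", "m2", "m3", "m4", "m5", "m6"]:
    t.receive(m)
print(t.flag("m6"))   # ['m2', 'm3', 'm4', 'm5', 'm6']
```

The batch of five plaintext messages is what reviewers see, exactly as if the recipient had forwarded them by hand.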
Although nothing indicates that Facebook currently collects user messages without manual intervention by the recipient, it's worth pointing out that there is no technical reason it could not do so. The security of "end-to-end" encryption depends on the endpoints themselves—and in the case of a mobile messaging application, that includes the application and its users.
An "end-to-end" encrypted messaging platform could choose to, for example, perform automated AI-based content scanning of all messages on a device, then forward automatically flagged messages to the platform's cloud for further action. Ultimately, privacy-focused users must rely on policies and platform trust as heavily as they do on technological bullet points. //
Although WhatsApp's "end-to-end" encryption of message contents can only be subverted by the sender or recipient devices themselves, a wealth of metadata associated with those messages is visible to Facebook—and to law enforcement authorities or others that Facebook decides to share it with—with no such caveat.
ProPublica found more than a dozen instances of the Department of Justice seeking WhatsApp metadata since 2017. These requests are known as "pen register orders," terminology dating from requests for connection metadata on landline telephone accounts. ProPublica correctly points out that this is an unknown fraction of the total requests in that time period, as many such orders, and their results, are sealed by the courts.
Virtual idols are the future of false religion. With 3 billion users and zero sense of sacred boundaries, Facebook is poised to lead this revolution. //
The Church of Facebook is set to capture the human soul in silicon. On July 25, the New York Times reported that since 2017 the social media giant has quietly cultivated exclusive partnerships with select religious communities. As always, money is involved.
While Facebook’s ultimate goals remain sealed behind non-disclosure agreements, the Times article does hint at things to come: “The company aims to become the virtual home for religious community, and wants churches, mosques, synagogues and others to embed their religious life into its platform, from hosting worship services and socializing more casually to soliciting money.”
“The partnerships reveal how Big Tech and religion are converging,” the Times continues. “Facebook is shaping the future of religious experience itself, as it has done for political and social life.”
In other words, ultra-mod spiritual centers will be blessed by mass data extraction, algorithmic polarization, and censorship of theological “misinformation.”
If Facebook’s history is any guide, every digital prayer will be scooped up and turned into a data point. Livestreamed preachers who deny the sanctity of LGBT lifestyles will be flagged and punished as “extremists.” Best of all, smartphone-addicted congregants can donate their last widow’s mite with the touch of a virtual button. Sounds like a little slice of heaven, doesn’t it?
Apple’s new solution will load all remote content using multiple proxy servers. This will do two things. First, images can be displayed in all your emails without any risk of tracking, since they’re served from Apple’s own servers. And second, marketeers will see a near-100% open rate for their emails, rendering that data useless.
So, unless you click a link from within an email, there should be no way to harvest any data from your email browsing activity. The only thing Apple will provide is a broad idea of the region you’re in, to ensure any context and language is right. //
The flight to privacy, fueled by Apple and others, is shining an awkward light on Google, Facebook and the data-driven digital marketing industry. “The everyday user is waking up to the importance of privacy,” security researcher Sean Wright says. “Anything that helps them keep control over their data is a step in the right direction, but privacy is not ‘one size fits all’. The power to choose how and where data is used should always be in the hands of the individual. As such, education and transparency so users can make informed decisions about which mail clients they should use is key.”
In this paper we review the principles of Zero Trust security, and the aspects of IoT that make proactive application of Zero Trust to IoT different than its application to the workforce. The key capabilities of Zero Trust for IoT are defined for companies with an IoT strategy, and next steps highlight Microsoft solutions enabling your journey of Zero Trust for IoT.
As organizations increasingly rely on automated systems for core business processes, the importance of improving the security posture of IoT is becoming business-critical. The Zero Trust model based on the principles of “never trust” and “always verify” can be applied to IoT to improve security posture.
Facebook is pushing a mysterious and aggressive ‘privacy update’ on WhatsApp users. Here’s why
Fri 14 May 2021 06.14 EDT //
Facebook, for its part, has spent the months since the announcement downplaying the significance of these privacy updates by arguing that its latest changes will only affect communication with business accounts (WhatsApp Business was launched in January 2018). In truth, the changes will allow Facebook to collect payment and transaction data from WhatsApp users, meaning Facebook will be able to gather even more data and target users with ever more personalized ads. WhatsApp has also removed a passage in its privacy policy about opting out of sharing data with Facebook. Facebook argues that this simply reflects what’s been in place since 2016. That is exactly the problem.
Today’s WhatsApp shares a great deal of information with Facebook it promised it wouldn’t, including account information, phone numbers, how often and how long people use WhatsApp, information about how they interact with other users, IP addresses, browser details, language, time zone, etc. This latest incursion has highlighted just how much data sharing has been going on for years without most users’ knowledge.
It seems that some higher powers in government think encryption is only used for nefarious purposes. //
Here, though, as my colleague Asha Barbaschow reported, are the public thoughts of the commission: If you use encryption, you're likely a crook. Which may surprise one or two iMessage and WhatsApp users. //
The commission's actual words about encrypted communication services were: "These platforms are used almost exclusively by SOC [serious and organised crime] groups and are developed specifically to obscure the identities of the involved criminal entities and enable avoidance of detection by law enforcement."
I do understand that there are many bad people in the world. I fear I have done business with some. A few may have even become my friends for a short while. //
But to suggest -- with a straight face and a public voice -- that encryption is almost exclusive to the evil seems like the sort of exaggeration that only a politician would embrace. Publicly.
tomferal sgtcornflake
It's too late for that which you can't advocate in this forum as the US Army is now Woke and will soon be even more Woke when they finish their Wrong Thing Purge of their own troops. Then it will be as our Founding Fathers feared.
When a government wishes to deprive its citizens of freedom, and reduce them to slavery, it generally makes use of a standing army.
- Luther Martin, Maryland delegate to the Constitutional Convention
A standing army is one of the greatest mischief that can possibly happen. Without standing armies liberty can never be in danger, nor with large ones safe.
- James Madison ("The Father of the United States Constitution", the Constitution that the Pentagon is now at war with)
There are instruments so dangerous to the rights of the nation, and which place them so totally at the mercy of their governors, that those governors, whether legislative or executive, should be restrained from keeping such instruments on foot, but in well-defined cases. Such an instrument is a standing army.
- Thomas Jefferson
In the days following the American Revolution, Jefferson became increasingly concerned about two tendencies he believed were intrinsically connected: the concentration of power in a centralized government, and the establishment of a standing army [especially a career class one].
I draw my line in the sand here. I am not your enemy just because I am breathing. I am not your enemy just because I am walking or eating. I do not deserve to be marked as a “problem” for not participating in a medical treatment. Whether or not I take a vaccine is my business. If you’ve taken your vaccine, it should be of no worry if I do or not. We don’t require people to prove they’ve gotten their flu vaccine every year. You get yours and then get on with your life, assuming you have inoculated yourself. What is the difference with a COVID vaccine?
It is about control. If you can be required to show your medical history to get on a bus or go to a concert, where does it stop?
A lot of people on the left think that “where does it stop” question is a bit hysterical or paranoid. These are the same people who scream all day long about how oppressive and racist the American government is and has been since our founding. Where does it stop? Ask Frederick Douglass or Harriet Tubman or any American slave or victim of Democrat Jim Crow laws during the Civil Rights Era. The government never stops. If we can’t trust them when it comes to minorities, why would we trust them when it comes to our private health information? //
I expect the government to protect my right to move about my country and participate in commerce freely, regardless of my medical history. This is my line. Here and farther. It is only a hop, skip, and a sew-on patch to complete fascism.
“Today I issued an executive order prohibiting the use of so-called COVID-19 vaccine passports,” Gov. DeSantis announced in a tweet linking to his executive order, which he says serves as a stop-gap measure while Florida’s legislature crafts legislation to codify the elements of his order.
The order prohibits Florida’s government from issuing “standardized documentation” certifying, and from publishing, a person’s vaccine status: //
Likewise, businesses are prohibited from denying access and service to patrons who do not provide proof they’ve been vaccinated: //
But it goes past the practical aspects of all this. Privacy issues are a big concern. Why should a business be able to demand protected, private medical information in order to sell you a product, especially when there are myriad reasons why you might be “safe” but still not have the vaccine? That’s not allowed on any other front, and it should not be allowed here. //
It is problematic to be requiring a vaccine that is still regarded as "experimental" and has only been given an "emergency use authorization", not full FDA approval. The initial study that was the foundation for the EUA still has 18-20 months to go before it is completed.
On top of that, requiring individuals to provide private health details to private citizens or businesses against their will is a violation of HIPAA law and regulations.
Are Americans supposed to accept that the government, private employers, and businesses such as airlines and Costco may stop you and demand that you show your COVID-19 papers? //
Currently, the COVID-19 vaccines are not U.S. Food and Drug Administration-approved, but authorized only for emergency use. As an investigational product, the statute governing emergency use authorizations provides that the recipient be advised of his or her option to accept or refuse administration of the vaccine, something a DC District court considered in a 2003 case that ruled against forcing soldiers to take the then-experimental anthrax vaccine //
We have a constitutionally implied fundamental right to privacy; various guarantees in the Bill of Rights “have penumbras, formed by emanations from those guarantees that help give them life and substance…creat[ing] “zones of privacy.” Griswold v. Connecticut, 381 U.S. 479, 484 (1965). The right to privacy encompasses everything from what we do in the privacy of our own bedrooms, to how we educate our children, and to what we choose to insert into our bodies.
The right to privacy also incorporates a right to be left alone, a concept dating as far back as an 1890 Harvard Law Review Article by Samuel Warren and Louis Brandeis, in which they noted that our laws are universal and eternal, “grow[ing] to meet the new demands of society.” As time passed, “[g]radually the scope of [] legal rights broadened; and now the right to life has come to mean the right to enjoy life, — the right to be let alone; the right to liberty secures the exercise of extensive civil privileges…” //
Even the World Health Organization resists a COVID-19 vaccination passport, saying that “At the present time, do not introduce requirements of proof of vaccination or immunity for international travel as a condition of entry as there are still critical unknowns regarding the efficacy of vaccination in reducing transmission and limited availability of vaccines. Proof of vaccination should not exempt international travelers from complying with other travel risk reduction measures.”
And what of HIPAA? The Health Insurance Portability and Accountability Act applies to “covered entities” such as health-care providers, health-care clearinghouses, or other organizations that would be involved in the transmission of protected health information, or PHI. Covered entities cannot share your information — but you can. //
COVID-19 passports are government surveillance on steroids. COVID-19 has seen courts sanction unprecedented abrogation of our fundamental civil rights, in some ways that may leave permanent scars. September 11, 2001 led to the PATRIOT Act, which in retrospect a growing number of constitutional conservatives now decry, for its expansions have been monstrous.
Americans should not so lightly give up their liberty for commercial expediency, elite orthodoxy, or even illusory notions of safety. If we do, what downward ratchet on liberty will the next crisis bring?
Employees worry that, should Signal fail to build policies and enforcement mechanisms to identify and remove bad actors, the fallout could bring more negative attention to encryption technologies from regulators at a time when their existence is threatened around the world. //
“The world needs products like Signal — but they also need Signal to be thoughtful,” said Gregg Bernstein, a former user researcher who left the organization this month over his concerns. “It’s not only that Signal doesn’t have these policies in place. But they’ve been resistant to even considering what a policy might look like.” //
For years, the company has faced complaints that its requirement that people use real phone numbers to create accounts raises privacy and security concerns. And so Signal has begun working on an alternative: letting people create unique usernames. But usernames (and display names, should the company add those, too) could enable people to impersonate others — a scenario the company has not developed a plan to address, despite completing much of the engineering work necessary for the project to launch. //
Marlinspike said, it was important to him that Signal not become neutered in the pursuit of a false neutrality between good and bad actors. Marginalized groups depend on secure private messaging to safely conduct everything from basic day-to-day communication to organized activism, he told me. Signal exists to improve that experience and make it accessible to more people, even if bad actors might also find it useful.
“I want us as an organization to be really careful about doing things that make Signal less effective for those sort of bad actors if it would also make Signal less effective for the types of actors that we want to support and encourage,” he said. “Because I think that the latter have an outsized risk profile. There’s an asymmetry there, where it could end up affecting them more dramatically.”
Telegram does offer “secret chats,” which provide end-to-end encryption, albeit only from one device to another, between just two people. It won’t sync across multiple devices and it won’t work for groups. Telegram says this is technically difficult to do, though both Signal and iMessage have managed to execute this level of encryption flawlessly. In reality, Telegram’s architecture is designed to provide fast and seamless multi-device access to a cloud repository—its priorities are different. //
The Signal settings you must change are the “registration lock” and the “screen lock.” Of these, the registration lock is the critical one. It means you’ll need your PIN to install your Signal account on a new phone, stopping anyone from hijacking your account. Even if someone does hijack your account, they won’t get access to your message history—just messages sent while they have access. This is similar to WhatsApp, where such hijacks have become a major issue. As Signal gains popularity, the risk will increase.