What are Canarytokens?
You'll be familiar with web bugs, the transparent images which track when someone opens an email. They work by embedding a unique URL in a page's image tag, and monitoring incoming GET requests.
Imagine doing that, but for file reads, database queries, process executions or patterns in log files. Canarytokens does all this and more, letting you implant traps in your production systems rather than setting up separate honeypots.
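To make the mechanism concrete, here is a minimal sketch of a web-bug-style HTTP canary, assuming a throwaway host, port, and token format of my own choosing; it illustrates the idea, not how canarytokens.org itself is implemented.

```python
# Minimal sketch of an HTTP "web bug" canary: serve a unique, unguessable URL
# and treat any GET against it as an alert. Host, port, and token format are
# illustrative placeholders, not anything from Canarytokens itself.
import logging
import secrets
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

TOKEN = secrets.token_hex(8)  # unique path component nobody should ever guess
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\xff\xff\xff"
         b"\x00\x00\x00!\xf9\x04\x01\x00\x00\x00\x00"
         b",\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")
# 1x1 transparent GIF; the exact bytes hardly matter, the GET itself is the signal.

class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == f"/{TOKEN}.gif":
            # Someone (or something) opened the bait: record who and from where.
            logging.info("CANARY FIRED: %s %s", self.client_address[0],
                         self.headers.get("User-Agent", "-"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.end_headers()
        self.wfile.write(PIXEL)

    def log_message(self, fmt, *args):
        pass  # keep stdout quiet; alerts go through the logger above

if __name__ == "__main__":
    print(f'Embed in bait: <img src="http://YOUR-HOST:8080/{TOKEN}.gif">')
    HTTPServer(("0.0.0.0", 8080), CanaryHandler).serve_forever()
```

Drop the generated image tag into a bait document, internal wiki page, or mailbox and forget about it; the first fetch tells you someone is reading things they should not be.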
Why should you use them?
Network breaches happen. From mega-corps to governments. From unsuspecting grandmas to well-known security pros. This is (kinda) excusable. What isn't excusable is only finding out about it months or years later.
Canarytokens are a free, quick, painless way to help defenders discover they've been breached (by having attackers announce themselves).
But despite their increasing complexity, a great many initial intrusions that lead to data theft could be nipped in the bud if more organizations started looking for the telltale signs of newly-arrived cybercriminals behaving like network tourists, Cisco says.
“One of the most important things to talk about here is that in each of the cases we’ve seen, the threat actors are taking the type of ‘first steps’ that someone who wants to understand (and control) your environment would take,” Cisco’s Hazel Burton wrote. “Examples we have observed include threat actors performing a ‘show config,’ ‘show interface,’ ‘show route,’ ‘show arp table’ and a ‘show CDP neighbor.’ All these actions give the attackers a picture of a router’s perspective of the network, and an understanding of what foothold they have.” //
When those stolen resources first get used by would-be data thieves, almost invariably the attackers will run a series of basic commands asking the local system to confirm exactly who and where they are on the victim’s network.
This fundamental reality about modern cyberattacks — that cybercriminals almost always orient themselves by “looking up” who and where they are upon entering a foreign network for the first time — forms the business model of an innovative security company called Thinkst, which gives away easy-to-use tripwires or “canaries” that can fire off an alert whenever all sorts of suspicious activity is witnessed.
“Many people have pointed out that there are a handful of commands that are overwhelmingly run by attackers on compromised hosts (and seldom ever by regular users/usage),” the Thinkst website explains. “Reliably alerting when a user on your code-sign server runs whoami.exe can mean the difference between catching a compromise in week-1 (before the attackers dig in) and learning about the attack on CNN.”
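The detection idea is simple enough to sketch. The toy below polls running processes with psutil and flags recon-style binaries such as whoami; the list of "suspicious" names is my own illustrative guess, and a real deployment would hook the operating system's auditing facilities (Sysmon/ETW, auditd) rather than polling like this.

```python
# Toy version of "alert when whoami runs on a sensitive host".
# Assumes psutil is installed; polling is for illustration only.
import time
import psutil

SUSPICIOUS = {"whoami", "whoami.exe", "ipconfig.exe", "nltest.exe", "net.exe"}
seen_pids = set()

while True:
    for proc in psutil.process_iter(attrs=["pid", "name", "username", "cmdline"]):
        info = proc.info
        name = (info["name"] or "").lower()
        if name in SUSPICIOUS and info["pid"] not in seen_pids:
            seen_pids.add(info["pid"])
            # In practice this would go to email, Slack, or a SIEM, not stdout.
            print(f"ALERT: {name} run by {info['username']}: {info['cmdline']}")
    time.sleep(1)
```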
These canaries — or “canary tokens” — are meant to be embedded inside regular files, acting much like a web beacon or web bug that tracks when someone opens an email. //
Thinkst operates alongside a burgeoning industry offering so-called “deception” or “honeypot” services — those designed to confuse, disrupt and entangle network intruders. But in an interview with KrebsOnSecurity, Thinkst founder and CEO Haroon Meer said most deception techniques involve some degree of hubris. //
One nice thing about canary tokens is that Thinkst gives them away for free. Head over to canarytokens.org and choose from a drop-down menu of available tokens.
Those scary warnings of juice jacking in airports and hotels? They’re mostly nonsense | Ars Technica
An FBI spokesperson told me this month’s tweet was “a standard PSA-type post—nothing new” and that it stemmed from the FCC warning. “This was a general reminder for the American public to stay safe and diligent, especially while traveling.” They added: “I am sorry I can’t give you an answer that is more newsy.” When I asked an FCC spokesperson what the basis was for the agency to update its warning five days later, they said it was prompted by the Denver FBI tweet.
What this means is that state and federal authorities and hundreds of news outlets—none of them with any expertise in cybersecurity—have generated a continuous feedback loop. This vicious cycle has done little more than scare the public into eschewing charging stations when there’s wide consensus among security professionals that there’s no reason for anyone other than high-asset targets of nation-states to do so. //
Finally, besides there being no universal script that will work on hundreds or even dozens of different devices, the customized scripts are non-trivial to write. They require a high skill level and a huge amount of trial-and-error troubleshooting.
None of this is to say that people shouldn’t bring their own charging cord and wall plug when they’re out of the home or office. That is a best practice, but it's wrong to characterize it as a required practice. //
The problem with the warnings coming out of the FCC and FBI is that they divert attention away from bigger security threats, such as weak passwords and the failure to install security updates. They create unneeded anxiety and inconvenience that run the risk of people simply giving up trying to be secure.
As security researcher Kenn White recently wrote of the warnings on Mastodon: “What's the end goal here? Convince people who are down to 2 percent battery while traveling to never use modern public infrastructure? Come on. There are 20 things that threaten muggle endpoint security, and this isn't among them.”
But of all the half-baked measures we’ve grown accustomed to, few have been sillier than the longstanding policy decreeing that pilots and flight attendants undergo the same X-ray and metal detector screening as passengers. In the United States, this went on for a full twelve years after September 11th, until finally a program was put in place allowing crewmembers to bypass the normal checkpoint. It’s a simple enough process that confirms the individual’s identity by matching airline and government-issue credentials against information stored in a database. That it took twelve years for this to happen is a national embarrassment, especially when you consider that tens of thousands of airport ground workers, from baggage loaders to cabin cleaners and mechanics, were exempt from screening all along. You read that correctly. An airline pilot who once flew bombers armed with nuclear weapons was not to be trusted and was marched through the metal detectors, but those who cater the galleys, sling the suitcases, and sweep out the aisles were able to saunter onto the tarmac unmolested.
If there has been a more ringing, let-me-get-this-straight scenario anywhere in the realm of airport security, I’d like to hear it. The TSA will point out that the privileges granted to tarmac workers have, from the outset, been contingent upon fingerprinting, a ten-year background investigation, and crosschecking against terror watch lists, and that workers are additionally subject to random physical checks. All true, but the background checks for pilots are no less thorough, so why were they excluded?
Nobody is implying that the hardworking caterers, baggage handlers, and the rest of the exempted employees out there are terrorists-in-waiting. Nevertheless, this was a double standard so titanically idiotic that it can hardly be believed. Yet there it was, for longer than a decade.
Why am I bringing this up if it’s no longer happening? Because it’s still making my head spin, for one. But also, more valuably, it gives us insight into the often dysfunctional thinking of the security state. And past as prologue: such wasteful procedures, embedded for so long, can only make us skeptical about the future. //
IN PERSPECTIVE: THE GOLDEN AGE OF AIR CRIMES
The 1960s through the 1990s were a sort of Golden Age of Air Crimes, rife with hijackings and bombings. Between 1968 and 1972, U.S. commercial aircraft were hijacked at a rate of — wait for it — nearly once per week. Hijackings were so routine that over a four-month period in 1968 there were three instances of multiple aircraft being commandeered on the same day. Later, in the five-year span between 1985 and 1989, there were no fewer than six major terrorist attacks against commercial planes or airports, including the Libyan-sponsored bombings of Pan Am 103 and UTA 772; the bombing of an Air India 747 that killed 329 people; and the saga of TWA flight 847.
Flight 847, headed from Athens to Rome in June 1985, was hijacked by Shiite militiamen armed with grenades and pistols. The purloined 727 then embarked on a remarkable, seventeen-day odyssey to Lebanon, Algeria, and back again. At one point passengers were removed, split into groups, and held captive in downtown Beirut. A U.S. Navy diver was murdered and dumped on the tarmac, and a photograph of TWA Captain John Testrake, his head out the cockpit window, collared by a gun-wielding terrorist, was broadcast worldwide and became an unforgettable icon of the siege.
I say “unforgettable,” but that’s the thing. How many Americans remember flight 847? It’s astonishing how short our memories are. And partly because they’re so short, we are easily frightened and manipulated. Imagine TWA 847 happening tomorrow. Imagine six terror attacks against planes in a five-year span. Imagine something like the Bojinka plot being pulled off successfully. The airline industry would be decimated, the populace frozen in fear. It would be a catastrophe of epic proportion—of wall-to-wall coverage and, dare I suggest, the summary surrender of important civil liberties. What is it about us, as a society, that has made us so unable to remember and unable to cope? //
The safest exchange points are easily accessible and in a well-lit, public place where transactions are visible to others nearby. Try to arrange a meeting time that is during daylight hours, and consider bringing a friend along — especially when dealing with high-value items like laptops and smart phones.
Safeexchangepoint.com also advises that police or merchants that host their own exchange locations generally won’t get involved in the details of your transaction unless specified otherwise, and that many police departments (but not all) are willing to check the serial number of an item for sale to make sure it’s not known to be stolen property. //
- Beware of common scams, like checks written for more than the amount of the deal, or forged “cashier’s checks” presented when the bank is closed.
- If you are given a cashier’s check, money order or other equivalent, call the bank — at the number listed online, not a number the buyer gives you — to verify the validity of the check.
SafeTradeSpots are designated locations at law enforcement offices where buyers and sellers can meet in public under surveillance to complete in-person transactions. There’s no charge for the service.
Every month, tens of millions of Americans use online marketplaces to buy and sell secondhand in their own neighborhoods.
With clear benefits for consumers, local businesses and the environment, this kind of ecommerce is among the most popular and fastest growing sectors online, facilitating tens of billions of dollars in local transactions annually.
We started SafeExchangePoint.com to support communities across the country that want to offer their own local, convenient and secure exchange points where buyers and sellers can complete transactions, and to help consumers find the nearest SafeExchangePoint.
SafeTrade is a program to help users of online classifieds trade safely.
The program launched in 2015 in response to the thousands of transactions initiated on Craigslist and other classified sites that have gone awry. At this writing, 105 killings have been linked to Craigslist.
There’s nothing wrong with Craigslist --- it’s a wonderful site and a wonderful service --- but safety for users should be paramount. SafeTrade is designed to help everyone stay safe.
SafeTrade is open to all police departments and law enforcement organizations to offer. There’s no charge, no trademark, no fee. We encourage all police departments and law enforcement agencies to join SafeTrade. (We’ll even supply the logo by email to any department that wants to use it, and we’re also supplying banners to a few departments as a “starter.”)
Republicans should stop playing defense when they haven’t done anything wrong lest they be consumed. Preempt the ineffective policy prescriptions of the left, which only serve to garner unrelated political goals, by introducing actual “common-sense” reforms. Have those reforms not target the Second Amendment, but rather focus on the direct protection of schools, enforcing existing laws, and getting the mentally ill the help they need. If Democrats want to say no to all that, let them go on the record.
Democrats like Biden don’t want to look at solutions that might actually do something to address such situations, such as making schools more secure with measures like single-point entry, as Andrew Pollack, whose daughter Meadow was killed in the Parkland shooting, advocates.
Then you might also want to try to enforce the laws that already exist and follow up on the warning signs that always seem to be there in such cases. It’s easy to shout “do something.” It’s harder to try to deal with difficult issues in a way that will truly have a result.
But Democrats have instead concentrated on things like “defund the police,” which has made things harder. We’ve also seen progressive prosecutors not holding criminals to account. We see a lack of consequences for a lot of criminal actions now. Even in the case of Hunter Biden, gun laws weren’t enforced when he allegedly failed to answer truthfully about his past drug use on a firearm purchase application.
What we should be looking to stop is criminal action, not trying to once again further demonize guns, and once again not truly addressing the issues.
Most aspects of American life today are more secure than the average elementary school. It takes more to walk into most concerts than it does to enter most schools. Is that OK with you? If we can have measures in place to protect attendees at sporting events, surely we can do the same for our children.
I hope that this time we can actually focus on getting something done, instead of immediately retreating to our partisan corners and fighting with each other. If our leaders are truly interested in making our children safe, they will make sure that no school in the United States is a soft target. It won’t prevent all of these tragedies, but it will go a long way to preventing most of them.
Finally, we have to admit something that we rarely discuss in public life anymore. We will never truly and fully fix this problem until we turn our hearts back to God. I know that is offensive to some, but the truth is freedom without God is not freedom, but anarchy. Laws, while important, cannot fix the heart. As the Rev. Dr. Martin Luther King Jr said, “Man cannot save himself, for man is not the measure of all things and humanity is not God. Bound by the chains of his own sin and finiteness, man needs a Savior.”
How Apple, Google, and Microsoft will kill passwords and phishing in one stroke
You've heard for years that easier, more secure logins are imminent. That day is here.
Fourteen of the world's leading computer security and cryptography experts have released a paper arguing against the use of client-side scanning because it creates security and privacy risks.
Client-side scanning (CSS, not to be confused with Cascading Style Sheets) involves analyzing data on a mobile device or personal computer prior to the application of encryption for secure network transit or remote storage. CSS in theory provides a way to look for unlawful content while also allowing data to be protected off-device.
Apple in August proposed a CSS system by which it would analyze photos destined for iCloud backup on customers' devices to look for child sexual abuse material (CSAM), only to backtrack in the face of objections from the security community and many advocacy organizations.
The paper [PDF], "Bugs in our Pockets: The Risks of Client-Side Scanning," elaborates on the concerns raised immediately following Apple's CSAM scanning announcement with an extensive analysis of the technology.
Penned by some of the most prominent computer science and cryptography professionals – Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Jon Callas, Whitfield Diffie, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Vanessa Teague, and Carmela Troncoso – the paper contends that CSS represents bulk surveillance that threatens free speech, democracy, security, and privacy.
"In this report, we argue that CSS neither guarantees efficacious crime prevention nor prevents surveillance," the paper says.
"Indeed, the effect is the opposite. CSS by its nature creates serious security and privacy risks for all society while the assistance it can provide for law enforcement is at best problematic. There are multiple ways in which client-side scanning can fail, can be evaded, and can be abused." //
But the paper notes that this approach depends on Apple being willing and able to enforce its policy, which might not survive insistence by nations that they can dictate policy within their borders.
"Apple has yielded to such pressures in the past, such as by moving the iCloud data of its Chinese users to three data centers under the control of a Chinese state-owned company, and by removing the 'Navalny' voting app from its Russian app store," the paper says.
And even if Apple were to show unprecedented spine by standing up to authorities demanding CSS access, nations like Russia and Belarus could collude, each submitting a list of supposed child-safety image identifiers that in fact point to political content, the paper posits.
"In summary, Apple has devoted a major engineering effort and employed top technical talent in an attempt to build a safe and secure CSS system, but it has still not produced a secure and trustworthy design," the paper says. //
CSS, the paper says, entails privacy risks in the form of "upgrades" that expand what content can be scanned and adversarial misuse.
And it poses security risks, such as deliberate efforts to get people reported by the system and software vulnerabilities. The authors conclude that CSS systems cannot be trustworthy or secure because of the way they're designed.
"The proposal to preemptively scan all user devices for targeted content is far more insidious than earlier proposals for key escrow and exceptional access," the paper says.
"Instead of having targeted capabilities such as to wiretap communications with a warrant and to perform forensics on seized devices, the agencies’ direction of travel is the bulk scanning of everyone’s private data, all the time, without warrant or suspicion. That crosses a red line. Is it prudent to deploy extremely powerful surveillance technology that could easily be extended to undermine basic freedoms?"
Legally, the Protec Elite and Ruby Exclusive key profiles have the same level of key control: no dealer other than the dealer who originally issued the keys (or the Factory) can cut more keys. Technically, with the standard Protec Elite key profile, all Abloy Protec/Protec2 dealers in the US have access to this key blank. This is a slight security risk, as it means they could duplicate a Protec/Protec2 Elite key if they wanted to (but they would break their contract with Abloy in doing so, and we have yet to hear of a dealer doing so). Our Ruby Exclusive key profile is a key blank that is only issued to us, which removes the option for another Abloy dealer to duplicate your Ruby key, as they do not have access to the key blanks. A Protec Elite key will not even fit in a Ruby Exclusive lock. The Ruby Exclusive offers a very secure level of key control, giving peace of mind to even the most worried lock owner.
myliit • June 5, 2020 6:30 PM
Our host in the making. From the OP.
“One of us (Bruce) remembers that as a child he once brute-forced a combination padlock in his house. A four-digit lock’s 10,000 possible combinations might be enough to keep out a burglar, but fail against a child with unlimited access and nothing better to do that day.”
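The arithmetic behind that anecdote is worth spelling out; the seconds-per-attempt figure below is a guess on my part, not something from the post.

```python
# Back-of-envelope: exhausting a 4-digit combination lock by hand.
combos = 10 ** 4            # 0000 through 9999
seconds_per_try = 5         # assumed pace for small, patient fingers
worst = combos * seconds_per_try / 3600    # hours to try every combination
expected = worst / 2                       # on average you hit it halfway through
print(f"worst case: {worst:.1f} h, expected: {expected:.1f} h")
# worst case: 13.9 h, expected: 6.9 h; a few rainy afternoons for a bored kid
```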
Clive Robinson • June 6, 2020 7:00 AM
https://www.schneier.com/blog/archives/2020/06/new_research_pr.html#c6812062
@ myliit, Bruce, ALL,
"A four-digit lock’s 10,000 possible combinations might be enough to keep out a burglar, but fail against a child with unlimited access and nothing better to do that day."
They used to say that,
Necessity is the mother of invention.
However "Curiosity" is the fundamental reason we learn about our world.
I remember learning not only to undo combination locks by "feel" at an early age, but also how to pick simple bike locks and desk/cupboard locks with home-made skeleton keys and, later, picks. At some point I also taught myself how to do what is called in the profession "impressioning".
My parents used to tell other adults, as a cautionary tale about "curiosity", of certain bad habits I had when younger than four, that I don't remember. Apparently my little fingers had learnt some technique for "worrying" nuts and bolts, such that given time I could undo them without the need of spanners etc., and amongst other things I had taken the bolts out of a set of wooden step ladders, much to my father's annoyance when they fell apart on him one day.
However I think he was only briefly annoyed, because unlike my mother he actively encouraged my curiosity and tinkering. It occasionally went wrong, like when I chopped the corner of my index finger off with a "Stanley knife" (a modeling knife, like an up-market box cutter). But it grew back, so nothing really lost and a lesson learned... Which is not so much that cutting yourself hurts, which it does, but that it carries on hurting, then itching, and finally the new skin is too soft for half a year, which when you are eight is a very long time :-(
My curiosity with locks taught me not just to impression keys but how to cut keys on sight, which later gave rise to the idea of using photographs for cutting them. That surprised our host Bruce when I first mentioned it, but then enabled us all to have a good laugh at the TSA for being idiots when they published a photograph of all the TSA-approved luggage lock keys.
It's also enabled some as we now know to use 3D Printers to automate the process of key cutting...
But curiosity also leads to reading, and when young I read adventure books and graduated onto detective stories and SciFi. I worked out by accident, when very young, how to make fake fingerprints. From a very early age I used to collect the red wax from Edam cheese; it had some nice properties in that whilst fairly solid at room temperature it became nicely soft at hand temperature if you "worked it". The problem was that in working it your fingerprints showed up. The only way I had to get rid of them at the time was to roll the wax into a ball in the palms of my hands. A little while after that, in junior school, I got to play with "Copydex glue". It was the "Pritt Stick" of its day and considered to be harmless to very young children. Also known as "rubber solution glue", it had an annoying property: when it dried on your hands it made a transparent layer like a second skin. As kids we quickly realised you could use it to make "fake wounds" to scare other kids with, and it was as much fun as the "finger in the matchbox" trick. At some point I realised that you could make a mould of somebody's fingerprint with the warm Edam cheese wax and then paint Copydex into it to make "fake fingerprints", all good fun. But it was not until I showed other kids how you could use a little light oil or grease (fat from cooking a chicken works) to actually leave a fingerprint on objects that my brain suddenly realised just how powerful it was, in that you could also put the fake skin with fingerprint onto gloves and leave false evidence.
I thought it was "pretty neat" but some time later on when reading a Sherlock Holmes Story about a crooked builder who faked his own murder that it mentioned using a finger print impression used in a wax seal on the back of a letter to make a fake finger tip to leave a finger print in blood to frame a solicitor.
However, it sparked a lifelong interest in "faking forensics" and later "faking biometrics", which has led me down all sorts of twisty little passages of science most will never have heard of...
So if you have young children that exhibit "curiosity" I'd encourage it a lot; they might not become rich and famous, but I can assure you they will have more fun in life than many, many others, as you will open their minds "to a world of wonder". They will also learn the important lesson in life that too many people make assumptions and get led astray by them and by what are little more than simple parlour tricks. The fact that the average person does not know something is possible should not be taken to mean that it is impossible, or even improbable, if not actually very easy to do. Most "Guild Secrets" were kept not because they were special or clever but because they enabled Guild members to profit substantially from others' ignorance. The only difference today is we don't call them "Guild Secrets" any more, at best "Trade Secrets", or by slang such as "The Knowing", "Knowledge", etc.
The classic example of this "Guild/Trade" secret is "hotel keys", where there is a "Hotel Master Key" that opens all doors, "Floor Masters" that open all doors on a floor for cleaners etc., and "Suite Masters" where several rooms can be turned into a suite of rooms for more well-heeled guests with their own servants, assistants, or family. The myth sold by locksmiths is that such mechanical master-key systems are "more secure" than ordinary locks and keys, when in fact they make the locks easier to pick etc... This myth also allows them to charge between five and ten times as much for each lock, and ten to twenty times as much for each key, compared to an equivalently secure lock from your local large DIY store.
I just published a new paper with Karen Levy of Cornell: "Privacy Threats in Intimate Relationships."
Abstract: This article provides an overview of intimate threats: a class of privacy threats that can arise within our families, romantic partnerships, close friendships, and caregiving relationships. Many common assumptions about privacy are upended in the context of these relationships, and many otherwise effective protective measures fail when applied to intimate threats. Those closest to us know the answers to our secret questions, have access to our devices, and can exercise coercive power over us. We survey a range of intimate relationships and describe their common features. Based on these features, we explore implications for both technical privacy design and policy, and offer design recommendations for ameliorating intimate privacy risks.
This is an important issue that has gotten much too little attention in the cybersecurity community. //
As was once pointed out by someone way more famous than the rest of us,
"Three can keep a secret as long as the other two are dead." //
lurker • June 5, 2020 6:01 PM
@Rj
This was Samson's mistake in Judges 14:18.
and again [!] in Judges 16:17. //
TSA Admits Liquid Ban Is Security Theater
The TSA is allowing people to bring larger bottles of hand sanitizer with them on airplanes:
Passengers will now be allowed to travel with containers of liquid hand sanitizer up to 12 ounces. However, the agency cautioned that the shift could mean slightly longer waits at checkpoints because the containers may have to be screened separately when going through security.
Won't airplanes blow up as a result? Of course not.
Would they have blown up last week were the restrictions lifted back then? Of course not.
It's always been security theater.
Interesting context:
The TSA can declare this rule change because the limit was always arbitrary, just one of the countless rituals of security theater to which air passengers are subjected every day. Flights are no more dangerous today, with the hand sanitizer, than yesterday, and if the TSA allowed you to bring 12 ounces of shampoo on a flight tomorrow, flights would be no more dangerous then. The limit was bullshit. The ease with which the TSA can toss it aside makes that clear.
Cory Doctorow’s sunglasses are seemingly ordinary. But they are far from it when seen on security footage, where his face is transformed into a glowing white orb.
At his local credit union, bemused tellers spot the curious sight on nearby monitors and sometimes ask, “What’s going on with your head?” said Doctorow, chuckling.
The frames of his sunglasses, from Chicago-based eyewear line Reflectacles, are made of a material that reflects the infrared light used by surveillance cameras; they represent a fringe movement of privacy advocates experimenting with clothes, ornate makeup and accessories as a defense against some surveillance technologies. //
The motivation to seek out antidotes to an over-powerful force has political and symbolic significance for Doctorow, an L.A.-based science-fiction author and privacy advocate. His father’s family fled the Soviet Union, which used surveillance to control the masses.
“We are entirely too sanguine about the idea that surveillance technologies will be built by people we agree with for goals we are happy to support,” he said. “For this technology to be developed and for there to be no countermeasures is a road map to tyranny.” //
The lenses of normal sunglasses become clear under any form of infrared light, but the special wavelength absorbers baked into Urban’s glasses soak up the light and turn them black.
Reflectacles’ absorbent quality makes them effective at blocking Face ID on the newest iPhones. While Urban said the glasses aren’t designed to evade facial recognition that doesn’t use infrared light, they will lessen the chance of a positive match in such systems. //
L.A.-based cybersecurity analyst Kate Rose created her own fashion line called Adversarial Fashion to obfuscate automatic license-plate readers. A clothes maker on the side, she imprinted stock images of out-of-use and fake license plates onto fabric to create shirts and dresses. When the wearers walk past the AI systems at traffic stops, the machines read the images on the clothes as plates, in turn feeding junk data into the technology.
vas pup • January 20, 2020 5:07 PM
From the article - looks like the weakest link:
"Clearview’s app carries extra risks because law enforcement agencies are uploading sensitive photos to the servers of a company whose ability to protect its data is untested."
Photos from government databases are uploaded to private servers with untested security. Just speechless.
The New York Times has a long story about Clearview AI, a small company that scrapes identified photos of people from pretty much everywhere, and then uses unstated magical AI technology to identify people in other photos.
His tiny company, Clearview AI, devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system -- whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites -- goes far beyond anything ever constructed by the United States government or Silicon Valley giants.
Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.
[...]
But without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.
And it's not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.