An attacker often has to face a number of challenges. Two of these are:
Overcoming network barriers (network policies, segmentation, etc.).
Performing operations in “stealth mode” so as not to get caught.
One good way to deal with both challenges is an ICMP tunnel, which can create a stealthy connection that crosses the different barriers in the network.
ICMP tunneling works by changing the payload data of an ICMP echo packet so that it contains the data we want to send.
Usually a ping carries default payload data, such as this ASCII string: “abcdefghijklmnopqrstuvwabcdefghi”
If we encapsulate an HTTP packet inside the payload data, we get the most common use of this method: sneaking out of a pay-for-WiFi captive portal.
This can be achieved with a proxy server that waits for ping messages and forwards them as needed (for example, as HTTP).
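A minimal sketch of the payload trick using scapy (pip install scapy; needs root). The destination address and the cooperating proxy endpoint are placeholders for illustration:

```python
# Sketch: smuggle arbitrary data in the ICMP echo-request payload field.
# Assumes a cooperating proxy at DST that unwraps the payload, forwards it
# (e.g., as an HTTP request), and returns the answer in the echo reply.
from scapy.all import IP, ICMP, Raw, sr1

DST = "203.0.113.10"  # placeholder address of the tunnel endpoint

def icmp_send(data: bytes, timeout: int = 2):
    pkt = IP(dst=DST) / ICMP(type=8) / Raw(load=data)  # type 8 = echo request
    reply = sr1(pkt, timeout=timeout, verbose=False)
    if reply is not None and reply.haslayer(Raw):
        return reply[Raw].load  # data smuggled back in the echo reply
    return None

# Example: an HTTP request hidden where a firewall expects "abcdefgh..."
print(icmp_send(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"))
```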
"Clickable" endnotes for Schneier's book... lots of security info to learn here.
How would you know if your DNS account had been compromised? An attacker who tampered with it could point your web and email traffic to servers under their own control, enabling them to intercept potentially confidential information from you or your customers without your knowledge.
Emergency Directive 19-01
Recently, the US Department of Homeland Security issued its first-ever Emergency Directive with a list of actions to mitigate DNS account tampering, an issue they report is on the rise.
Audit DNS Records … audit public DNS records on all authoritative and secondary DNS servers to verify they resolve to the intended location.

In this post, I’ll show you how to continually monitor your DNS resolution using NodePing DNS checks to ensure your important domain names are resolving to the expected IP addresses. If anyone tampers with your DNS records, you’ll quickly receive actionable notifications from NodePing.
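The same kind of check is easy to script yourself; a hedged sketch using dnspython (pip install dnspython), with placeholder domains and addresses:

```python
# Compare live DNS answers against the records you intend to publish.
import dns.resolver

EXPECTED = {  # placeholder values; fill in your own records
    "example.com": {"203.0.113.10"},
    "mail.example.com": {"203.0.113.25"},
}

def check_records():
    for name, expected in EXPECTED.items():
        observed = {rr.address for rr in dns.resolver.resolve(name, "A")}
        if observed != expected:
            print(f"ALERT: {name} -> {observed}, expected {expected}")

check_records()
```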
My first question after reading the excerpt was the obvious Heisenberg uncertainty principle one of frequency vs. time. Namely, ~1 Hz is a damn low frequency, so how long would you have to sample just to collect enough entropy to tell one person from another?
Luckily, the article offers a claim here as well.
an invisible, quarter-size laser spot could be kept on a target. It takes about 30 seconds to get a good return, so at present the device is only effective where the subject is sitting or standing.
It's not clear to me what's going to look different about my pulse after only 30 beats compared to somebody else's. Are they measuring something more subtle than the timing between pulses, like listening to the hiss of individual blood cells streaming through sclerotic valves? :P
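For what it's worth, the frequency-vs-time intuition can be made concrete with the standard Fourier resolution bound (a back-of-the-envelope aside, not a claim from the article):

```latex
\Delta f \gtrsim \frac{1}{\Delta t} = \frac{1}{30\,\mathrm{s}} \approx 0.033\,\mathrm{Hz}
```

So a 30-second sample of a ~1 Hz signal captures only about 30 beats, which is presumably why the device would have to rely on the detailed shape of each cardiac waveform rather than beat-to-beat timing alone.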
MIT Technology Review is reporting about an infrared laser device that can identify people by their unique cardiac signature at a distance:
A new device, developed for the Pentagon after US Special Forces requested it, can identify people without seeing their face: instead it detects their unique cardiac signature with an infrared laser. While it works at 200 meters (219 yards), longer distances could be possible with a better laser. "I don't want to say you could do it from space," says Steward Remaly, of the Pentagon's Combatting Terrorism Technical Support Office, "but longer ranges should be possible."
Wenyao Xu of the State University of New York at Buffalo has also developed a remote cardiac sensor, although it works only up to 20 meters away and uses radar. He believes the cardiac approach is far more robust than facial recognition. "Compared with face, cardiac biometrics are more stable and can reach more than 98% accuracy," he says.
I have my usual questions about false positives vs false negatives, how stable the biometric is over time, and whether it works better or worse against particular sub-populations. But interesting nonetheless.
Secure your digital self: auditing your cloud identity
An Apple iCloud service hack highlights the need for personal cloud security.
We put more and more of ourselves in the cloud every day. E-mail, device settings, data synchronization between devices, and access to much of our digital selves are tied to a handful of cloud service accounts with Google, Apple, Microsoft, Dropbox, and others. As demonstrated dramatically over the last week, those accounts are easily put at risk if they’re too interconnected—especially since the weakest link in cloud security may be the employees of the providers themselves.
That’s what happened with Wired’s Mat Honan this weekend, when a hacker was apparently able to convince Apple technical support that he was Honan and reset Honan’s iCloud account password. That bit of social engineering allowed hackers to then get access to Honan’s Gmail and Twitter accounts, as well as his access to Gizmodo's Twitter account. He also lost control over his iOS-based devices and was even locked out of his personal computer.
As smart speakers and connected devices continue to gain popularity, it’s clear that voice interaction is the next great leap forward in UX design. But how can we as designers help brands responsibly use Amazon’s Alexa, Google’s Assistant, and Apple’s Siri to reach audiences in the clearly private space of the home? If privacy is mostly about perception, we will need to find ways of building trust through absolute transparency, sharing with customers what personal data is being collected and how it is being used. Moreover, we will need to focus product design on giving customers control over their own information by adopting best practices like cookie disclaimers and GDPR compliance.
There’s still a lot to figure out with voice-assisted interfaces, but if the development of IoT platforms follows the path of reinforcing trust, the next decade can hopefully avoid an erosion of privacy and instead bring about its restoration.
Stuart Schechter writes about the security risks of using a password manager. It's a good piece, and nicely discusses the trade-offs around password managers: which one to choose, which passwords to store in it, and so on.
My own Password Safe is mentioned. My particular choices about security and risk are to store passwords only on my computer -- not on my phone -- and not to put anything in the cloud. In my way of thinking, that reduces the risks of a password manager considerably. Yes, there are losses in convenience.
Free service prevents BGP hijackers from fraudulently obtaining browser-trusted certs. //
Cloudflare will be making a programming interface available for free to all certificate authorities. The multipath check for domain control validation consists of two services: agents that perform domain validation out of a specific datacenter, and a domain validation “orchestrator” that handles multipath requests from CAs and dispatches them to a subset of agents.
When a CA wants to ensure a domain validation hasn’t been intercepted, it can send a request to the Cloudflare API that specifies the type of check it wants. The orchestrator then forwards a request to more than 20 randomly selected agents in different datacenters. Each agent performs the domain validation request and forwards the result to the orchestrator, which aggregates what each agent observed and returns the results to the CA. //
Sullivan said Cloudflare is offering the service for free because the company believes that attacks on the certificate authority system harm the security of the entire Internet. He said he expects the use of multipath domain validation to become standard practice, particularly if it’s offered by other large networks.
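The flow described above is straightforward to picture in code. A hedged sketch follows; the agent URLs, request shape, and unanimity rule are illustrative assumptions, not Cloudflare's actual API:

```python
# Orchestrator side of multipath domain-control validation: ask many
# vantage points to run the same check and only pass if they all agree.
import random
import requests

AGENTS = [f"https://agent{i}.example.net/validate" for i in range(50)]

def multipath_validate(domain: str, check_type: str, n: int = 20) -> bool:
    observed = set()
    for url in random.sample(AGENTS, n):  # 20 randomly chosen datacenters
        r = requests.get(url, params={"domain": domain, "type": check_type},
                         timeout=10)
        observed.add(r.json().get("token"))
    # A BGP hijack near any single vantage point yields a mismatched
    # token, so the CA refuses to issue the certificate.
    return len(observed) == 1
```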
We use a training/evaluation supplier both to help expose staff to simulated Phish and to evaluate submitted suspicious items. We provide training to all staff and work to help those who struggle with properly reporting Phish. The focus is on motivating staff, not punishing them. //
We continue the efforts in earnest. Noteworthy is a one-minute animated training video for Phish, part of a THINK series for Information Security:
Phishing emails are a real threat
So we need your help
Thousands of harmful emails attack us daily
Filters catch most of these Phish
But some still manage to get through the net
This is where YOU come in
Did you know?
If you delete a malicious email
It will be removed from your inbox
But the threat to (our organization) still exists
But if you report the malicious email
You help (our organization) defend against future threats
THINK, your colleagues depend on you
THINK, patients depend on you
THINK before you click. Office of Information Security <<
In its continued pursuit of making the most secure internet-connected vehicles on the road, Tesla is upping the ante of its “bug bounty” program, which encourages security researchers to actively locate and report vulnerabilities in the company’s hardware and online services. As part of the program’s most recent update, Tesla has raised the maximum payout to $15,000. //
Just like the previous iteration of its “bug bounty” program, whose maximum payout was listed at $10,000, Tesla assured hackers that vehicles used for security research would not have their warranties voided, provided that the hacking is conducted within parameters allowed by the company. Tesla further noted that if vehicles used by participants in their good-faith security research end up being compromised, the company will take steps to update or “reflash” the hacked electric cars.
Let's recover the passwords for those target service accounts! Because once we have the full credentials, we have admin rights whose use no SIEM or systems admin will be tracking - these accounts are almost universally ignored, since they log in every time their services start (say, during a system restart). So if this is, for instance, a service account with domain or local admin rights on every server and workstation, you are now "better than domain admin": you have all the rights, but no system controls are watching you!
Many people (even infosec experts) seem to be confused about what "two-factor authentication" (2FA) really is.
Let me begin with some basics. There are 3 fundamentally different ways in which a human can authenticate themselves to a computer:
- With something they know (password, passphrase, PIN)
- With something they have (e.g., a smartcard)
- With something they are (biometrics - fingerprint, face scan, palm scan, etc.)
Is authenticating via a password and an SMS sent to your phone "2FA"? No, it is not! While the phone itself is "something you have", it is not the phone that performs the authentication. It is you. You learn something (e.g., a number sent by SMS to that phone) and use this "something you now know" for authentication (together with your password). The correct term for this process is not "2FA" but "2SV" - "two-step verification". You use two instances of the same factor ("something you know") but obtained via different ways.
Why is this important? Because authentication based solely on "something you know" factors is vulnerable to phishing. If the attacker can con you into believing that the page of theirs that you're visiting is a legitimate login page, they can steal any "something you know" information that you enter for authentication purposes and use it themselves. The only inconvenience (to the attacker) that 2SV introduces is that the second step sent to the phone usually expires soon (in a few minutes) and is different each time, so it cannot be stored and used for a long time in the future. But this isn't really a problem, because the attacker can easily automate the process of token stealing and immediate use.
Now, while most people are aware that SMS-based authentication is insecure because of this (and because of other flaws, like SS7, but let's not get into that right now), they are usually quite surprised to learn that things like Google Authenticator running on your smartphone have exactly the same problem. They aren't 2FA; they are 2SV and are vulnerable to phishing. Hardware tokens that display different numbers at the press of a button (like the RSA token) suffer from exactly the same problem. Using them is not 2FA; it is 2SV.
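To make the point concrete, here is a hedged sketch using pyotp (pip install pyotp): the TOTP code an authenticator app displays is just a number derived from a shared secret and the clock, so once you read it off the screen it is "something you know", and anyone who phishes it can replay it within its validity window:

```python
import pyotp

secret = pyotp.random_base32()   # provisioned to the authenticator app once
totp = pyotp.TOTP(secret)

code = totp.now()                # valid for ~30 seconds
print("User reads and types:", code)

# The server cannot tell the legitimate user from a phishing proxy that
# captured the code seconds ago; both present the same "something you know".
assert totp.verify(code)
```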
AI-powered video technology is becoming ubiquitous, tracking our faces and bodies through stores, offices, and public spaces. In some countries the technology constitutes a powerful new layer of policing and government surveillance.
Fortunately, as some researchers from the Belgian university KU Leuven have just shown, you can often hide from an AI video system with the aid of a simple color printout.
Fool’s errand: The deception demonstrated by the Belgian team exploits what’s known as adversarial machine learning. Most computer vision relies on training a (convolutional) neural network to recognize different things by feeding it examples and tweaking its parameters until it classifies objects correctly. By feeding examples into a trained deep neural net and monitoring the output, it is possible to infer what types of images confuse or fool the system.
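As an illustration of that probing idea, here is a generic gradient-based sketch in PyTorch (the classic FGSM technique, not the KU Leuven patch attack itself): once you can read a model's gradients, you can compute which perturbation pushes its output toward a wrong answer.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Perturb input x (true label y) to raise the classifier's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each pixel in the direction that most increases the loss.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```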
Eyes everywhere: The work is significant because AI is increasingly found in everyday surveillance cameras and software. It’s even being used to obviate the need for a checkout line in some experimental stores, including ones operated by Amazon. And in China the technology is emerging as a powerful new means of catching criminals as well as, more troublingly, tracking certain ethnic groups.
Social normalization of deviance means that people within an organization become so accustomed to deviant behaviour that they no longer consider it deviant, even when they far exceed their own rules for elementary safety. It is a complex process involving a kind of organizational acceptance: people outside see the situation as deviant, whereas people inside grow accustomed to it and do not. The more they do it, the more accustomed they become. For instance, in the Challenger case there were design flaws in the famous "O-rings," although engineers had assumed that by design the O-rings would not be damaged.
The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal -- despite the fact that everyone involved knows better.
I think this is a useful term for IT security professionals.
Android phone as 2-factor security
Someone found Swiss Post's embrace of the idea too odious to bear and leaked the source code that Swiss Post had shared under its nondisclosure terms. An international team of some of the world's top security experts (including some of our favorites, like Matthew Green) then set about analyzing that code, and (as every security expert who doesn't work for an e-voting company has predicted since the beginning of time) they found an incredibly powerful bug that would allow a single untrusted party at Swiss Post to undetectably alter the election results.
And, as everyone who's ever advocated for the right of security researchers to speak in public without permission from the companies whose products they were assessing has predicted since the beginning of time, Swiss Post and Scytl downplayed the importance of this objectively very, very, very important bug. Swiss Post's position is that since the bug only allows elections to be stolen by Swiss Post employees, it's not a big deal, because Swiss Post employees wouldn't steal an election.
But when Swiss Post agreed to run the election, they promised an e-voting system based on "zero knowledge" proofs that would allow voters to trust the outcome of the election without having to trust Swiss Post. Swiss Post is now moving the goalposts, saying that it wouldn't be such a big deal if you had to trust Swiss Post implicitly to trust the outcome of the election. //
We don't accept scientific research unless the people who do it show all their work to everyone, publishing data, protocols and analysis in public forums that everyone can critique, even axe-grinding grudge-holders, because, as with whistleblowers, the people with the motivation to really dig into your work and reveal its deficiencies are often people who don't like you and want you to fail, and if we only accept bad news from people with good intentions, we'll miss some of the most important and urgent warnings about flaws that could steal a whole country's government.
PrivateBin is a minimalist, open source online pastebin where the server has zero knowledge of pasted data.
Data is encrypted and decrypted in the browser using 256-bit AES in Galois Counter Mode (GCM).
This is a fork of ZeroBin, originally developed by Sébastien Sauvage. ZeroBin was refactored to allow easier and cleaner extensions. PrivateBin has many more features than the original ZeroBin. It is, however, still fully compatible with the original ZeroBin 0.19 data storage scheme.
What PrivateBin provides
As a server administrator you don't have to worry about your users posting content that is considered illegal in your country: you have no knowledge of any paste's content. If requested or compelled, you can delete any paste from your system.
Pastebin-like system to store text documents, code samples, etc.
Encryption of data sent to server.
Possibility to set a password required to read the paste. This further protects a paste and prevents people who stumble upon its link from reading it without the password.
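For intuition, a hedged sketch of the client-side scheme in Python with the cryptography package (PrivateBin's real implementation runs in JavaScript in the browser and differs in key derivation and encoding details):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # stays client-side, e.g. in the
                                           # URL #fragment, never sent in HTTP
nonce = os.urandom(12)

ciphertext = AESGCM(key).encrypt(nonce, b"my secret paste", None)
# The server stores only (nonce, ciphertext); it can delete the paste on
# request but can never read it.
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert plaintext == b"my secret paste"
```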