Killed by Google is the Google graveyard; a free and open source list of discontinued Google services, products, devices, and apps. We aim to be a source of factual information about the history surrounding Google's dead projects.
Press inquiries and other assorted death threats?
Throw a knife@killedbygoogle.com.
Count to date: 245
At the end of April, Apple’s introduction of App Tracking Transparency tools shook the advertising industry to its core. iPhone and iPad owners could now stop apps from tracking their behavior and using their data for personalized advertising. Since the new privacy controls launched, almost $10 billion has been wiped from the revenues of Snap, Meta Platforms’ Facebook, Twitter, and YouTube.
Now, a similar tool is coming to Google’s Android operating system—although not from Google itself. Privacy-focused tech company DuckDuckGo, which started life as a private search engine, is adding the ability to block hidden trackers to its Android app. The feature, dubbed “App Tracking Protection for Android,” is rolling out in beta from today and aims to mimic Apple’s iOS controls. “The idea is we block this data collection from happening from the apps the trackers don’t own,” says Peter Dolanjski, a director of product at DuckDuckGo. “You should see far fewer creepy ads following you around online.”
The vast majority of apps have third-party trackers tucked away in their code. These trackers monitor your behavior across different apps and help create profiles about you that can include what you buy, demographic data, and other information that can be used to serve you personalized ads. DuckDuckGo says its analysis of popular free Android apps shows more than 96 percent of them contain trackers. Blocking these trackers means Facebook and Google, whose trackers are some of the most prominent, can’t send data back to the mothership—and neither can the dozens of advertising networks you’ve never heard of.
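To make the mechanism concrete, here is a rough sketch of what hostname-based tracker blocking boils down to. This is not DuckDuckGo's code; the blocklist entries and helper name are purely illustrative.

```typescript
// Illustrative only: the core decision behind hostname-based tracker blocking.
// A real blocker (DuckDuckGo's included) intercepts app traffic at a much lower
// level and ships a far larger, curated blocklist.
const TRACKER_DOMAINS = new Set([
  "graph.facebook.com", // example entries, not DuckDuckGo's actual list
  "app-measurement.com",
  "doubleclick.net",
]);

/** True when the request's host is a blocklisted domain or one of its subdomains. */
function isTracker(url: string): boolean {
  const host = new URL(url).hostname;
  for (const domain of TRACKER_DOMAINS) {
    if (host === domain || host.endsWith("." + domain)) return true;
  }
  return false;
}

// Example: decide whether to let a request through.
for (const request of ["https://app-measurement.com/a", "https://example.org/api"]) {
  console.log(request, isTracker(request) ? "-> blocked" : "-> allowed");
}
```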
From the tech reporting sector, Google is taking blow after blow after blow. There are massive security vulnerabilities in what they offer you and me every day, and those privacy problems are going to become a big enough issue to make the government look a little more closely at them.
So how do they keep the government from looking at them? They announce something they know enough people in government will like, then take credit for their “brave” stance and business policy. //
The latest tracking nightmare for Chrome users comes in two parts. First, Google has ignored security warnings and launched a new Chrome API to detect and report when you’re “idle,” i.e., not actively using your device. Apple warns “this is an obvious privacy concern,” and Mozilla that it’s “too tempting an opportunity for surveillance.” //
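For the curious, here is roughly what that Idle Detection API looks like from a page's point of view, going by the published spec. The ambient type declaration is hand-rolled (the API is Chrome-only and not in the standard TypeScript DOM typings), so treat this as a sketch rather than gospel.

```typescript
// Hedged sketch of Chrome's Idle Detection API -- the feature Apple and Mozilla
// object to. The declaration below is hand-written, since the API is absent
// from the standard TypeScript DOM typings; the shape follows the spec.
declare class IdleDetector extends EventTarget {
  static requestPermission(): Promise<"granted" | "denied">;
  readonly userState: "active" | "idle" | null;
  readonly screenState: "locked" | "unlocked" | null;
  start(options?: { threshold?: number; signal?: AbortSignal }): Promise<void>;
}

async function watchIdleState(): Promise<void> {
  // Chrome requires a user gesture plus explicit permission before this works.
  if ((await IdleDetector.requestPermission()) !== "granted") return;

  const detector = new IdleDetector();
  detector.addEventListener("change", () => {
    // Any page (or the trackers it embeds) can now observe whether you're at the keyboard.
    console.log(`user: ${detector.userState}, screen: ${detector.screenState}`);
  });
  await detector.start({ threshold: 60_000 }); // spec minimum threshold: 60 seconds
}
```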
Google is also banking on Congress’s ongoing fascination with trying to regulate Facebook, betting that it can keep a low profile on all this privacy stuff and not have to worry about a congressional investigation.
So that’s why they are choosing right now to go after so-called “climate deniers.” They are shifting the focus away from themselves at a time when it’s very easy to distract their users and Congress. But Google is going to find itself in increasing trouble before too long, and Congress had better start looking deeper into these security issues, because Google is going to cause an insane amount of identity theft.
Earlier this year Chrome developers decided that the browser should no longer support JavaScript dialogs and alert windows when they're called by third-party iframes. //
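If you maintain code that runs inside someone else's page via a cross-origin iframe and still leans on alert(), the call now just gets swallowed in Chrome. Here is a hedged sketch of the kind of non-blocking replacement developers are being nudged toward; the notify() helper and its styling are mine, not any official migration path.

```typescript
// Sketch of a non-blocking stand-in for alert() in code that may run inside a
// cross-origin iframe, where Chrome now ignores the native dialog. The helper
// name and styling are illustrative only.
function notify(message: string, timeoutMs = 5000): void {
  const box = document.createElement("div");
  box.textContent = message;
  box.setAttribute("role", "alert"); // announce the message to assistive tech
  box.style.cssText =
    "position:fixed;bottom:1rem;right:1rem;padding:0.75rem 1rem;" +
    "background:#333;color:#fff;border-radius:4px;z-index:2147483647;";
  document.body.appendChild(box);
  setTimeout(() => box.remove(), timeoutMs);
}

// Before: alert("Saved!");  // silently dropped when this runs in a third-party iframe
notify("Saved!");
```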
When the web developer community finds out through a tweet that Google is going to break a ton of websites, you know communication has failed. But there was a follow-up tweet that's actually far more disturbing than the news of alert() disappearing.
The tweet comes from Chrome software engineer and manager Emily Stark, who is of course speaking for herself, not Chrome, but it seems safe to assume that this thinking is prevalent at Google. She writes: "Breaking changes happen often on the web, and as a developer it's good practice to test against early release channels of major browsers to learn about any compatibility issues upfront." //
First, she is flat-out wrong – breaking changes happen very rarely on the web and, as noted, there is a process for making sure they go smoothly and are worth the "cost" of breaking things. But second, and far more disturbing, is the notion that web developers should be continually testing their websites against early releases of major browsers. //
Web developer and advocate Jeremy Keith points out something else that's wrong with this idea. "There was an unspoken assumption that the web is built by professional web developers," he writes. "That gave me a cold chill."
What's chilling about the assumption is just that: it's assumed. The idea that there might be someone sitting right now writing their first tentative lines of HTML so that they can launch a webpage dedicated to ostriches is not even considered.
What we are forced to assume in turn is that Chrome is built by the professional developers working for an ad agency with the primary goal of building a web browser that serves the needs of other professional developers working for the ad agency's prospective clients. //
As Keith points out, this assumption that everyone is a professional fits the currently popular narrative of web development, which is that "web development has become more complex; so complex, in fact, that only an elite priesthood are capable of making websites today."
That is, as Keith puts it, "absolute bollocks."
Don’t have an SSL Certificate? Google is going to flag your website this year!
We turn to the internet for everything, from selling to buying.
With this dominating trend, online security has become a necessity.
Undoubtedly, Google loves its users and is therefore coming up with every possible way to make us feel secure here on the internet.
As it announced earlier this year, Google will flag the entire unencrypted web by the end of 2017.
What?
This website is for when you try to open Facebook, Google, Amazon, etc on a wifi network, and nothing happens. Type "http://neverssl.com" into your browser's url bar, and you'll be able to log on.
Why?
Normally, that's a bad idea. You should always use SSL and secure encryption when possible. In fact, it's such a bad idea that most websites are now using https by default.
And that's great, but it also means that if you're relying on poorly-behaved wifi networks, it can be hard to get online. Secure browsers and websites using https make it impossible for those wifi networks to send you to a login or payment page. Basically, those networks can't tap into your connection just like attackers can't. Modern browsers are so good that they can remember when a website supports encryption and even if you type in the website name, they'll use https.
And if the network never redirects you to this page, well as you can see, you're not missing much.
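You can run the same probe programmatically: request a page that is known to stay on plain HTTP and see whether the network tampers with it. A rough sketch using Node 18+'s built-in fetch; using neverssl.com as the probe URL and treating a foreign redirect as a captive portal are my own simplifying assumptions, not a standard detection method.

```typescript
// Rough captive-portal probe: fetch a page known to stay on plain HTTP and see
// whether the network intercepts it. The probe URL and the redirect heuristic
// are simplifying assumptions.
async function behindCaptivePortal(): Promise<boolean> {
  try {
    const res = await fetch("http://neverssl.com/");
    // neverssl never upgrades to HTTPS; if the response came from some other
    // host entirely, something on the network hijacked the request. (A portal
    // that rewrites the page without redirecting would slip past this check.)
    return !new URL(res.url).hostname.endsWith("neverssl.com");
  } catch {
    // No response at all also means you won't get online without intervention.
    return true;
  }
}

behindCaptivePortal().then((captive) =>
  console.log(captive ? "Captive portal (or no connectivity) suspected." : "Network looks clean."),
);
```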
HTTPS is now free, easy and increasingly ubiquitous. It's also now required if you don't want Google Chrome flagging the site as "Not secure". Yet still, many of the world's largest websites continue to serve content over unencrypted connections, putting users at risk even when no sensitive data is involved.
For the past several years, we’ve moved toward a more secure web by strongly advocating that sites adopt HTTPS encryption. And within the last year, we’ve also helped users understand that HTTP sites are not secure by gradually marking a larger subset of HTTP pages as “not secure”. Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure”.
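For site operators, the server-side half of avoiding that "not secure" label is tiny: answer every plain-HTTP request with a permanent redirect to the HTTPS origin. A minimal sketch in Node; the hostname is a placeholder, and the HTTPS listener itself (certificates and all) is assumed to exist elsewhere.

```typescript
// Minimal sketch: bounce all plain-HTTP traffic to HTTPS so Chrome 68+ never
// renders the site over an insecure connection. Hostname is a placeholder.
import { createServer } from "node:http";

const HTTPS_ORIGIN = "https://example.com"; // placeholder

createServer((req, res) => {
  res.writeHead(301, { Location: `${HTTPS_ORIGIN}${req.url ?? "/"}` });
  res.end();
}).listen(80);

// Note: a Strict-Transport-Security (HSTS) header belongs on the HTTPS responses,
// not on this plain-HTTP redirect, where browsers ignore it.
```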
- I’m a teapot.
The requested entity body is short and stout.
Tip me over and pour me out.
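That is the body Google serves from its 418 easter egg; the status code itself comes from the April Fools' RFC 2324. If you want to brew your own, here is a throwaway sketch (nothing here is Google's code):

```typescript
// Tiny homage to RFC 2324: answer every request with HTTP 418 "I'm a teapot".
import { createServer } from "node:http";

createServer((_req, res) => {
  res.writeHead(418, { "Content-Type": "text/plain" });
  res.end(
    "I'm a teapot.\n" +
      "The requested entity body is short and stout.\n" +
      "Tip me over and pour me out.\n",
  );
}).listen(8080, () => console.log("Refusing to brew coffee on http://localhost:8080"));
```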
How do we know Google knows they stole Android from Oracle-Java?
The E-Mail That Google Really Doesn’t Want A Jury To See:
“Lawyers defending Google against a patent and copyright lawsuit brought by Oracle are trying desperately to keep a particular engineer’s e-mail out of the public eye, but it looks like they’re unlikely to succeed.
“The e-mail, from Google engineer Tim Lindholm to the head of Google’s Android division, Andy Rubin, recommends that Google negotiate for a license to Java rather than pick an alternative system….
“The second paragraph of the email reads:
“‘What we’ve actually been asked to do by Larry [Page] and Sergey [Brin] (Google’s founders) is to investigate what technical alternatives exist to Java for Android and Chrome.
“We’ve been over a bunch of these and think they all suck. We conclude that we need to negotiate a license for Java under the terms we need.’”
Except Google never did negotiate for Android “a license for Java under the terms we need.”
But they released Android anyway.
That’s not legal.
I welcomed Google’s friendly advice for a few days, but after a while, I would be caught entirely off guard by it. When focusing deeply on a task in complete silence, my Hub would abruptly shout at me and scare the crap out of me, causing my heart to skip a beat. Not only did it break my peace, but it caused me to lose my train of thought as well. When this happens several times a day and every single day for weeks, you realize that it’s more of a nuisance than a help, I promise. //
The worst part is that since the feature is called ‘Workday Reminders’, and since Google Assistant has about a gazillion settings packed into one wall of text, I kept searching for the settings to disable them in various places other than Routines. I checked Reminders (because duh), I checked my Nest Hub settings, and so on all to no avail. After a few weeks of torture, and having only attained peace by unplugging my Assistant devices, I finally located the Workday ‘routine’ blandly mixed in with the other routines. No longer did it feature the colorful icon it had when I set them up – instead, it was a simple blue outlined icon mixed in with the rest. It was completely indistinguishable. I really am beginning to see what people are saying about Google’s design becoming less attention-grabbing and more ‘same-y’.
Damore launched a lawsuit against Google that highlighted its culture of ideological exclusion and favoritism based on race and gender, a claim that was later borne out, proving Damore correct: Google was actually paying women more than men for the same job.
Damore’s crime was pointing out the obvious and true, if inconvenient to the mainstream narrative. Bobb’s apparent non-crime was accusing an entire race of people of being vicious warmongers.
Google seems to have a very upside-down view of what you can fire people for. State the science and you’ll be fired. Be racist against the right people and you’ll just be relocated to a different part of the company.
While a decision in Google v. Oracle isn’t expected for a few months, the justices’ pointed questioning of the Big Tech giant indicates Google broke the law to get ahead. //
The Supreme Court heard oral arguments for Google v. Oracle on Oct. 7. The case involves several legal issues, all of which boil down to one principal question: Did Google cheat and steal its way to the top? While a decision on the case isn’t expected for a few months, the justices’ pointed questioning of the Big Tech giant points to the answer being a clear and resounding yes.
For Google, not buying WhatsApp in 2013 feels like a major turning point. Google would go on to launch seven competing messaging and video apps over the years: Google Hangouts in 2013; Google Spaces, Google Allo, and Google Duo in 2016; and Google Chat and Google Meet in 2017. The company also pushed RCS through Google Messages in 2019. Cue's prediction that the company could "lose" to a Google-led WhatsApp now seems like a dream from a bygone era.
Cue also called messaging "one of the most important apps in a mobile environment," which represents a striking difference from how Google approaches messaging. At Google, messaging is only ever handled by an endless series of underfunded, unstable side projects led by job-hopping project managers. Google releases a new messaging app about every 12-18 months, making it very difficult for any single app to gain traction and reducing consumer confidence in any individual product. The heads of these projects often leave the company shortly after a splashy product launch, and with no top-down direction on what the company should support, the products usually start winding down once the leader bails.
Federighi's comments echo Apple's longstanding position that iMessage is a key lock-in component of Apple's walled garden and that the company shouldn't make it easy for "iPhone families" to incorporate Android devices. The Epic case earlier revealed a 2016 comment from Apple's Phil Schiller, saying that "moving iMessage to Android will hurt us more than help us."
The term “monopoly rent” might be jargon, but the concept it describes is intuitive. Don’t underestimate the power of a simple argument. Google’s and Apple’s alleged rent-collecting days might be numbered.
Last week, Google pulled a March 18 video of DeSantis discussing COVID-19 with medical scientists Dr. Jay Bhattacharya, Dr. Sunetra Gupta, Dr. Martin Kulldorff, and Dr. Scott Atlas, who all hail from elite institutions — Stanford University, Harvard University, and Oxford University. All but Gupta, who is based in the United Kingdom, also joined DeSantis’s April 12 press conference to respond to Google’s ban.
“For science to work, you have to have an open exchange of ideas,” Bhattacharya said Monday. “If you’re going to make an argument that something is misinformation, you should provide an actual argument. You can’t just take it down and say, ‘Oh, it’s misinformation’ without actually giving a reason. And saying, ‘Look it disagrees with the CDC’ is not enough of a reason. Let’s hear the argument, let’s see the evidence that YouTube used to decide it was misinformation. Let’s have a debate. Science works best when we have an open debate.”
“I’m very worried about the future of science because science is dependent on free exchange of ideas and it has been for 300 years now. So if this continues, this kind of attitude, the censoring of scientific views, then I think we have reached the end of 200 years of Enlightenment,” Kulldorff said Monday. //
“The lockdowns are the single biggest public health mistake in history,” Bhattacharya said on the banned March 18 panel. He said lockdowns are psychologically compelling to rich societies terrified of death, but are not only ineffective at stopping disease and death, they also make both worse. He noted a few minutes later:
The international evidence and the American evidence is clear: The lockdowns have not stopped the spread of the disease in any measurable way. The disease spreads on aerosol by droplets, it’s a respiratory disease. It’s very difficult to stop. The idea of the lockdown is incredibly beguiling… but humans are not like that. What’s happened instead, we’ve exposed working class, we’ve exposed poor people at higher rates. We’ve created this illusion that we can control disease spread when in fact we cannot. //
Lockdowns Are Bad for People, But Good for Google
Keeping people at home indefinitely has also drastically increased people’s screen time, which provides Google more ad revenue and influence over how people think and the information they receive. Screen use is correlated with obesity, and obesity puts people at a dramatically higher risk from COVID-19.
According to the U.S. Centers for Disease Control, 78 percent of those hospitalized with COVID were obese, and lockdowns have directly contributed to a huge increase in First World obesity, especially among children. Among people who have died while COVID-positive, according to the CDC, 94 percent had other significant medical conditions, including diseases exacerbated by obesity: diabetes, cardiac arrest, and heart failure.
“The laptop class, they have protected themselves through the lockdowns while we have thrown the working class under the bus,” Kulldorff said during the panel discussion Google banned. //
On Monday, DeSantis noted the irony of Google banning professional discussion from doctors whose scientific research has been cited by Google Scholar more than 10,000 times, more than 17,000 times, and more than 25,000 times, respectively. //
Three of the four doctors on the March 18 panel authored the Great Barrington Declaration, a statement now signed by nearly 14,000 medical scientists, more than 42,000 health practitioners, and nearly 800,000 “concerned citizens,” which promotes, based on the scientific evidence, a policy of focused protection in response to COVID instead of ineffective mass lockdowns.
xoa (Ars Tribunus Angusticlavius et Subscriptor) replied:
Wheels Of Confusion wrote:
Break Up Google.
No, this is stupid. The antitrust sledgehammer is a very bad choice versus actual decent consumer-protection regulation. Merely having companies be smaller doesn't actually solve anything; lots of small companies are plenty nasty, and the fact that somebody can go elsewhere in theory doesn't necessarily make it any easier. In fact, the "break them up so easy lol" meme is so dumb that it honestly has had me wondering if it got started as a false flag by those opposed to any regulation at all, since it short-circuits everything else. There are a lot of scalpels to try first that would be really valuable. Among them, in no particular order (and these don't apply just to Google, either):
Require read-only data access/export for a period of time. Services need to, and should be able to, refuse to do further business with somebody (and at scale, given abuse, this necessarily requires some level of automation). But banning someone should not mean they lose any of their data, and it shouldn't require them to do any work in advance either. Unless it's due to a court order regarding illegal material, companies should be required to offer, say, a six-month window following a ban to allow someone to get everything. Making sure someone can get everything out would go a long way towards fixing effects and incentives.
Require paid access to a person with review powers. Human review is expensive, but if someone is willing to pay for it they should be able to get it.
Require purchase-time choice for hardware buyers to add their own certs to hardware and/or software roots. There are good reasons for App Stores and certified hardware chains and people should be able to roll with those if they want. But there are real risks too, and a legal requirement for an opt-out would be an easy requirement as a release valve.
Enshrine the notion of software ownership. Account bans and DRM should never result in the loss of purchased software, simple as that. Ongoing use of services sure, but only going forward.
No tying free OS/security updates to accounts. This doesn't need much explanation: no account should be required to receive free updates that are needed for continued functionality.
Basic warranties that match buyer expectations. The "lol 1 year but you can buy more" thing is bad. People have a rough expectation of "how long something should last" in proportion to its cost, and the price should internalize the failure rate and repair cost rather than externalize it onto an unlucky few. Maybe one month of warranty per $20 retail up to five years, maybe some other formula, but while it should be up to manufacturers to figure out how to meet the goal of basic reliability, meet it they should. Lots of other issues are covered by this ("right to repair," which is a bad way to go about it). Extended warranties should only be for things like premium turnaround, enterprise-level beyond-EOL coverage, etc.
Spell out SLAs in a standardized way, even at the consumer level. Maybe the paid pro version has a 99.9% uptime guarantee and a 12-hour response time, while in practice the consumer version only promises 95% and "here's our FAQ, or you can pay $200 per incident to talk to someone," but the latter should still be something people can understand upfront before committing.
And more, but these would be good starts. Big Companies, and Big Tech in particular, provide major, major benefits. They are also convenient, concentrated targets for careful, focused experiments in regulation. There is no reason we can't try to have the best of both worlds. "Break them up" is not just short-sighted but lazy. We should try to make things better for everyone.
CloudReady could be the perfect answer for those continuing to stick with, say, Windows 7 PCs – despite Microsoft having stopped delivering free patches on January 15. Importantly, CloudReady still gets patches and fixes just like Chrome OS, served up every six weeks. //
The Home edition is free, does not integrate with Google Admin Console and lacks technical support. The education subscription costs $20 per year per device with one year of technical support, while the enterprise option costs $49 per year per device. The paid-for options include one year of technical support. //
There are two notable catches with switching to CloudReady. After installing it, there's no returning to Windows, so users should make a backup of their files.
Also, after transitioning, accessing a local drive becomes difficult because this Chrome OS variant relies almost entirely on Google Drive for storage.
The public showing of the redesigned Google Pay is less than a week old, but already there are important insights into how it could reshape the commerce landscape and the players operating within it.