The cs.cmu.edu Coke machine was hooked up to a computer by John Zsarnay and/or Lawrence Butcher (now at Xerox PARC); essentially, the six little out-of-product lights on the pushbuttons were monitored. These would flash on for a couple of seconds while a particular bottle was dispensed, and of course stay on when a column was empty. They were connected, I believe, to a terminal server machine that was programmed by Mike Kazar to keep track of the time of the last transition (short-term and long-term) for each column. He and Dave Nichols put together a simple Coke(TM) protocol by which any machine on the local University-grant Ethernet, and later the Internet as a whole, could probe the current status of the machine; Dave wrote the program that became the ``coke'' command, which printed out the length of time since each column had been totally empty. (The idea, you see, was to notice when a column, having gone empty, was refilled with warm, room-temperature Coke, because in principle you wanted to select the coldest Coke available, and thus avoid those columns that had recently been refilled.)
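The machine's status was, at least in later years, queryable with the standard finger protocol (RFC 1288). Here is a minimal sketch of such a probe in Python; the hostname is purely illustrative, not the historical machine's name.

```python
# Minimal finger (RFC 1288) client: connect to port 79, send the query
# followed by CRLF, and read the reply until the server closes the socket.
# The hostname below is illustrative, not the historical Coke machine host.
import socket

def finger(query, host, port=79):
    with socket.create_connection((host, port), timeout=10) as s:
        s.sendall((query + "\r\n").encode("ascii"))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("ascii", errors="replace")

if __name__ == "__main__":
    print(finger("coke", "coke.example.edu"))  # hypothetical host
```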
In a 6-3 ruling, the Supreme Court just narrowed the scope of the Computer Fraud and Abuse Act:
In a ruling delivered today, the court sided with Van Buren and overturned his conviction, which had carried an 18-month prison sentence.
In a 37-page opinion written and delivered by Justice Amy Coney Barrett, the court explained that the “exceeds authorized access” language was, indeed, too broad.
Justice Barrett said the clause was effectively making criminals of most US citizens who ever used a work resource to perform unauthorized actions, such as updating a dating profile, checking sports scores, or paying bills at work.
What today’s ruling means is that the CFAA cannot be used to prosecute rogue employees who abuse legitimate access to work-related resources; such conduct will need to be prosecuted under different laws.
The ruling does not apply to former employees accessing their old work systems, because their access has been revoked and they’re not “authorized” to access those systems anymore. //
Clive Robinson • June 7, 2021 9:43 AM
I’ve already commented on why the law was technically very bad.
But a salient legal point was that it confused contracts and legislation.
That is, it allowed a non-legislative organization such as a corporation to write a document that carried criminal penalties.
That is wrong by any measure.
In the US you get taught that,
A tort arises from a breach of a private duty and a crime arises from a breach of a public duty.
The two should never, ever be confused. Getting on for at least two millennia of jurisprudence has repeatedly shown that any crossover leads to an escalation of undesirable outcomes and other unintended consequences that easily cascade into a runaway set of consequences.
But then US legislators have a history, going right back to the Constitution, of at best antipathy towards democracy, right through to ensuring that citizens have no rights in any form.
The CFAA was an insidious form of the age-old game of “rights stripping”: ensuring a “non-equity of arms” that favours those who see themselves as entitled through the holding of property and, directly or indirectly, of other humans as chattels or indentured servants.
Thirty years ago, Linus Torvalds was a 21-year-old student at the University of Helsinki when he first released the Linux Kernel. His announcement started, “I’m doing a (free) operating system (just a hobby, won't be big and professional…)”. Three decades later, the top 500 supercomputers are all running Linux, as are over 70% of all smartphones. Linux is clearly both big and professional.
For three decades, Linus Torvalds has led Linux Kernel development, inspiring countless other developers and open source projects. In 2005, Linus also created Git to help manage the kernel development process, and it has since become the most popular version control system, trusted by countless open source and proprietary projects. //
Regarding creating Git and then handing it off to Junio Hamano to improve and maintain, Linus noted, "I don't want to claim that programming is an art, because it really is mostly just about 'good engineering'. I'm a big believer in Thomas Edison's 'one percent inspiration and ninety-nine percent perspiration' mantra: it's almost all about the little details and the everyday grunt-work. But there is that occasional 'inspiration' part, that 'good taste' thing that is about more than just solving some problem - solving it cleanly and nicely and yes, even beautifully. And Junio had that 'good taste'." //
I very much don't regret the choice of license, because I really do think the GPLv2 is a huge part of why Linux has been successful.
Money really isn't that great of a motivator. It doesn't pull people together. Having a common project, and really feeling that you really can be a full partner in that project, that motivates people, I think. //
I write very little code these days, and haven't for a long time. And when I do write code, the most common situation is that there's some discussion about some particular problem, and I make changes and send them out as a patch mainly as an explanation of a suggested solution. //
Because all my real work is spent on reading and writing emails. It's mostly about communication, not coding. In fact, I consider this kind of communication with journalists and tech bloggers etc to literally be part of my workday - it may get lower priority than actual technical discussions, but I do spend a fair amount of time on things like this too.
Before Microsoft and Intel dominated the PC market with a common platform, the CP/M operating system did something similar for small business machines in the late 1970s and early 1980s—until MS-DOS pulled the rug out from under it. Here’s more about CP/M, and why it lost out to MS-DOS.
In most modern implementations, this means for every 64-bit word stored in RAM, there are eight checking bits. A single bit error—a 0 flipped to 1, or a 1 flipped to 0—can be both detected and corrected automatically. Two bits flipped in the same word can be detected but not corrected. Three or more bits flipped in the same word will probably be detected, but detection is not guaranteed.
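To make the detect-and-correct behavior concrete, here is a toy SECDED (single-error-correct, double-error-detect) demo in Python using a Hamming(8,4) code over a 4-bit word. Real DIMMs apply the same idea scaled up to eight check bits per 64-bit word; the exact code layout below is illustrative, not any particular memory controller's.

```python
# Toy SECDED demo: Hamming(8,4), i.e. 4 data bits protected by 3 Hamming
# parity bits plus 1 overall parity bit. Layout is illustrative only.

def encode(nibble):
    """Encode 4 data bits (0-15) into an 8-bit SECDED codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]            # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]            # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]            # covers positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]
    p0 = 0
    for b in bits:                     # overall parity over the other 7 bits
        p0 ^= b
    return bits + [p0]

def decode(bits):
    """Return (status, data); status is 'ok', 'corrected', or 'double'."""
    b = list(bits[:7])
    s1 = b[0] ^ b[2] ^ b[4] ^ b[6]
    s2 = b[1] ^ b[2] ^ b[5] ^ b[6]
    s3 = b[3] ^ b[4] ^ b[5] ^ b[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)   # 1-based position of a single flip
    overall = 0
    for x in bits:
        overall ^= x
    if syndrome and overall:           # one bit flipped: locate and repair it
        b[syndrome - 1] ^= 1
        status = "corrected"
    elif syndrome:                     # syndrome set, parity even: two flips
        return "double", None
    else:                              # clean word, or a flip in p0 itself
        status = "corrected" if overall else "ok"
    data = b[2] | (b[4] << 1) | (b[5] << 2) | (b[6] << 3)
    return status, data

word = encode(0b1011)
word[5] ^= 1                           # single-bit upset: corrected
print(decode(word))                    # ('corrected', 11)
word[2] ^= 1                           # second flip in the same word: detected
print(decode(word))                    # ('double', None)
```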
Bit flips can happen for many reasons, beginning with cosmic-ray impact or simple hardware failure. A large-scale study of Google servers found that roughly 32 percent of all servers (and 8 percent of all DIMMs) in Google's fleet experience at least one memory error per year. But the vast majority of these are single-bit errors—and since Google is using server CPUs and ECC RAM, this means the machines in question keep right on trucking. //
Even when ECC can't actively prevent a Rowhammer attack from having an impact on the system—for example, when it flips multiple bits in one word—it can at least alert the system of the problem and, in most cases, prevent the Rowhammer attack from doing anything other than causing downtime. (Most ECC systems are configured to halt the entire machine if an uncorrectable error is detected.) //
Torvalds takes the bold position that the lack of ECC RAM in consumer technology is Intel's fault due to the company's policy of artificial market segmentation. Intel has a vested interest in pushing deeper-pocketed businesses toward its more expensive—and profitable—server-grade CPUs rather than letting those entities effectively use the necessarily lower-margin consumer parts.
Removing support for ECC RAM from CPUs that aren't targeted directly at the server world is one of the ways Intel has kept those markets strongly segmented. Torvalds' argument here is that Intel's refusal to support ECC RAM in its consumer-targeted parts—along with its de facto near-monopoly in that space—is the real reason that ECC is nearly unavailable outside the server space.
The story of the first microprocessor, one you may have heard, goes something like this: The Intel 4004 was introduced in late 1971, for use in a calculator. It was a combination of four chips, and it could be programmed to do other things too, like run a cash register or a pinball game. Flexible and inexpensive, the 4004 propelled an entire industry forward; it was the conceptual forefather of the machine upon which you are probably reading this very article.
That’s the canonical sketch. But objects, events, people—they have alternate histories. Their stories can often be told a different way, from a different perspective, or as a what-could-have-been.
This is the story, then, of how another first microprocessor, a secret one, came to be—and of my own entwinement with it. The device was designed by a team at a company called Garrett AiResearch on a subcontract for Grumman, the aircraft manufacturer. It was larger, it was a combination of six chips, and it performed crucial functions for the F-14 Tomcat fighter jet, which celebrates the 50th anniversary of its first flight this week. It was called the Central Air Data Computer, and it computed things like altitude and Mach number; it figured out the angle of attack, key to landing and missile targeting; and it controlled the wing sweep, allowing the craft to be both maneuverable when the wings were at about 50 degrees and very, very fast when they were swept all the way back.
Ray Holt was one of the engineers for the Central Air Data Computer. He is probably not someone you have heard of—how could you have? He worked on the project, one of two people doing what’s called the logic design, for two years, between 1968 and 1970, with a team that included his younger brother, Bill. He couldn’t tell anyone about what they had built. The project was kept quiet by the Navy and by Garrett for decades as other engineers were awarded credit for inventing firsts. Later, when he was able to talk about the device, people were skeptical. //
The Intel engineers who share the title told the paper that the Central Air Data Computer was bulky, expensive, and not a general-purpose device. One expert said it was not a microprocessor because of how the processing was distributed among the chips. Another—Russell Fish—said it was, noting, “The company that had this technology could have become Intel. It could have accelerated the microprocessor industry at the time by five years.” But other people around that time also wanted to claim the title of father of the microprocessor; there were some big patent fights, and not everyone even agrees on the exact definition of a microprocessor in the first place.
“The discussion,” says Fish, who today runs an IP licensing company called Venray, “is not a technical one, it is a philosophical one.” Fish at one point wrote that the 4-bit 4004 could “count to 16,” while the 20-bit CADC “was evaluating sixth order polynomial expressions rapidly enough to move the control surfaces of a dogfighting swing-wing supersonic fighter.” When I spoke to him recently, he said he had gone back and read through the documentation. “What Ray Holt did was absolutely brilliant,” he says. “Particularly given the timeframe. Ray was generations ahead, algorithmically and computationally.”
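For a sense of the computation Fish is describing: a sixth-order polynomial approximation is typically evaluated with Horner's rule, costing six multiplies and six adds per output. The sketch below uses made-up coefficients; the CADC's actual air-data functions and 20-bit fixed-point pipeline are not reproduced here.

```python
# Horner's rule: evaluate a(x) = a6*x^6 + ... + a1*x + a0 with one multiply
# and one add per coefficient. Coefficients here are invented for illustration.

def horner(coeffs, x):
    """coeffs is ordered from the highest-degree term down to the constant."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# A hypothetical sixth-order fit of some air-data quantity against a
# normalized sensor reading x in [0, 1].
a = [0.02, -0.11, 0.35, -0.60, 0.88, 1.20, 0.05]   # a6 .. a0 (made up)
print(horner(a, 0.7))
```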
On a bright fall morning at Stanford, Tom Mullaney is telling me what’s wrong with QWERTY keyboards. Mullaney is not a technologist, nor is he one of those Dvorak keyboard enthusiasts. He’s a historian of modern China, and we’re perusing his exhibit of Chinese typewriters and keyboards, the curation of which has led Mullaney to the conclusion that China is pulling ahead technologically while the West falls behind, clinging to its QWERTY keyboard.
Now this was and still is an unusual view because Chinese—with its 75,000 individual characters rather than an alphabet—had historically been the language considered incompatible with modern technology. //
But, Mullaney argues, the invention of the computer could turn China’s enormous catalog of characters into an advantage. //
Typing English on a QWERTY computer keyboard, he says, “is about the most basic rudimentary way you can use a keyboard.” You press the “a” key and “a” appears on your screen. “It doesn't make use of a computer’s processing power and memory and the cheapening thereof.” Type “a” on a QWERTY keyboard hooked up to a Chinese computer, on the other hand, and the computer is off anticipating the next characters. Typing in Chinese requires mediation from a layer of software that is obvious to the user.
In other words, to type a Chinese character is essentially to punch in a set of instructions—a code, if you will—to retrieve a specific character. Mullaney calls Chinese typists “code conscious.” Dozens of ways to input Chinese now exist, but the Western world mostly remains stuck typing letter-by-letter on a computer keyboard, without taking full advantage of software-augmented shortcuts.
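A toy sketch of that code-to-character idea: at its core, a pinyin input method maps a typed code to a list of candidate characters, from which the typist picks one. Real IMEs add frequency and context models on top; the tiny table below is hand-made for illustration.

```python
# Minimal pinyin-style lookup: a typed code retrieves candidate characters.
# The table is a tiny hand-made sample, not a real IME dictionary.
CANDIDATES = {
    "ma":    ["妈", "马", "吗", "麻"],
    "zhong": ["中", "种", "重"],
    "guo":   ["国", "过", "果"],
}

def candidates(code):
    """Return the candidate characters for a typed code."""
    return CANDIDATES.get(code, [])

for code in ("zhong", "guo"):
    print(code, "->", " ".join(candidates(code)))   # the typist picks one
```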
In 2013, Bill Gates admitted ctrl+alt+del was a mistake and blamed IBM. Here’s the story of how the key combination became famous in the first place. //
In the spring of 1981, David Bradley was part of a select team working from a nondescript office building in Boca Raton, Fla. His task: to help build IBM’s new personal computer. Because Apple and RadioShack were already selling small stand-alone computers, the project (code name: Acorn) was a rush job. Instead of the typical three- to five-year turnaround, Acorn had to be completed in a single year. //
In 2001, hundreds of people packed into the San Jose Tech Museum of Innovation to commemorate the 20th anniversary of the IBM PC. In two decades, the company had moved more than 500 million PCs worldwide. After dinner, industry luminaries, including Microsoft chairman Bill Gates, sat down for a panel discussion. But the first question didn’t go to Gates; it went to David Bradley. The programmer, who has always been surprised by how popular those five minutes spent creating ctrl+alt+del made him, was quick to deflect the glory.
“I have to share the credit,” Bradley joked. “I may have invented it, but I think Bill made it famous.”
Turns out that plugging a bunch of computers into our electrical grid that do nothing but draw current and hash through algorithms has had some negative environmental impacts. Recent studies suggest that Bitcoin-related power consumption has reached record highs this year — with more than seven gigawatts of power being pulled in the pursuit of the suspect digital currency. Today’s bitcoin mining operations range from a single user running a dedicated desktop machine to 50,000 state-of-the-art rigs installed in a Kazakhstan warehouse, all with the goal of hashing through the Bitcoin consensus algorithm faster than the competition in order to maximize the number of block rewards received. //
A study from the Cambridge Center for Alternative Finance released on Monday estimates that the global bitcoin mining industry uses 7.46 GW, equivalent to around 63.32 terawatt-hours of annual energy consumption. The study also notes that miners are paying around $0.03 to $0.05 per kWh this year. Given that a March estimate put the cost to mine a full bitcoin at around $7,500, the average miner still stands to make over $4,000 in profit from the operation. //
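A quick sanity check of the conversion, assuming the 63.32 TWh figure is an annualized estimate of continuous draw at 7.46 GW:

```python
# Back-of-the-envelope: continuous draw at 7.46 GW over a year, in TWh.
power_gw = 7.46
hours_per_year = 24 * 365                        # 8,760
energy_twh = power_gw * hours_per_year / 1000    # GW x h = GWh; / 1000 = TWh
print(f"{energy_twh:.1f} TWh/year")              # ~65.3, near the 63.32 cited
```

The small gap between ~65.3 and 63.32 suggests the study assumed slightly less than 100 percent utilization.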
The total amount of processing power dedicated to mining, known as the hashrate, is currently hovering around 120 exahashes per second (EH/s). However, industry analysts argue that figure will soon increase.
“By our assessment, the Bitcoin network can exceed 260 EH/s in hashrate in the next 12–14 months,” according to a July study from BitOoda, “led by a modest increase in available power capacity from 9.6 to 10.6 GW and an upgrade cycle that will replace older-generation S9-class rigs with newer S17 and next-generation S19-class rigs.”
Computer keyboards grew out of calculator key assignments, as both are business tools.
The telephone keypad arrangement was derived from a UI analysis in 1959/60 with 'average' customers - people who at the time were not really in contact with calculators, much less computers. //
In 1959 Bell did a rather large UI study, published in 1960 as "Human Factors Engineering Studies of the Design and Use of Pushbutton Telephone Sets". The goal was to determine a layout that would not only work, but also operate efficiently and be enjoyed by users. //
One might speculate that if calculators had been more widespread in the 1950s (they were specialized and expensive business tools at the time), or if terminals/computers had already made their way into homes, the telephone keypad would also work bottom-up ... but that's fodder for alternate-history novellas.
How times have changed.
Top photo circa 1989 -- typewriters
Bottom photo circa 1998 -- computers
We tested WD Red SMR v CMR drives to see if there was indeed a significant impact with the change. We found SMR can put data at risk 13-16x longer than CMR //
The performance results achieved by the WD Red WD40EFAX surprised me; my only personal experience with SMR drives prior to this point was with Seagate’s Archive line. Based on my time with those drives, I was expecting much poorer results. Instead, individually the WD Red SMR drives are essentially functional. They work aggressively in the background to mitigate their own limitations. The performance of the drive seemed to recover relatively quickly if given even brief periods of inactivity. For single-drive installations, the WD40EFAX will likely function without issue.
However, the WD40EFAX is not a consumer desktop-focused drive. Instead, it is a WD Red drive with NAS branding all over it. When that NAS readiness was put to the test the drive performed spectacularly badly. The RAIDZ results were so poor that, in my mind, they overshadow the otherwise decent performance of the drive. //
The WD40EFAX is demonstrably a worse drive than the CMR-based WD40EFRX, and assuming that you have a choice in your purchase, the CMR drive is the superior product. Given the significant performance and capability differential between the CMR WD Red and the SMR model, they should be different brands or lines rather than just product numbers. In online product catalogs, keeping the same branding means the SMR drive shows up as a “newer model” at many retailers. Many will simply purchase the newer model expecting it to be better, as previous generations have been. That is not a recipe for success.
This applet allows you to design a bell tower with up to eight bells, and ring the bells using change ringing. I didn't know what that was until very recently, so if my explanation of change ringing given below is not very accurate, you know why.
You can edit the pitches of the bells, the sharpness of the hammer striking the bell, and the number of modes. The sounds of bells 1-2(B) are synthesized using physical modeling. Some technical details of the synthesis can be found here. No sounds have to be downloaded over the net to get the applet to play bells 1 and 2(B), but bell 3 uses sampled sounds, which amount to about 130K, compared to about 66K for the applet.
The number of "modes" is the number of vibration modes of each bell that are taken into account. Setting this to a large value gives a complex sound but takes more time to compute. Setting it to a low value makes the sound simpler, but computes faster. If you set it to one, you get sinusoidal sounds.
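For a sense of how that kind of modal synthesis works, here is a minimal sketch: each vibration mode is an exponentially decaying sine partial, and the output is their sum, so one mode gives a plain decaying sinusoid and more modes give a richer bell tone. The frequencies, amplitudes, and decay rates below are invented, not the applet's fitted values.

```python
# Minimal modal synthesis: a bell as a sum of decaying sine partials.
# Mode parameters below are invented, not the applet's fitted values.
import math, struct, wave

RATE = 22050

def bell(modes, seconds=3.0):
    """modes: list of (frequency_hz, amplitude, decay_per_second) tuples."""
    n = int(RATE * seconds)
    out = []
    for i in range(n):
        t = i / RATE
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, a, d in modes)
        out.append(s)
    peak = max(abs(s) for s in out) or 1.0       # normalize to avoid clipping
    return [s / peak for s in out]

# One mode would give a bare decaying sinusoid; three modes sound bell-like.
samples = bell([(440.0, 1.0, 1.5), (1120.0, 0.6, 2.5), (1890.0, 0.3, 4.0)])

with wave.open("bell.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)                            # 16-bit PCM
    w.setframerate(RATE)
    w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```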
You can choose between the bells, called Bell 1, Bell 2, 2B, or "Sample". The original sounds were analyzed and a model based on those sounds was made. I found the original sounds on the web at http://sunsite.unc.edu/pub/multimedia/sun-sounds/sound_effects (Bell 1, towerclock.au) and http://www.angelfire.com/ca/jamtt/sounds.htm (Bell 2(B), bell.wav). You can hear the original by clicking on the words "Bell" in the first sentence of this paragraph.
When the National Aeronautics and Space Administration came into existence in 1958, the stereotypical computer was the "UNIVAC," a collection of spinning tape drives, noisy printers, and featureless boxes, filling a house-sized room. Expensive to purchase and operate, the giant computer needed a small army of technicians in constant attendance to keep it running. Within a decade and a half, NASA had one of the world's largest collections of such monster computers, scattered in each of its centers. Moreover, to the amazement of anyone who knew the computer field in 1958, NASA also flew computers in orbit, to the moon, and to Mars, the latter machines running unattended for months on end.
US Cyber Command has uploaded North Korean malware samples to the VirusTotal aggregation repository, adding to the malware samples it uploaded in February. //
It's interesting to see the US government take a more aggressive stance on foreign malware. Making samples public, so all the antivirus companies can add them to their scanning systems, is a big deal -- and probably required some complicated declassification maneuvering. Me, I like reading the codenames.
Brief: Don’t throw away your old computer just yet. Use a lightweight Linux distro and revive that decades-old system. Why not revive your old computer with Linux? I am going to list the best lightweight Linux distributions that you can use on old computers.
32-bit, PIII or Pentium M/Pro
- Intel quad-core processor J4105 (14 nm) with 4 MiB cache, up to 2.5 GHz (single-thread) or 2.3 GHz (multi-thread)
- Dual-channel DDR4 memory, PC4-19200 (2400 MT/s)
- Total 32 GiB RAM space with two SO-DIMM slots
- 4 x PCIe 2.0 lanes for one M.2 NVMe storage device
- 2 x Gbit Ethernet ports
- 2 x SATA 3.0 ports
- SSE4.2 instruction support (SMM, FPU, NX, MMX, SSE, SSE2, SSE3, SSSE3, SSE4.1, SSE4.2, AES)
- Intel UHD Graphics 600 (Gen9.5 LP GT1), up to 700 MHz
- HDMI 2.0 and DP 1.2 multiple 4K/60 Hz video outputs
- RTC / BIOS backup battery is included
News emerged earlier this week that Western Digital was producing NAS hard drives using SMR technology -- which results in slower performance in some types of applications -- without disclosing that fact to customers in marketing materials or specification sheets. After a bit of continued prodding, storage industry sage Chris Mellor secured statements from both Seagate and Toshiba that confirmed that those companies, too, are selling drives using the slow SMR technology without informing their customers. The latter two even use the tech in hard drives destined for desktop PCs. //
It's important to understand that there are different methods of recording data to a hard drive, and of the productized methods, shingled magnetic recording (SMR) is by far the slowest. //
As such, these drives are mainly intended for write-once-read-many (WORM) applications, like archival and cold data storage, and certainly not as boot drives for mainstream PC users. //
The industry developed SMR to boost hard drive capacity within the same footprint. The tactic revolves around writing data tracks over one another in a 'shingled' arrangement. //
For WD, that consisted of working the SMR models into its WD Red line of drives, but only the lower-capacity 2TB to 6TB models. Slower SMR drives do make some measure of sense in this type of application, provided the NAS is used for bulk data storage. Still, compatibility issues have cropped up in RAID and ZFS applications that users have attributed to the unique performance characteristics of the drives.
Toshiba tells Blocks & Files that it is also selling SMR drives without listing them on spec sheets, but does so within its P300 series of desktop drives. Seagate also disclosed that it uses the tech in four models, including its Desktop HDD 5TB, without advertising that fact. However, Seagate, like the others, does correctly label several of its archival hard drives as using SMR tech, making the lack of disclosure on mainstream models a bit puzzling.