Grace Hopper was a phenomenon. She earned a doctorate in mathematics from Yale, was a professor at Vassar, and left the U.S. Navy with the rank of rear admiral. Her contributions to the field of computing can be judged by the number of foundations and programs that have been created in her memory. //
Driven to create a programming language closer to English than the machine code computers understand, Hopper developed the first compiler. This opened the door to the first compiled languages, such as FLOW-MATIC, and earned her a seat on the Conference/Committee on Data Systems Languages (CODASYL), formed in 1959.
She was also instrumental in the specification and development of the Common Business-Oriented Language (COBOL). The first meeting took place on June 23, 1959, and its report and specification of the COBOL language followed in April 1960.
COBOL contained some groundbreaking concepts. Arguably, the most significant of these was the ability to run on hardware produced by different manufacturers, which was unprecedented at the time.
The language was elaborate and provided a near-English vocabulary for programmers to work with. It was designed to handle huge volumes of data and to perform its arithmetic with exceptional accuracy.
Its vocabulary of reserved words (the words that make up the language) runs close to 400. A programmer strings these reserved words together so they make syntactical sense and create a program.
Any programmer who’s familiar with other languages will tell you 400 is an incredible number of reserved words. For comparison, the C language has 32, and Python has 33. //
As clunky as it might seem today, COBOL was revolutionary when it launched. It found favor within the financial sector, federal government, and major corporations and organizations. This was due to its scalability, batch handling capabilities, and mathematical precision. It was installed in mainframes all over the world, took root, and flourished. Like a stubborn weed, it just won’t die.
Our dependency on systems that still run on COBOL is astonishing. A report from Reuters in 2017 shared the following jaw-dropping statistics:
- There are 220 billion lines of COBOL code still in use today.
- COBOL is the foundation of 43 percent of all banking systems.
- Systems powered by COBOL handle $3 trillion of daily commerce.
- COBOL handles 95 percent of all ATM card-swipes.
- COBOL makes 80 percent of all in-person credit card transactions possible. //
The programmers who know COBOL are either retired, thinking about retiring, or dead. We’re steadily losing the people who have the skills to keep these vital systems up and running. New, younger programmers don’t know COBOL. Most also don’t want to work on systems for which you have to maintain ancient code or write new code.
This is such a problem that Bill Hinshaw, a COBOL veteran, was coaxed out of retirement to found COBOL Cowboys. This private consulting firm caters to desperate corporate clients that can’t find COBOL-savvy coders anywhere. The “youngsters” at COBOL Cowboys (the motto of which is “Not Our First Rodeo”) are in their 50s. They believe 90 percent of Fortune 500 business systems run on COBOL. //
This is a widespread and deeply embedded problem. A 2016 report from the Government Accountability Office listed COBOL systems running on mainframes up to 53 years old. These include systems used to process data for the Department of Veterans Affairs, the Department of Justice, and the Social Security Administration. //
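To get a sense of what the language looks like, here is a minimal COBOL "Hello, World" program: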
IDENTIFICATION DIVISION.
PROGRAM-ID. Hello-World.
DATA DIVISION.
FILE SECTION.
WORKING-STORAGE SECTION.
PROCEDURE DIVISION.
MAIN-PROCEDURE.
DISPLAY "Hello world, from How-To Geek!"
STOP RUN.
END PROGRAM Hello-World.
Folding@Home had settled into a low-profile niche. Then came COVID-19. //
Then, in February 2020, everything changed. Folding@Home went from 30,000 volunteers running the software in February to 400,000 in March, and another 300,000 users came on board after that. There were so many users that the database ran out of potential simulations for them to crunch, and so much data was coming in that the servers were overloaded, said Bowman.
Despite these glitches, F@H zoomed to a peak performance of 1.5 exaFLOPS, making it more than seven times faster than the world's fastest supercomputer, Summit, at the Oak Ridge National Laboratory.
What caused this? For starters, interest in finding a therapy for COVID-19 helped. SETI@Home announcing the end of its project on March 31 also meant tens of thousands of people were looking for something new to run on their PCs. But the big boost came March 13, when Nvidia tweeted out a call to arms. //
With just six servers at Washington University and partner sites at Sloan Kettering and Temple University, so much data was coming back and being written to disk that F@H stopped sending out work units. New servers have since helped them catch up. The surge did, however, put a pause on the F@H team’s main project, a significant rewrite and update of the client app; that work is on hold for now, Bowman said.
This article was originally published in the July 1945 issue of The Atlantic Monthly. It is reproduced here with their permission.
As Director of the Office of Scientific Research and Development, Dr. Vannevar Bush has coördinated the activities of some six thousand leading American scientists in the application of science to warfare. In this significant article he holds up an incentive for scientists when the fighting has ceased. He urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge. For many years inventions have extended man's physical powers rather than the powers of his mind. Trip hammers that multiply the fists, microscopes that sharpen the eye, and engines of destruction and detection are new results, but not the end results, of modern science. Now, says Dr. Bush, instruments are at hand which, if properly developed, will give man access to and command over the inherited knowledge of the ages. The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work. Like Emerson's famous address of 1837 on "The American Scholar," this paper by Dr. Bush calls for a new relationship between thinking man and the sum of our knowledge.
- The Editor
I just saw a Western Digital external hard drive special: a 12TB desktop drive for $187. How do drive vendors do it?
This is a completely free system that offers you flexibility and choice without any long-term commitments. Designed to increase access to online learning provision, the site offers an ever-growing list of course titles covering many topics.
You'll do your learning online so all you need is access to the internet. This means you can learn at a time, place and pace to suit you. //
Your e-mail address is used only to inform you about your training; the costs associated with the delivery of this training site are met by on-page advertising alone. We will not use your e-mail to generate any further income.
How do I take a screenshot?
Android
iOS
Windows
Mac
Chrome OS
Linux
Websites
Clive Robinson • October 14, 2019 5:20 AM
@ ,
With regard to the Wired article, you will find,
As dangerous as their invention sounds for the future of computer security, the Michigan researchers insist that their intention is to prevent such undetectable hardware backdoors, not to enable them. They say it's very possible, in fact, that governments around the world may have already thought of their analog attack method.
Only it's not "governments": it was people on this blog quite some years back. Have a search for @RobertT and "capacitance"; he described some much cleverer variants, with @Nick P and myself.
But also you will find in the article,
"Detecting this with current techniques would be very, very challenging if not impossible," says Todd Austin, one of the computer science professors at the University of Michigan who led the research. "It's a needle in a mountain-sized haystack." Or as Google engineer Yonatan Zunger wrote after reading the paper: "This is the most demonically clever computer security attack I've seen in years."
Actually, it's not that clever when you think about it: any student who has ever played with an NE555 timer as a retriggerable monostable, as used in many circuits, will have used a capacitor as an integrator to trigger a level change in a logic circuit. It's the repurposing of an old idea in a new way that makes them think "It's bleeding obvious... Why didn't I think of that?" It's a sign that the idea has come of age in a broader marketplace.
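For readers who want to see the shape of the trick, here is a minimal, purely illustrative sketch of that capacitor-as-integrator idea: a value that charges a little on each rare trigger, leaks away otherwise, and crosses a threshold only under a deliberate burst. The charge, leak, and threshold numbers are made up for illustration; they are not taken from the Michigan paper.

import itertools

# Illustrative only: a leaky "capacitor" that charges when a rare trigger
# input is seen, leaks otherwise, and we report the cycle at which the
# accumulated charge crosses the threshold (where a comparator would flip
# and enable a payload). Values are invented, not from the paper.

def simulate(inputs, charge_per_trigger=0.2, leak=0.05, threshold=1.0):
    """Return the clock cycle at which the threshold is crossed, or None."""
    charge = 0.0
    for cycle, triggered in enumerate(inputs):
        if triggered:
            charge += charge_per_trigger      # each trigger adds a little charge
        charge = max(0.0, charge - leak)      # charge leaks away every cycle
        if charge >= threshold:
            return cycle                      # comparator would flip here
    return None

# Scattered triggers leak away harmlessly; a rapid burst crosses the threshold.
scattered = [i % 50 == 0 for i in range(500)]
burst = list(itertools.chain([False] * 100, [True] * 8, [False] * 100))
print(simulate(scattered))  # None - never fires
print(simulate(burst))      # fires during the burst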
But Todd Austin is wrong about detecting it; it is actually quite easy to spot, and I've said as much, and described in some detail how to do it, on this blog and in other places some years ago now...
The first thing to keep in mind is that in the French language the same word means both safety and security. Thus the French way of thinking does not separate the two ideas into unrelated domains as much as the English-language way of thinking does[1].
The big problem with computer security is we "build pyramids not boats". Our thinking is skewed to believe that you can only build on secure foundations. It's not true: boats have for millennia got along fine without any foundations, and the water they float on is in no way stable or secure. A modern side view of this was Elon Musk and his landing barge for rockets; at least in his case he could point at aircraft carriers to show he was not mad.
What if we decide not to have our computer design process be one of castles on bedrock, but warships on water? The English Tudor king Henry VIII found he could build a navy, and thus set England on a course to become the world's foremost maritime nation and build an empire that covered the globe.
That is, there are great possibilities in thinking of mobile castles. Leonardo da Vinci drew up designs for such things, but the idea did not really become part of military thinking until WWI, with the invention of the armoured car that became the tank. This again opened up significant possibilities and changed the face of land-based warfare for ever.
Ask yourself: are there ways we could take a mechanism thought of as being for safety and use it for security?
The answer is to look in the area of reliability. Unreliable systems are either "not dependable" or "dependable for a limited time". New York Telephone realised that if you could monitor an unreliable system, detect when it was going wrong, and switch it out rapidly for a working system, then you could keep a circuit in operation whilst you replaced the defective component. Thus the idea of fault-tolerant systems began to be used.
The problem was detecting when a unit was starting to fail; eventually this gave rise to the idea of "voting systems", which NASA did not invent but certainly made famous.
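As a rough sketch of that voting idea, here is a minimal triple-modular-redundancy arrangement: three replicas compute the same answer, the majority wins, and any unit that disagrees is flagged for swap-out. The function names are invented for illustration.

from collections import Counter

def vote(replicas, *args):
    """Run redundant replicas, return the majority answer plus the indices of
    any replicas that disagreed (candidates for being switched out)."""
    results = [r(*args) for r in replicas]
    majority, _ = Counter(results).most_common(1)[0]
    suspects = [i for i, r in enumerate(results) if r != majority]
    return majority, suspects

# Two healthy units and one faulty (or subverted) one.
def healthy(x):
    return x * x

def faulty(x):
    return x * x + 1   # stands in for a failing or malicious unit

answer, suspects = vote([healthy, healthy, faulty], 12)
print(answer)    # 144 - the majority answer is still correct
print(suspects)  # [2] - the disagreeing unit can be switched out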
Some years ago now I realised that redundant fault-tolerant systems were in fact "boats" from the security aspect, and that "fault" also covered malware. That is, an idea for safety works just as well for security, to which some might rightly say "but of course, why would you think not".
It became a small but essential part of my "Castles-v-Prisons" idea which you can search for on this blog to find conversations about it.
The problem thus has a known solution...
Thus the question now is who takes on the Sisyphean task of pushing the idea over the group think mental entropy hump?
As I've noted over the years, a great many ideas are discussed on this blog and solutions posed several years prior to both industry and academia even realising they should be looking at them. As for governments, you hear that squeaky noise way, way behind? That's the wheel they are too busy greasing with pork fat rather than replacing. Because they are still doing things the way their Grandpapy did, because in their conservative view "What was good enough for Grandpa, is good enough for me" (mind you, Grandpapy was pretty quick at grabbing brown envelopes behind his back ;-)
[1] A point I've made before is that the primary language we learn when very young, before we are two, forms the way we think. There is evidence of this with "tone deafness" and language, where languages such as some Asian ones depend on pitch to convey information. Speakers of such languages are considerably more likely to be "pitch perfect" across the population. It's why I think the fact that the number of native languages is decreasing is actually harming the world, by reducing the number of different ways people see and think about the world.
From the archives: IBM doesn't make consumer desktop OSes anymore for a reason.
While the technical definitions for computer virus, worm, and malware might have a little overlap, it’s generally accepted that the first type of computer “virus” occurred in 1971 on ARPANET, the scientific/military network that preceded the modern internet. Creeper was an experimental self-replicating program that infected DEC computers across the network.
Written by Bob Thomas at BBN Technologies, Creeper propagated itself throughout ARPANET by exploiting a vulnerability in DEC PDP-10 computers running the TENEX operating system. The worm wasn’t malicious and, upon gaining access to a machine and replicating itself, broadcast “I’m the creeper, catch me if you can!” on the terminal screen. The first virus removal program, dubbed The Reaper, soon followed, designed to ferret out Creeper infections and tidy up.
From what I've seen, m$ is far more heavy handed than IBM ever was,
and DEC never came close to either.
Have to agree, but I think both IBM and MSFT build unnecessary
complexity into their products to forestall efforts of competitors to
duplicate their products.
On this topic I was intrigued by the new opcodes IBM introduced in
1978.
Before then, every 4-byte instruction had the form xxxxBDDD
and every 6-byte instruction had the form xxxxBDDDBDDD
where BDDD had a consistent interpretation.
The MVS/SE instructions introduced in 1978 deviated from that
format, for no particularly good reason. At the time I wondered
if that was deliberate, hoping that such a redefinition would be
tedious and expensive for 370-compatible manufacturers, like
Amdahl, to adapt to.
Was it?
James Dow Allen
In the 1960s, Bob Taylor worked at the heart of the Pentagon in Washington DC. He was on the third floor, near the US defence secretary and the boss of the Advanced Research Projects Agency (Arpa).
Arpa had been founded early in 1958 but was quickly eclipsed by Nasa, leading Aviation Week magazine to dismiss it as "a dead cat hanging in the fruit closet".
Nevertheless, Arpa muddled on - and in 1966, Taylor and Arpa were about to plant the seed of something big.
Next to his office was the terminal room, a pokey little space where three remote-access terminals with three different keyboards sat side by side.
Each allowed Taylor to issue commands to a far-away mainframe computer.
One was based at Massachusetts Institute of Technology (MIT), more than 700km (450 miles) up the coast.
The other two were on the other side of the country - one at the University of California, and the other the Strategic Air Command mainframe in Santa Monica, called the AN/FSQ32XD1A, or Q-32 for short.
Each of these massive computers required a different login procedure and programming language.
The next step was obvious, Taylor said. "We ought to find a way to connect all these different machines."
Taylor talked to Arpa's boss, Charles Herzfeld, about his goal.
"We already know how to do it," he said, although it was not clear that anyone really did know how to connect together a nationwide network of mainframe computers.
"Great idea," said Herzfeld. "Get it going. You've got $1m more in your budget right now. Go."
The meeting had taken 20 minutes.
Taylor, Roberts and their fellow networking visionaries had something much more ambitious in mind - a network to which any computer could connect.
As Roberts put it at the time, "almost every conceivable item of computer hardware and software will be in the network".
The solution was proposed by another computing pioneer, physicist Wesley Clark.
Clark had been following the emergence of a new breed of computer.
The minicomputer was modest and inexpensive compared with the room-sized mainframes installed in universities across the United States.
Clark suggested installing a minicomputer at every site on this new network.
The local mainframe - the hulking Q-32, for example - would talk to the minicomputer sitting close beside it.
The minicomputer would then take responsibility for talking to all the other minicomputers on the network - and for the new-and-interesting problem of moving packets of data reliably around the network until they reached their destination.
All the minicomputers would run in the same way - and if you wrote a networking program for one of them, it would work on them all.
Adam Smith, the father of economics, would have been proud of the way Clark was taking advantage of specialisation and the division of labour - perhaps his defining idea.
The existing mainframes would keep on doing what they already did well.
The new minicomputers would be optimised to reliably handle the networking without breaking down.
Each local mainframe had to be programmed merely to talk to the little black box beside it - the local minicomputer.
If you could do that, you could talk to the entire network that stood behind it.
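A toy sketch of that arrangement, with invented names, might look like the following: each mainframe implements only one small "hand this to my local IMP" call, and the IMP layer worries about getting the message to the right site.

# Illustrative sketch (names are invented): each mainframe only knows how to
# hand a message to its local IMP; the IMPs handle getting it across the network.

class IMP:
    """A site's Interface Message Processor: forwards packets between sites."""
    network = {}                      # site name -> IMP, shared by all IMPs

    def __init__(self, site):
        self.site = site
        self.inbox = []
        IMP.network[site] = self

    def send(self, dest, payload):
        # Real IMPs routed packets hop by hop; here we simply deliver directly.
        IMP.network[dest].inbox.append((self.site, payload))

class Mainframe:
    """Any mainframe, regardless of vendor, only talks to its local IMP."""
    def __init__(self, name, imp):
        self.name, self.imp = name, imp

    def send(self, dest, text):
        self.imp.send(dest, text)

q32 = Mainframe("Q-32", IMP("Santa Monica"))
mit = Mainframe("MIT", IMP("Cambridge"))
q32.send("Cambridge", "LOGIN")
print(IMP.network["Cambridge"].inbox)  # [('Santa Monica', 'LOGIN')]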
The little black boxes were actually large and battleship grey.
They were called Interface Message Processors (IMPs).
The IMPs were customised versions of Honeywell minicomputers, which were the size of refrigerators and weighed more than 400kg (63 stone) apiece.
They cost $80,000 each, more than $500,000 (£405,000) in today's money.
The network designers wanted message processors that would sit quietly, with minimal supervision, and just keep on working, come heat or cold, vibration or power surge, mildew, mice, or - most dangerous of all - curious graduate students with screwdrivers.
Military-grade Honeywell computers seemed like the ideal starting point, although their armour plating may have been overkill.
On 29 October 1969, two mainframe computers exchanged their first word through their companion IMPs.
It was, somewhat biblically: "Lo".
The operator had been trying to type: "Login" and the network had collapsed after two letters.
A stuttering start - but the Arpanet had been switched on.
The Lines of Code That Changed Everything
Apollo 11, the JPEG, the first pop-up ad, and 33 other bits of software that have transformed our world.
Oct 14, 2019, 8:00 PM
Back in 2009, Facebook launched a world-changing piece of code—the “like” button. “Like” was the brainchild of several programmers and designers, including Leah Pearlman and Justin Rosenstein. They’d hypothesized that Facebook users were often too busy to leave comments on their friends’ posts—but if there were a simple button to push, boom: It would unlock a ton of uplifting affirmations. “Friends could validate each other with that much more frequency and ease,” as Pearlman later said.
It worked—maybe a little too well. By making “like” a frictionless gesture, by 2012 we’d mashed it more than 1 trillion times, and it really did unlock a flood of validation. But it had unsettling side effects, too. We’d post a photo, then sit there refreshing the page anxiously, waiting for the “likes” to increase. We’d wonder why someone else was getting more likes. So we began amping up the voltage in our daily online behavior: trying to be funnier, more caustic, more glamorous, more extreme.
Code shapes our lives. As the venture capitalist Marc Andreessen has written, “software is eating the world,” though at this point it’s probably more accurate to say software is digesting it.
Culturally, code exists in a nether zone. We can feel its gnostic effects on our everyday reality, but we rarely see it, and it’s quite inscrutable to non-initiates. (The folks in Silicon Valley like it that way; it helps them self-mythologize as wizards.) We construct top-10 lists for movies, games, TV—pieces of work that shape our souls. But we don’t sit around compiling lists of the world’s most consequential bits of code, even though they arguably inform the zeitgeist just as much.
So Slate decided to do precisely that. To shed light on the software that has tilted the world on its axis, the editors polled computer scientists, software developers, historians, policymakers, and journalists. They were asked to pick: Which pieces of code had a huge influence? Which ones warped our lives? About 75 responded with all sorts of ideas, and Slate has selected 36. It’s not a comprehensive list—it couldn’t be, given the massive welter of influential code that’s been written. (One fave of mine that didn’t make the cut: “Quicksort”! Or maybe Ada Lovelace’s Bernoulli algorithm.) Like all lists, it’s meant to provoke thought—to help us ponder anew how code undergirds our lives and how decisions made by programmers ripple into the future.
There’s code you’ve probably heard of, like HTML. Other code is powerful (like Monte Carlo simulations, which are used to model probabilities) but totally foreign to civilians. Some contain deadly mistakes, like the flaw in the Boeing 737 Max. And some are flat-out creepy, like the tracking pixel that lets marketers know whether you’ve opened an email.
One clear trend illustrated here: The most consequential code often creates new behaviors by removing friction. When software makes it easier to do something, we do more of it. The 1988 code that first created “Internet Relay Chat” allowed the denizens of the early internet to text-chat with one another in real time. Now real-time text is everywhere, from eye-glazingly infinite workplace Slack confabs to the riot of trolling and countertrolling in a Twitch livestream.
It’s not always clear at first when some code will become epoch-defining. Oftentimes it starts off as a weird experiment, a trial balloon. Back in 1961, Spacewar!, the first virally popular video game, might have seemed a pretty frivolous way to use a cabinet-size computer that cost, at the time, $120,000. (That’s more than $1 million in 2019 dollars.) But it pioneered many of the concepts that helped computers go mainstream: representing data as icons, allowing users to manipulate those icons with handheld controllers.
Code’s effects can surprise everyone, including the coders. —Clive Thompson, author of Coders: The Making of a New Tribe and the Remaking of the World
Binary Punch Cards
Date: 1725
The first code
Binary programming long predates what we think of as computers. Basile Bouchon is believed to be the first person to punch holes into paper and use it to control a machine: In 1725, he invented a loom that wove its patterns based on the instructions provided in the perforated paper it was fed. A punched hole is the “one,” and the absence of a punched hole is the “zero.” As much as things have changed since then, the essential building block of code has not. —Elena Botella, Slate
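A tiny sketch of that encoding, with the card pattern invented for illustration: a hole reads as a one, blank paper as a zero, and a row of holes becomes a number.

# Illustrative only: read a row of a punched card as bits,
# where a hole ('o') is a 1 and blank paper ('.') is a 0.

def row_to_bits(row):
    return [1 if cell == 'o' else 0 for cell in row]

def bits_to_int(bits):
    value = 0
    for bit in bits:
        value = value * 2 + bit   # most significant hole first
    return value

row = "o.o..o"                        # holes and blanks punched across the paper
print(row_to_bits(row))               # [1, 0, 1, 0, 0, 1]
print(bits_to_int(row_to_bits(row)))  # 41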
The First Modern Code Executed
Date: 1948
Ushered in both the use of computer code and the computer models of nuclear devastation that shaped the Cold War arms race
The Electronic Numerical Integrator and Computer was the first programmable electronic computer. Completed in 1945, it was configured for each new problem by wiring connections between its many components. When one task, such as an addition, finished, a pulse triggered the next. But a few years later, Klára Dán von Neumann and Los Alamos scientist Nicholas Metropolis wired ENIAC to run the first modern code ever executed on any computer: hundreds of numerical instructions executed from an addressable read-only memory (ENIAC’s function table switches). They simulated the explosion of several atomic bomb designs being evaluated at Los Alamos National Lab in New Mexico, using the Monte Carlo technique by which a complex system is simulated, step by virtual step, to repeatedly map the probability distribution of possible outcomes. Von Neumann and Metropolis sent more than 20,000 cards back to the nuclear scientists at Los Alamos, tracing the progress of simulated neutrons through detonating warheads. The distant descendants of this code are still in use at Los Alamos today. —Thomas Haigh, co-author of ENIAC in Action: Making and Remaking the Modern Computer
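To make the Monte Carlo idea concrete, here is a toy sketch in Python, nothing like the actual ENIAC code: it follows many simulated "neutrons" step by step, with made-up absorption and escape probabilities, and tallies the distribution of outcomes.

import random

# A toy Monte Carlo simulation in the spirit described above (not the ENIAC code):
# follow many simulated "neutrons" step by step and tally what happens to them.

def simulate_neutron(absorb_p=0.1, escape_p=0.05, max_steps=100):
    """Random-walk one neutron until it is absorbed, escapes, or we stop tracking it."""
    for _ in range(max_steps):
        r = random.random()
        if r < absorb_p:
            return "absorbed"
        if r < absorb_p + escape_p:
            return "escaped"
    return "still going"

def monte_carlo(trials=100_000):
    counts = {"absorbed": 0, "escaped": 0, "still going": 0}
    for _ in range(trials):
        counts[simulate_neutron()] += 1
    # Turn raw counts into an estimated probability distribution of outcomes.
    return {k: v / trials for k, v in counts.items()}

print(monte_carlo())  # roughly {'absorbed': ~0.67, 'escaped': ~0.33, 'still going': ~0}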
Grace Hopper’s Compiler
Date: 1952
Made it possible for computers to process words
IF END OF DATA GO TO OPERATION 14 .
Wikipedia
Grace Hopper was programming an early computer when she decided to make the whole thing easier by rooting it in human language. Hopper, who enlisted in the US Naval Reserve during World War II, knew that people like her superiors in the military struggled to understand binary code. If programming languages could be English-based, the work would be less prone to errors and more accessible to those who didn’t have a Ph.D. in mathematics.
Some scoffed at the idea, but by the early 1950s she had devised a compiler—a set of instructions that converts a more intelligible kind of code to the lower-level code directly processed by the machine. With that tool, she and her lab developed FLOW-MATIC, the first programming language to incorporate English words based on that process. —Molly Olmstead, Slate
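As a toy illustration of what a compiler does (this is not Hopper's A-0 or FLOW-MATIC, just a sketch of the idea), the snippet below translates one English-like statement, of the kind shown above, into lower-level pseudo-instructions a simple machine could execute.

# A toy "compiler" in the spirit of the idea (nothing like Hopper's actual A-0
# or FLOW-MATIC): translate one English-like statement into low-level
# pseudo-instructions a simple machine could execute.

def compile_statement(source):
    tokens = source.replace(".", "").split()
    # Handle statements of the form: IF END OF DATA GO TO OPERATION <n>
    if tokens[:4] == ["IF", "END", "OF", "DATA"] and tokens[4:7] == ["GO", "TO", "OPERATION"]:
        target = int(tokens[7])
        return [
            ("TEST_END_OF_DATA",),     # set a condition flag if input is exhausted
            ("JUMP_IF_TRUE", target),  # branch to the numbered operation
        ]
    raise SyntaxError(f"don't know how to compile: {source!r}")

print(compile_statement("IF END OF DATA GO TO OPERATION 14 ."))
# [('TEST_END_OF_DATA',), ('JUMP_IF_TRUE', 14)]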
Spacewar!
Date: 1961
The first distributed video game
/ this routine handles a non-colliding ship invisibly
/ in hyperspace
hp1, dap hp2
count i ma1, hp2
law hp3 / next step
dac i ml1
law 7
dac i mb1
random
scr 9s
sir 9s
xct hr1
add i mx1
dac i mx1
swap
add i my1
dac i my1
random
scr 9s
sir 9s
xct hr2
dac i mdy
dio i mdx
setup .hpt,3
lac ran
dac i mth
hp4, lac i mth
sma
sub (311040
spa
add (311040
dac i mth
count .hpt,hp4
xct hd2
dac i ma1
hp2, jmp .
Steve Russell via Bitsavers.org
In late 1961 a group of young MIT employees, students, and associates (many of them members of the Tech Model Railroad Club) gained late-night access to a recently donated DEC PDP-1 computer. The leading edge of nonmilitary computing, the PDP-1 sold for $120,000 (that would be a bit more than $1 million today), featured 18-bit word length, and used paper tape for program storage. Over the course of five months, these programmers created a game in which two players control spaceships—the needle and the wedge—that engage in a one-on-one space battle while avoiding the gravity well of a star at center screen.
Spacewar! spread quickly across the early “hacker” community. It was later distributed by DEC with each PDP-1, preloaded in the core memory and ready to demonstrate when installed. The program significantly influenced the small coding community of the 1960s and inspired generations of video game creators. It lives on in emulations and is demonstrated regularly at the Computer History Museum on the last operational PDP-1. Steve Russell, the lead coder, said at a 2018 Smithsonian panel, “It’s more than 50 years old. There are no outstanding user complaints. There are no crash reports. And support is still available.”
Unix co-creator Dennis Ritchie, for instance, used "dmac" (his middle name was MacAlistair); Stephen R. Bourne, creator of the Bourne shell command line interpreter, chose "bourne"; Eric Schmidt, an early developer of Unix software and now the executive chairman of Google parent company Alphabet, relied on "wendy!!!" (the name of his wife); and Stuart Feldman, author of Unix automation tool make and the first Fortran compiler, used "axolotl" (the name of a Mexican salamander).
Weakest of all was the password for Unix contributor Brian W. Kernighan: "/.,/.," representing a three-character string repeated twice using adjacent keys on a QWERTY keyboard. (None of the passwords included the quotation marks.)
I don't remember any of my early passwords, but they probably weren't much better. //
Magnus • October 15, 2019 5:31 PM
"I would love to learn what Donald E. Knuth's passwords used to look like."
Knuth just closes his eyes and concentrates and the computer logs him in.
The computer needs a password to log in to Knuth.
Digital Literacy — Opening Doors to the Future
Gain valuable skills that prepare you for problem-solving in a digital world. Do you need a way to demonstrate basic computer and digital literacy skills to employers? Completing the Northstar Digital Literacy Assessments can help you identify areas in which you need further education. Once you have mastered the needed skills, you can obtain a Northstar Digital Literacy Certificate by successfully completing the assessments at an approved testing location in a proctored environment. You can also claim a digital badge to put in your Digital Backpack. Once you pass Northstar, which certifies basic skills, you may choose to pursue more advanced training and certifications.
GET CREATIVE
GET CONNECTED
GET CODING
BBC micro:bit is a tiny programmable computer, designed to make learning and teaching easy and fun!
USB stands for Universal Serial Bus and ever since its formation, the USB Implementers Forum has been working hard on the “Universal” part of the equation. USB Type-C, which is commonl… //
These confusing cables and ports present a poor user experience. Just because USB-C could do it all doesn’t mean a particular USB-C port would. Does it supply power? Does it accept power? Does it carry video? There’s no way to tell just by looking. It gets worse as we move away from mainstream devices. For example, Google’s Coral development board has two USB Type-C ports. One is used to supply power, the other communicates USB data, and the only way to tell which is which is to look at PCB silkscreen. Compare this to a barrel jack or other legacy power cord. They certainly weren’t universal, but users didn’t confuse them with data connectors like USB, Firewire, or Ethernet.
Most astronauts were pilots before being recruited into the space program, but their piloting skills might not have cut it in space, where maneuvers had to be so precise that one small slip could mean crashing into the moon or spinning out into the void with no way to get home. The MIT Instrumentation Lab was selected by NASA to develop the guidance, navigation, and control system for Apollo—the first completely digital fly-by-wire system.
A digital fly-by-wire system meant that a computer controlled all aspects of the spacecraft. In the past, pilots used an analog system, a combination of pulleys and levers and cables to manually manipulate the components of an airplane, but fly-by-wire got rid of all the redundant, clunky parts. Basically, the pilot uses a small joystick—also known as a "pickle-stick”—to fly the craft. The stick movements are translated into electronic signals and transmitted by wires to the flight control computer which then tells the aircraft what to do. Before digital fly-by-wire, astronauts for the Mercury and Gemini programs had complete control over their ship, but now they were expected to put their faith in a digital computer, something they weren't comfortable with. But after years of testing and training, the fly-by-wire system proved itself and they learned to trust the computer. Apollo 8 became the first manned space mission to test the digital fly-by-wire system, going around the moon and back. Without it, Neil Armstrong, would never have landed on the moon.
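A bare-bones sketch of that loop, with invented gains rather than Apollo's actual control laws, might look like this: the stick produces an electrical signal, and the flight computer decides what command the actuators receive.

# A minimal fly-by-wire sketch (illustrative gains, not Apollo's real control
# laws): the stick produces an electrical signal, and a flight computer turns
# it into an actuator command instead of a direct mechanical linkage.

def flight_computer(stick_deflection, pitch_rate, gain=2.0, damping=0.5, limit=10.0):
    """Turn a stick signal (-1..1) and the measured pitch rate into an
    elevator command in degrees, with damping and a safety limit."""
    command = gain * stick_deflection - damping * pitch_rate
    return max(-limit, min(limit, command))   # never exceed structural limits

# The pilot pulls back gently while the craft is already pitching up slightly.
print(flight_computer(stick_deflection=0.4, pitch_rate=0.3))  # 0.65 degrees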
After the success of Apollo 11, Neil Armstrong worked for NASA as the Associate Administrator for Aeronautics. He was asked by the Air Force to help with its efforts to update old-school military jets that still used analog fly-by-wire flight systems. Armstrong suggested they use the same system he used on Apollo 11. That idea hadn't even occurred to them. While some engineers were wary of putting their lives in the hands of a computer, just like past astronauts, Armstrong said, “I just went to the moon with one.” The Air Force eventually agreed, and MIT was brought on to modify an F-8 fighter jet, which, in 1972, became the first aircraft to use a digital fly-by-wire system. Its success paved the way for military and commercial planes to be outfitted with the same revolutionary system, and digital fly-by-wire is now standard on modern aircraft, thanks to Neil Armstrong and the Apollo legacy.
They needed a way to hardwire their computer programs so they could not be erased during a loss of power. The system they devised was called rope memory, with software being carefully woven through wire ropes to create physical distinctions between "1s" and "0s" in the binary computer code.
"Informally, the programs were called ‘ropes’ because of the durable form of read-only memory into which they were transformed for flight, which resembled a rope of woven copper wire,” said MIT engineer Don Eyles. “For the lunar missions, 36K words of ‘fixed’ (read-only) memory, each word consisting of 15 bits plus a parity bit, were available for the program.”
These tiny ropes allowed NASA to store an insane amount of data needed for basic flight procedures without taking up too much room on the already packed ship. The process of weaving the software into the ropes was so tedious and slow that it could easily take months to create just one program.
Eyles says that with core rope memory, plus the Apollo’s on-board RAM (erasable) memory, NASA landed the lunar module on the moon with just about 152 kilobytes of memory, running at speeds of 0.043 megahertz.
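Here is a small illustrative model of the idea, with the layout and parity scheme chosen for the example: a bit is a one if its sense wire was woven through a core and a zero if it bypasses it, and each 15-bit word carries the extra parity bit Eyles mentions (odd parity is assumed here).

# Illustrative model of core rope memory (layout invented for the example):
# a word's bit is 1 if its sense wire was woven *through* a given core,
# 0 if the wire bypasses it. Each 15-bit word carries a parity bit
# (odd parity assumed for this sketch).

def weave_word(value):
    """Encode a 15-bit value as its woven bit pattern plus a parity bit."""
    bits = [(value >> i) & 1 for i in range(15)]
    parity = (sum(bits) + 1) % 2          # make the total number of 1s odd
    return bits + [parity]

def read_word(woven):
    bits, parity = woven[:15], woven[15]
    if (sum(bits) + parity) % 2 != 1:
        raise ValueError("parity error: mis-woven or mis-read word")
    return sum(bit << i for i, bit in enumerate(bits))

woven = weave_word(0o31234)               # the pattern is fixed at manufacture time
print(oct(read_word(woven)))              # 0o31234 - read back with parity checked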
On-board flight software for the manned missions was developed for both the Command and Lunar Module computers. In addition to the operating system, the AGC had both an assembly language and a sophisticated software interpreter, developed at the MIT lab, that could handle more complex ‘pseudo’ instructions than the AGC’s native hardware could. These instructions could simplify navigation programs and handle complex navigation equations in the background, so as not to overwhelm the AGC’s power and memory capabilities.
The famous 1201 and 1202 priority alarms that interrupted and replaced the astronauts' normal displays with the Priority Displays during the Apollo 11 lunar descent signaled executive overflow: the rendezvous radar had been left on during the landing sequence and was stealing precious “cycles” from the AGC. This is in fact exactly what the computer and the software were meant to do. The MIT IL team (led by Margaret Hamilton) had intentionally designed the software with a priority scheduling capability that could identify the most important commands, allow those to run without interruption, and push less important commands to the side.
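A tiny sketch of that scheduling idea, not the real AGC Executive, might look like this: when there is only time for a few jobs per cycle, the highest-priority jobs run and the rest are shed, with an overflow alarm raised. The job names and priority numbers are invented.

import heapq

# A tiny sketch of the priority-scheduling idea described above (not the real
# AGC Executive): when there is only time for a few jobs per cycle, run the
# highest-priority ones and shed the rest, raising an alarm on overflow.

def run_cycle(jobs, slots=3):
    """jobs: list of (priority, name); lower number = more important."""
    heapq.heapify(jobs)
    ran = [heapq.heappop(jobs)[1] for _ in range(min(slots, len(jobs)))]
    if jobs:                                   # work left over: executive overflow
        print("ALARM: overflow, dropping", [name for _, name in jobs])
    return ran

cycle = [(1, "landing guidance"), (2, "displays"), (1, "attitude control"),
         (9, "rendezvous radar"), (9, "radar again")]
print(run_cycle(cycle))  # the critical landing jobs still run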
The story of the Apollo 11 landing and the Priority Displays was one of error detection and recovery in real time. It was about the astronauts, mission control, the software, and the hardware, and how they all worked together during an emergency as an integrated system of systems. It was about creating new man-machine and software-engineering concepts to do things never done before. Unlike a system where the software (or hardware) might "know" of a serious problem without the pilot's knowing it, the Priority Displays were able to determine right away whether a particular alarm fell within the category of an "emergency alarm," and they let the astronauts know about it too.
No known software errors ever occurred during any of the Apollo missions. The AGC software influenced the design of systems and software for future spacecraft including Skylab, the Space Shuttle and digital fly-by-wire aircraft systems.
How did a prototype keyboard earn its wings?
NASA and the engineers at the MIT Instrumentation Lab were tasked with creating a guidance computer that would help guide the spacecraft to the moon and back. They decided to go with a completely digital system—something that had never been done before. The Apollo Guidance Computer (AGC) became the central computer for the Apollo missions. But astronauts still needed an interface, something sturdy enough to withstand the rigors of space travel and simple enough for the astronauts to understand.
This was the birth of the DSKY, a leap forward in computer science. Standing for display/keyboard and pronounced “diskey,” the world’s first computer keyboard was developed by Ramon Alonso and his team. The DSKY “was simply a keyboard you find on any computer.” It had a digital display with big buttons and communicated with the AGC via a verb-noun interface. Software engineer Alan Green and his team developed the program that would support the astronauts’ communications with the computer. Astronauts would punch in the numbers for the action they wanted to take and the program they wanted to affect. This interaction “took the form of a grammatical conversation,” easy enough to use for people in the ’60s who had never seen a computer before.
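A toy sketch of that verb-noun style of interaction might look like the following; the codes and handlers are invented for illustration and are not the actual Apollo verb and noun tables.

# Toy sketch of the verb-noun interaction style described above. The codes and
# labels here are invented for illustration; they are not the real Apollo tables.

VERBS = {16: "monitor", 37: "run program"}
NOUNS = {36: "mission clock", 68: "landing data"}

def dsky_entry(verb, noun):
    """Interpret a VERB/NOUN key sequence, the way an operator would punch it in."""
    action = VERBS.get(verb)
    target = NOUNS.get(noun)
    if action is None or target is None:
        return "OPR ERR"          # the DSKY's way of flagging a bad entry
    return f"{action}: {target}"

print(dsky_entry(16, 36))  # monitor: mission clock
print(dsky_entry(99, 36))  # OPR ERR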
Though it eventually became the interface for the Apollo missions, the DSKY was just a prototype. Alonso and his team never expected it to stick. “And a funny thing began to happen,” Alonso said in an interview, “as we demonstrated ‘Fire Rocket,’ or ‘Display Time,’ or ‘Align Platform,’ some of the big shots would ask, ‘this Verb and Noun, is it going to stay, and fly to the moon?’” Some remarked that it wasn’t scientific or mathematical enough. Despite the odds stacked against it, the DSKY proved to be a reliable tool and contributed to every manned mission to the moon. From humble linguistic origins to a memorial in the stars, the DSKY is responsible for the success of the Apollo missions.