The Save 418 Movement
We are the teapots.
Status Code 418 states that
Any attempt to brew coffee with a teapot should result in the error code "418 I'm a teapot". The resulting entity body MAY be short and stout.
-- See RFC 2324, Section 2.3.2
Go to Google.com/teapot, and see for yourself.
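The RFC's rule is small enough to sketch as a single dispatch function. A minimal Python sketch, using nothing beyond the language itself; the function name and the set of handled methods are invented for illustration:

```python
# Minimal sketch of RFC 2324's rule: attempting to brew coffee with a
# teapot yields 418. The function name and handled methods are
# illustrative, not from any real server.

def brew_response(method: str) -> tuple[int, str]:
    """Return (status code, reason phrase) for a request sent to a teapot."""
    if method == "BREW":  # the coffee-brewing method defined by HTCPCP
        return 418, "I'm a teapot"
    return 405, "Method Not Allowed"
```

Google's easter egg at google.com/teapot answers with exactly this status line.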
First 500 characters of the BNT162b2 mRNA. Source: World Health Organization
The BNT162b2 mRNA vaccine has this digital code at its heart. It is 4284 characters long, so it would fit in a bunch of tweets. At the very beginning of the vaccine production process, someone uploaded this code to a DNA printer (yes), which then converted the bytes on disk to actual DNA molecules.
Out of such a machine come tiny amounts of DNA, which after a lot of biological and chemical processing end up as RNA (more about which later) in the vaccine vial. A 30 microgram dose turns out to actually contain 30 micrograms of RNA. In addition, there is a clever lipid (fatty) packaging system that gets the mRNA into our cells.
RNA is the volatile ‘working memory’ version of DNA. DNA is like the flash drive storage of biology. DNA is very durable, internally redundant and very reliable. But much like computers do not execute code directly from a flash drive, before something happens, code gets copied to a faster, more versatile yet far more fragile system.
For computers, this is RAM, for biology it is RNA. The resemblance is striking. Unlike flash memory, RAM degrades very quickly unless lovingly tended to. The reason the Pfizer/BioNTech mRNA vaccine must be stored in the deepest of deep freezers is the same: RNA is a fragile flower.
Each RNA character weighs on the order of 0.53·10⁻²¹ grams, meaning there are around 6·10¹⁶ characters in a single 30 microgram vaccine dose. Expressed in bytes, this is around 14 petabytes, although it must be said this consists of around 13,000 billion repetitions of the same 4284 characters. The actual informational content of the vaccine is just over a kilobyte. SARS-CoV-2 itself weighs in at around 7.5 kilobytes.
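The figures above follow from simple arithmetic. A back-of-envelope check in Python, taking the per-character mass as given:

```python
# Back-of-envelope check of the figures above, using the quoted
# per-character mass. Each RNA character is one of 4 bases = 2 bits.

GRAMS_PER_CHAR = 0.53e-21   # approximate mass of one RNA nucleotide
DOSE_GRAMS = 30e-6          # a 30 microgram dose
CODE_LENGTH = 4284          # characters in the BNT162b2 sequence

chars_per_dose = DOSE_GRAMS / GRAMS_PER_CHAR   # ~6e16 characters
petabytes = chars_per_dose / 4 / 1e15          # 2 bits/char -> ~14 PB
copies = chars_per_dose / CODE_LENGTH          # ~13,000 billion repetitions
info_bytes = CODE_LENGTH / 4                   # ~1 KB of actual information
```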
In May 2020, Microsoft surprised everyone (including me) by releasing the source code to GW-BASIC. Rich Turner (Microsoft) wrote in the announcement on the Microsoft Developer Blog:
Since re-open-sourcing MS-DOS 1.25 & 2.0 on GitHub last year, we’ve received numerous requests to also open-source Microsoft BASIC. Well, here we are! As clearly stated in the repo's readme, these sources are the 8088 assembly language sources from 10th Feb 1983 and are being open-sourced for historical reference and educational purposes. This means we will not be accepting PRs (Pull Requests) that modify the source in any way.
You can find the GW-BASIC source code release at the GW-BASIC GitHub. And yes, Microsoft used the MIT License, which makes this open source software.
https://devblogs.microsoft.com/commandline/microsoft-open-sources-gw-basic/
Some languages are better than others at preventing graduate programmers from adopting bad habits – C# and Java, for example. But Lavenne says he discourages developers from using other languages entirely. "I don't want an engineer writing in C or C++, and the reason is it's too dangerous of a language," he explains.
"There are so many potential errors they can create that modern languages like C#, TypeScript, Java, Python can prevent… We don't want them writing in those languages at all." //
Hands-on experience will always be a deciding factor – though Lavenne acknowledges that the majority of students will be lacking this by default. Instead, he suggests university courses and coding programs encourage as much project work as possible – which will at least equip them with a working knowledge of the various components of the software development cycle.
There are also a handful of specific tools and technologies that Lavenne feels that every aspiring developer should have under their belt.
"Putting an emphasis on JavaScript and TypeScript is important; Node.js is a moving force of the world right now in web technologies and others. People have to start learning TypeScript in school," he says. //
On the skill sets that are super marketable: "The technologies that are very marketable today are web and APIs. Every single software engineer that comes out on the market will work with APIs – they have to speak APIs, they have to speak JSON. XML is fading out into the distance; the world is speaking JSON from computer to computer, and REST APIs are everything."
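For the curious, "speaking JSON from computer to computer" boils down to serializing on one side and parsing on the other. A minimal sketch using Python's standard library; the payload and field names are invented:

```python
# What "speaking JSON" between services boils down to: serialize on one
# side, parse on the other. The payload fields here are invented.
import json

payload = {"user": "ada", "action": "subscribe"}
wire = json.dumps(payload)    # what travels in the HTTP request body
received = json.loads(wire)   # what the REST endpoint decodes
assert received == payload    # a lossless round trip
```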
Today, any application being built is going to be distributed and on the cloud. This means that deep and specific knowledge of cloud platforms is going to put a developer in good stead with potential employers.
Thirty years ago, Linus Torvalds was a 21-year-old student at the University of Helsinki when he first released the Linux kernel. His announcement started, “I’m doing a (free) operating system (just a hobby, won't be big and professional…)”. Three decades later, the top 500 supercomputers all run Linux, as do over 70% of all smartphones. Linux is clearly both big and professional.
For three decades, Linus Torvalds has led Linux Kernel development, inspiring countless other developers and open source projects. In 2005, Linus also created Git to help manage the kernel development process, and it has since become the most popular version control system, trusted by countless open source and proprietary projects. //
Regarding creating Git and then handing it off to Junio Hamano to improve and maintain, Linus noted, "I don't want to claim that programming is an art, because it really is mostly just about 'good engineering'. I'm a big believer in Thomas Edison's 'one percent inspiration and ninety-nine percent perspiration' mantra: it's almost all about the little details and the everyday grunt-work. But there is that occasional 'inspiration' part, that 'good taste' thing that is about more than just solving some problem - solving it cleanly and nicely and yes, even beautifully. And Junio had that 'good taste'." //
I very much don't regret the choice of license, because I really do think the GPLv2 is a huge part of why Linux has been successful.
Money really isn't that great of a motivator. It doesn't pull people together. Having a common project, and really feeling that you really can be a full partner in that project, that motivates people, I think. //
I write very little code these days, and haven't for a long time. And when I do write code, the most common situation is that there's some discussion about some particular problem, and I make changes and send them out as a patch mainly as an explanation of a suggested solution. //
Because all my real work is spent on reading and writing emails. It's mostly about communication, not coding. In fact, I consider this kind of communication with journalists and tech bloggers etc to literally be part of my workday - it may get lower priority than actual technical discussions, but I do spend a fair amount of time on things like this too.
malor Ars Tribunus Angusticlavius et Subscriptor
I think it's important to point out that this case covers only one kind of API, a direct programming API where you're incorporating libraries into your main program. Because of the rules of most computer languages, this means that the header files and user-written program files have to look almost exactly the same in the function definition and call location. If they don't, it won't work. If you're trying to call the C standard library printf(), and you call it as printg(), ain't nothing gonna happen.
Thus, any programmer providing a replacement printf(), if he or she wants it to work with existing code, must declare a printf() in their library's header that looks pretty much exactly like every other printf() in every other C standard library. It doesn't mean they copied it; it means that there's only one possible way to express that idea. Thus, a reimplemented library of any kind will, by necessity, have a duplicate line for each function. If it doesn't, it won't work as a drop-in replacement.
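The same constraint is easy to see outside C. A Python sketch with invented names: a drop-in replacement must reproduce the original's "declaration" (name and parameters) exactly, even though its body is written independently.

```python
# Sketch of the drop-in-replacement constraint, with invented names.
# Existing callers use render(template, *args); any reimplementation
# must expose that exact name and calling convention to keep working.

def render(template, *args):
    """'Original library': defer to %-formatting."""
    return template % args

def render_reimpl(template, *args):
    """Independent rewrite: same declaration, entirely different body."""
    out = template
    for arg in args:
        out = out.replace("%s", str(arg), 1)
    return out

# Either satisfies the caller; the shared interface line is the only
# thing the two implementations must have in common.
assert render("hello, %s!", "world") == render_reimpl("hello, %s!", "world")
```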
However, many other APIs are network-based. It seems unlikely to me that copyright would ever apply there, because each provider of an API will be able to write their own code. There might be a few lines of identical source, but for the most part, the code base will be very different. Both Apache and nginx, for instance, support the http API, but there probably isn't much duplicate code between the two projects. Either can be used as a webserver, but it will be instantly apparent that each is a unique creation.
And then compare those with a hypothetical web server written in, say, Rust… the new codebase would look absolutely nothing like its predecessors. Copyright protection would even more clearly not apply.
API is not a well-defined term, and I think the Supreme Court probably got this exactly right… this type of API infringement is not covered under copyright, because it's functional. This is the only kind of API infringement where copyright is likely to apply, so a clear decision that it does not will probably put a stake in the whole idea. Almost any other kind of API 'infringement' would obviously not be a copyright matter.
Google lays out the benefits of Rust over C/C++, saying, "Rust provides memory safety guarantees by using a combination of compile-time checks to enforce object lifetime/ownership and runtime checks to ensure that memory accesses are valid. This safety is achieved while providing equivalent performance to C and C++." In line with similar stats that Microsoft has published, Google's blog post says that "memory safety bugs continue to be a top contributor of stability issues, and consistently represent ~70% of Android’s high severity security vulnerabilities."
Become a Better UX Writer in 15 Days
Get a UX writing prompt in your inbox every weekday for 14 days—and a final full-length content challenge on day 15. Actual prompts from the largest product organizations in the world. No spam. Just practice. Free forever.
People ignore design that ignores people — Frank Chimero
Rubber ducking is more than just a funny phrase — it is, and has been for many developers, a godsend. Simple in its nature yet powerful in its execution, rubber ducking can save you a ton of time when running into challenges with your code.
Rubber ducking is short for 'rubber duck debugging', a simple method of debugging code. The idea comes from a story in The Pragmatic Programmer, in which a programmer carried a rubber duck around and explained their code to it, line by line. An odd origin, but a brilliant concept!
The method really is very simple: explain your code to someone, to an inanimate object, or to the sky (as long as you get those words out of your head!). The point is to break down what you are trying to accomplish with the code that got you stuck in the first place. By doing so, you are forced out of your own head and made to actually explain what is going on in your code. Very often, by rubber ducking a problem, a developer arrives at a solution without having to do any Googling whatsoever.
Side note, while rubber ducking an inanimate object works tremendously well, why not take it up a notch and rubber duck with a co-worker or a friend? If you think explaining something to that cute orchid plant on your desk can work wonders, imagine how great it feels to rubber duck with someone else who probably has some awesome advice or insight into your problem too! You never know who has had a coding dilemma practically identical to your own, and has exactly the advice you need to resolve that challenge.
On top of that, even rubber ducking with someone who has no experience with coding whatsoever can be a fantastic asset as well. By explaining your code to someone with no actual understanding of code, you are being forced to break things down into really simplistic terms, in a way that a beginner would understand. This helps to remove all the junk from your explanation, and hopefully demystify things altogether!
Most people would think that if we've built, for example, a spaceship or a complex airplane in the past, we could build it again at any time. But no: if we haven't been building a particular plane continuously, then after just 50 years it is already easier to develop a new one from scratch than to revive the old processes and documentation. Knowledge does not automatically transfer to the next generation.
In programming, we are developing abstractions at an alarming rate. When enough of those are stacked, it becomes impossible to figure out or control what's going on down the stack. This is where my contribution begins: I believe I have found some pretty vivid examples of how the ladder of abstractions has started to fall, and nobody can do anything about it now because we are all used to working only at the very tip of it.
Enter macOS Catalina. Every year Apple releases a new operating system, and every year it needs a flagship feature to promote it. This year it was a long-overdue standalone Music app. Well, what could be simpler, right? A list of files, categories, filters, smart lists. All of that has been around in iTunes since at least 2001. But even if it hadn't been, how hard is it to build a decent music player? Many companies orders of magnitude smaller than Apple have done it successfully in the past.
And yet, it didn’t go smoothly. Guys at Annoying.Technology have some great examples. //
As Philipp correctly mentioned,
It’s not some odd, third-party utility that somehow looks a bit funky on an obscure version of macOS. It’s the flagship rewrite of the new Music.app shipping with Catalina. //
Yes, these particular bugs are pretty minor and probably do not affect business in the short run, only Apple's reputation. Still, it is a big deal. Imagine how tall, opaque and unstable that ladder of abstractions must be for it to even be possible to fail at something as simple as selecting an item in a list. It is a freaking list: if you click it, it should select the thing you just clicked. How hard a task do you think that is? Why has it worked flawlessly since the first iPod, with its monochrome screen and a quarter of the computing power of a modern watch, yet can't be done right in the flagship product of the most advanced operating system in the world?
Because advanced means complex. So complex that no one could reasonably understand it or have control over it, even if they wanted to. Apple DID want it. But even they couldn't. Even with all the resources in the world. //
I don’t have numbers, but I’ve heard the Gmail rewrite also made it much slower, with no apparent new functions. The difference is still pretty drastic if you put Gmail next to Fastmail, or Twitter next to TweetDeck, neither of which has had a full rewrite in the last decade; it shows how fast even a web UI could be if we weren’t constantly climbing up the abstraction ladder.
Docker and Electron are the most hyped new technologies of the last five years. Neither is about improving things, understanding complexity, or reducing it. Both are just compromised attempts to hide accumulated complexity from developers, because it has become impossible to deal with.
Democracy Live, which appears to have no privacy policy, receives sensitive personally identifiable information -- including the voter's identity, ballot selections, and browser fingerprint -- that could be used to target political ads or disinformation campaigns. Even when OmniBallot is used to mark ballots that will be printed and returned in the mail, the software sends the voter's identity and ballot choices to Democracy Live, an unnecessary security risk that jeopardizes the secret ballot. We recommend changes to make the platform safer for ballot delivery and marking. However, we conclude that using OmniBallot for electronic ballot return represents a severe risk to election security and could allow attackers to alter election results without detection.
There are lots of very smart people doing fascinating work on cryptographic voting protocols. We should be funding and encouraging them, and doing all our elections with paper ballots until everyone currently working in that field has retired.
"A wicked fast source browser"
OpenGrok
OpenGrok is a fast and usable source code search and cross reference engine. It helps you search, cross-reference and navigate your source tree. It understands various program file formats and history from many Source Code Management systems. In other words it lets you grok (profoundly understand) source code and is developed in the open, hence the name OpenGrok. It is written in Java.
Requirements:
- Latest Java 1.8
- A servlet container like GlassFish or Tomcat (8.x or later) also running with Java at least 1.8
- Universal ctags
Given a version number MAJOR.MINOR.PATCH, increment the:
- MAJOR version when you make incompatible API changes,
- MINOR version when you add functionality in a backwards compatible manner, and
- PATCH version when you make backwards compatible bug fixes.
Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format. //
For this system to work, you first need to declare a public API. This may consist of documentation or be enforced by the code itself. Regardless, it is important that this API be clear and precise. Once you identify your public API, you communicate changes to it with specific increments to your version number. Consider a version format of X.Y.Z (Major.Minor.Patch). Bug fixes not affecting the API increment the patch version, backwards compatible API additions/changes increment the minor version, and backwards incompatible API changes increment the major version.
I call this system “Semantic Versioning.” Under this scheme, version numbers and the way they change convey meaning about the underlying code and what has been modified from one version to the next. //
This is not a new or revolutionary idea. In fact, you probably do something close to this already. The problem is that “close” isn’t good enough. Without compliance to some sort of formal specification, version numbers are essentially useless for dependency management. By giving a name and clear definition to the above ideas, it becomes easy to communicate your intentions to the users of your software. Once these intentions are clear, flexible (but not too flexible) dependency specifications can finally be made.
A simple example will demonstrate how Semantic Versioning can make dependency hell a thing of the past. Consider a library called “Firetruck.” It requires a Semantically Versioned package named “Ladder.” At the time that Firetruck is created, Ladder is at version 3.1.0. Since Firetruck uses some functionality that was first introduced in 3.1.0, you can safely specify the Ladder dependency as greater than or equal to 3.1.0 but less than 4.0.0. Now, when Ladder version 3.1.1 and 3.2.0 become available, you can release them to your package management system and know that they will be compatible with existing dependent software.
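The Firetruck/Ladder constraint above is mechanical to check. A minimal sketch in Python, deliberately simplified (no pre-release tags or build metadata):

```python
# Checking the Ladder constraint ">=3.1.0, <4.0.0" from the example.
# Deliberately simplified: no pre-release tags or build metadata.

def parse(version):
    """Turn "3.1.0" into the comparable tuple (3, 1, 0)."""
    return tuple(int(part) for part in version.split("."))

def ladder_compatible(version):
    """True if `version` satisfies >=3.1.0 and <4.0.0."""
    return (3, 1, 0) <= parse(version) < (4, 0, 0)
```

Under this check, 3.1.1 and 3.2.0 pass while 4.0.0 fails: the major bump is exactly the signal that Firetruck might break.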
This is the second in a series of posts featuring protips from GitHubbers. This post highlights some great browser extensions, URL hacks, keyboard shortcuts, plus a few dad jokes.
I’ve been using GitHub as an engineering, business development, and marketing manager for over 10 years now—quite the career trajectory! Along the way, I’ve picked up a few tricks to manage my notifications, quickly locate content and create pull requests, push markdown to its limits, and personalize my experience with some third party extensions. I’m sharing a few of my favorites, and hopefully there’s something new and helpful for you.
The really, really short answer is that you should not. The somewhat longer answer is that just because you are capable of building a bikeshed does not mean you should stop others from building one just because you do not like the color they plan to paint it. This is a metaphor indicating that you need not argue about every little feature just because you know enough to do so. Some people have commented that the amount of noise generated by a change is inversely proportional to the complexity of the change.
From: Poul-Henning Kamp phk@freebsd.org
My last pamphlet was sufficiently well received that I was not scared away from sending another one, and today I have the time and inclination to do so. //
The sleep(1) saga is the most blatant example of a bike shed discussion we have ever had in FreeBSD. The proposal was well thought out, we would gain compatibility with OpenBSD and NetBSD, and still be fully compatible with any code anyone ever wrote.
Yet so many objections, proposals and changes were raised and launched that one would think the change plugged all the holes in Swiss cheese or changed the taste of Coca-Cola or something similarly serious.
"What is it about this bike shed?" Some of you have asked me.
It's a long story, or rather it's an old story, but it is quite short actually. C. Northcote Parkinson wrote a book in the early 1960s called "Parkinson's Law", which contains a lot of insight into the dynamics of management.
You can find it on Amazon, and maybe also on your dad's bookshelf. It is well worth its price and the time to read it either way; if you like Dilbert, you'll like Parkinson.
Somebody recently told me that he had read it and found that only about 50% of it applied these days. That is pretty darn good, I would say; many modern management books have hit rates a lot lower than that, and this one is 35+ years old.
In the specific example involving the bike shed, the other vital component is an atomic power plant. I guess that illustrates the age of the book.
Parkinson shows how you can go to the board of directors and get approval for building a multi-million or even billion-dollar atomic power plant, but if you want to build a bike shed you will be tangled up in endless discussions.
Parkinson explains that this is because an atomic plant is so vast, so expensive and so complicated that people cannot grasp it, and rather than try, they fall back on the assumption that somebody else checked all the details before it got this far. Richard P. Feynman gives a couple of interesting, and very much to the point, examples relating to Los Alamos in his books.
A bike shed, on the other hand? Anyone can build one of those over a weekend and still have time to watch the game on TV. So no matter how well prepared, no matter how reasonable you are with your proposal, somebody will seize the chance to show that he is doing his job, that he is paying attention, that he is here.
In Denmark we call it "setting your fingerprint". It is about personal pride and prestige, it is about being able to point somewhere and say "There! I did that." It is a strong trait in politicians, but present in most people given the chance. Just think about footsteps in wet cement.
Grace Hopper was a phenomenon. She earned a doctorate in mathematics from Yale, was a professor at Vassar, and left the U.S. Navy with the rank of rear admiral. Her contributions to the field of computing can be judged by the number of foundations and programs that have been created in her memory. //
Driven to create a programming language closer to English than the machine-code computers understand, Hopper developed the first compiler. This opened the door for the first compiled languages, such as FLOW-MATIC. This earned her a seat on the Conference/Committee on Data Systems Languages (CODASYL) of 1959.
She was also instrumental in the specification and development of the Common Business-Oriented Language (COBOL). The first meeting took place on June 23, 1959, and its report and specification of the COBOL language followed in April 1960.
COBOL contained some groundbreaking concepts. Arguably, the most significant of these was the ability to run on hardware produced by different manufacturers, which was unprecedented at the time.
The language was elaborate and provided a near-English vocabulary for programmers to work with. It was designed to handle huge volumes of data and to be exceptionally mathematically accurate.
Its vocabulary of reserved words (the words that make up the language) runs close to 400. A programmer strings these reserved words together so they make syntactical sense and create a program.
Any programmer who’s familiar with other languages will tell you 400 is an incredible number of reserved words. For comparison, the C language has 32, and Python has 33. //
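You can ask Python for its own list. The exact count has drifted between versions (low-to-mid thirties in Python 3), so treat the number as approximate; the point is the order of magnitude next to COBOL's ~400:

```python
# Python's reserved words, straight from the interpreter. The count
# varies slightly by version, but stays a small fraction of COBOL's ~400.
import keyword

print(len(keyword.kwlist))  # a few dozen
assert "if" in keyword.kwlist and "while" in keyword.kwlist
```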
As clunky as it might seem today, COBOL was revolutionary when it launched. It found favor within the financial sector, federal government, and major corporations and organizations. This was due to its scalability, batch handling capabilities, and mathematical precision. It was installed in mainframes all over the world, took root, and flourished. Like a stubborn weed, it just won’t die.
Our dependency on systems that still run on COBOL is astonishing. A report from Reuters in 2017 shared the following jaw-dropping statistics:
- There are 220 billion lines of COBOL code still in use today.
- COBOL is the foundation of 43 percent of all banking systems.
- Systems powered by COBOL handle $3 trillion of daily commerce.
- COBOL handles 95 percent of all ATM card-swipes.
- COBOL makes 80 percent of all in-person credit card transactions possible. //
The programmers who know COBOL are either retired, thinking about retiring, or dead. We’re steadily losing the people who have the skills to keep these vital systems up and running. New, younger programmers don’t know COBOL. Most also don’t want to work on systems for which you have to maintain ancient code or write new code.
This is such a problem that Bill Hinshaw, a COBOL veteran, was coaxed out of retirement to found COBOL Cowboys. This private consulting firm caters to desperate corporate clients who can't find COBOL-savvy coders anywhere. The “youngsters” at COBOL Cowboys (whose motto is “Not Our First Rodeo”) are in their 50s. They believe 90 percent of Fortune 500 business systems run on COBOL. //
This is a widespread and deeply embedded problem. A 2016 report from the Government Accountability Office listed COBOL systems running on mainframes up to 53 years old. These include systems used to process data related to the Department of Veterans Affairs, the Department of Justice, and the Social Security Administration. //
IDENTIFICATION DIVISION.
PROGRAM-ID. Hello-World.
DATA DIVISION.
FILE SECTION.
WORKING-STORAGE SECTION.
PROCEDURE DIVISION.
MAIN-PROCEDURE.
    DISPLAY "Hello world, from How-To Geek!"
    STOP RUN.
END PROGRAM Hello-World.
Although Git is a very powerful tool, I think most people would agree when I say it can also be... a total nightmare 😐 I've always found it very useful to visualize in my head what's happening when working with Git: how are the branches interacting when I perform a certain command, and how will it affect the history? Why did my coworker cry when I did a hard reset on master, force pushed to origin and rimraf'd the .git folder?