ANY LIST OF THE MOST FAMOUS FINNS in the world today would be incomplete without the inclusion of Linus Torvalds, the software developer who created Linux, the world’s largest open-source operating system. //
Although Torvalds resides in the heart of Silicon Valley today and has a net worth of around $150 million, his origins are somewhat humbler, although by no means working class. He was born on 28 December 1969 to Finland-Swedish parents, one of whom is the journalist and MP Nils Torvalds. His grandfathers were Leo Törnqvist, another famous Finland Swede and Finland's first professor of statistics, and Ole Torvalds, the celebrated poet and journalist. It was Törnqvist, his maternal grandfather, who first sparked Torvalds' interest in computing when he asked him to help program his brand-new Commodore VIC-20 computer in 1981. //
In 1997, Torvalds left Helsinki and decamped to California, where he continues to work with the Open Source Development Labs, a consortium of big tech players such as Intel, Siemens, and IBM that advocates for Linux development. His wife of almost 30 years is the six-time Finnish national karate champion Tove Torvalds, and both have been US citizens since 2010. However, Torvalds still returns to his homeland to give talks, especially at his alma mater, the University of Helsinki.
“I’m very proud of the fact that there’s actually a fair number of people still involved with the kernel that came in in 1991 — I mean, literally 30 years ago.” (Hohndel is one of them.)
The longevity of the Linux community is especially impressive in light of how few contributors there were back in 1991, Torvalds said: “I think that’s a testament to how good the community, on the whole, has been, and how much fun it’s been.”
Yes, fun — and Torvalds still considers that one of the building blocks of the Linux community; “just for fun,” he said, is part of what he still strives for.
It comes up when people talk about the possibility of writing some Linux kernel modules using Rust. “From a technical angle, does that make sense?” Torvalds asked. “Who knows. That’s not the point. The point is for a project to stay interesting — and to stay fun — you have to play with it.” //
The keynote conversation closed with Hohndel asking what they should do for the 50th anniversary of Linux, in the year 2041, when both of them will be in their 70s.
Characteristically, Torvalds answered that, just as with the Linux kernel, he doesn't make plans more than six months out. But the question did draw some reflection. "I've been very happy doing the kernel for 30 years," Torvalds began thoughtfully.
“Somehow I don’t see myself doing kernel programming when I’m 70. But on the other hand, I didn’t see myself doing kernel programming when I was 50 either, a few years back. So… we’ll see.”
Approach 1: Using a cronjob to manually copy the certificate
- Make sure syncthing has the https-key.pem and https-cert.pem files present in its home directory; my commands assume the directory is /home/syncthing/.config/syncthing (that's my setup). Make sure the permissions are correct, meaning the files are owned by the user running syncthing. The easiest way to achieve this is by deleting the current files while syncthing is stopped.
- Upon the next start, syncthing will re-generate the https-key.pem and https-cert.pem files with the correct permissions (files are owned by the user running syncthing). Now you only need to overwrite the files - overwriting existing files does not change their permissions.
- Open a shell/terminal on the machine, preferably as root or any other user that definitely has access to all certificates inside /etc/letsencrypt. You can get root either by typing su or by prefixing the following commands with sudo.
- Type crontab -e to edit the crontab of the current user
The file will be opened with some text editor, like nano. In the file, below the comments, you can add the following lines:
@daily cp /etc/letsencrypt/live/[domain]/privkey.pem /home/syncthing/.config/syncthing/https-key.pem
@daily cp /etc/letsencrypt/live/[domain]/fullchain.pem /home/syncthing/.config/syncthing/https-cert.pem
- These lines copy the certificates from the Let's Encrypt directory to the syncthing directory once a day, overwriting the existing files without modifying their permissions. The solution is simple, but definitely not the best. The @daily shortcut should be supported by pretty much every standard cron implementation.
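If your certificates come from certbot, a deploy hook is a cleaner variant of the same idea: it runs only when a certificate is actually renewed, instead of blindly copying every day. A sketch, assuming certbot and the setup above (the domain and ownership are placeholders):

#!/bin/sh
# Sketch: save as /etc/letsencrypt/renewal-hooks/deploy/syncthing-certs.sh
# and make it executable. certbot runs everything in that directory after
# each successful renewal. "example.com" is a placeholder for your domain.
cp /etc/letsencrypt/live/example.com/privkey.pem /home/syncthing/.config/syncthing/https-key.pem
cp /etc/letsencrypt/live/example.com/fullchain.pem /home/syncthing/.config/syncthing/https-cert.pem
chown syncthing:syncthing /home/syncthing/.config/syncthing/https-key.pem /home/syncthing/.config/syncthing/https-cert.pem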
As you can see in the chart above, btrfs-raid1 differed pretty drastically from its conventional analogue. To understand how, let's think about a hypothetical collection of "mutt" drives of mismatched sizes. If we have one 8T disk, three 4T disks, and a 2T disk, it's difficult to make a useful conventional RAID array from them—for example, a RAID5 or RAID6 would need to treat them all as 2T disks (producing only 10T of raw storage, and just 8T usable after RAID5's single parity).
However, btrfs-raid1 offers a very interesting premise. Since it doesn't actually marry disks together in pairs, it can use the entire collection of disks without waste. Any time a block is written to the btrfs-raid1, it's written identically to two separate disks—any two separate disks. Since there are no fixed pairings, btrfs-raid1 is free to simply fill all the disks at the same rough rate proportional to their free capacity. With the mutt collection above (8T + 4T + 4T + 4T + 2T = 22T raw), that works out to roughly 11T of usable mirrored storage, since no single disk is larger than all the others put together. //
As any storage administrator worth their salt will tell you, RAID is primarily about uptime. Although it may keep your data safe, that's not its real job—the job of RAID is to minimize the number of instances in which you have to take the system down for extended periods of time to restore from proper backup.
Once you understand that fact, the way btrfs-raid handles hardware failure looks downright nuts. What happens if we yank a disk from our btrfs-raid1 array above? //
Btrfs' refusal to mount degraded, automatic mounting of stale disks, and lack of automatic stale disk repair/recovery do not add up to a sane way to manage a "redundant" storage system. //
Believe it or not, we've still only scratched the surface of btrfs problems. Similar problems and papercuts lurk in the way it manages snapshots, replication, compression, and more. Once we get through that, there's performance to talk about—which in many cases can be orders of magnitude slower than either ZFS or mdraid in reasonable, common real-world conditions and configurations.
If you use the ls -li command (the -i option shows the inode number), you'll see that its link count is 2. The link count appears right after the file permission field.
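For example, a quick hypothetical session (the inode number will differ on your system):

touch test_file
ln test_file test_link
ls -li test_file test_link
1234567 -rw-r--r-- 2 user user 0 Jan 1 10:00 test_file
1234567 -rw-r--r-- 2 user user 0 Jan 1 10:00 test_link

Both names share the same inode number, and the 2 after the permission field is the link count.
//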
You should not create a hard link to a directory
You can create a soft link to a directory, but when you try to create a hard link to a directory, you'll see an error like this:
ln: newdir/test_dir: hard link not allowed for directory
Why are hard links not allowed for directories? It's because using hard links for directories may break the filesystem. Theoretically, you can create hard links to directories using the -d or -F option, but most Linux distributions won't allow that even if you are the root user.
https://askubuntu.com/questions/210741/why-are-hard-links-not-allowed-for-directories
//
Bonus Tip: How to find all hard links to a given file
If you see that a file has more than one link count, you may get curious about the other hard links associated with it.
One way to find that is using the inode number of the file. You can use the ls -i command or the stat command to get the inode number.
Once you have the inode number, you can see all the links associated with it using the find command.
find . -inum inode_number
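Putting the two steps together, assuming a hypothetical file named test_file (on GNU systems, stat -c '%i' prints just the inode number):

stat -c '%i' test_file
find . -inum 1234567

find will then list every path in the current tree that shares that inode, i.e. every hard link to the file.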
Linus Torvalds will pull Paragon Software's NTFS driver into the 5.15 kernel source – but he complained about the use of a GitHub merge in the submission, saying that GitHub "creates absolutely useless garbage merges." //
First, he said the pull request should have been signed. "In a perfect world, it would be a PGP signature that I can trace directly to you through the chain of trust, but I've never actually required that," he said.
Second, he noted that the code in the pull request included merge commits done with the GitHub web user interface. "That's another of those things that I really don't want to see – github creates absolutely useless garbage merges, and you should never ever use the github interfaces to merge anything," he said.
He added: "[G]ithub is a perfectly fine hosting site, and it does a number of other things well too, but merges is not one of those things."
Torvalds has complained about aspects of GitHub before, saying in 2012: "I don't do github pull requests. github throws away all the relevant information, like having even a valid email address for the person asking me to pull. The diffstat is also deficient and useless."
Note that the git request-pull command is different from the GitHub pull request feature. The forthright thread that ensued has more information on the subject.
A common system task is backing up files – that is, copying files with the ability to go back in time and restore them. For example, if someone erases or overwrites a file but needs the original version, then a backup allows you to go back to a previous version of the file and restore it. In a similar case, if someone is editing code and discovers they need to go back to a version of the program from four days earlier, a backup allows you to do so. The important thing to remember is that backups are all about copies of the data at a certain point in time.
In contrast to backing up is “replication.” A replica is simply a copy of the data as it existed when the replication took place. Replication by itself does not allow you to go back in time to retrieve an earlier version of a file. However, if you have a number of replicas of your data created over time, you can sort of go back and retrieve an earlier version of a file; you just need to know when the replica was made, and then you can copy the file from that replica.
The US Bankruptcy Court for the District of Delaware, which has been overseeing the slow and painful bankruptcy of the remains of SCO, announced that the TSG Group, which represents SCO's debtors, has settled with IBM and resolved all the remaining claims between TSG and IBM: "Under the Settlement Agreement, the Parties have agreed to resolve all disputes between them for a payment to the Trustee [TLD], on behalf of the Estates [IBM], of $14,250,000."
In return, TLD gives up all rights and interests in all litigation claims pending or that may be asserted in the future against IBM and Red Hat, and any allegations that Linux violates SCO's Unix or Unixware intellectual property.
Why is TLD, the former SCO, finally agreeing to let this drop? Because, as some of us knew 18 years ago, they never had a case. Or, as TLD's legal representative, Blank Rome bankruptcy attorney Stanley B. Tarr, put it in a motion, "succeeding on the unfair competition claims will require proving to a jury that events occurring many years ago constituted unfair competition and caused SCO harm. Even if SCO were to succeed in that effort, the amount of damages it would recover is uncertain and could be significantly less than provided by the Settlement Agreement."
You think?
It's been 30 years since Finnish graduate student Linus Torvalds drafted a brief note saying he was starting a hobby operating system. The world would never be the same.
Q:
I've written a script to wipe all disks on any machine that netboots. It works fairly well, but I'd like to add verification of the final "zeroing" pass, and only shut down the machines if the drive reads all zeros successfully, otherwise showing the error. I'm also trying to both minimize dependencies on ports if possible (currently only using pv(1) to show status), and make the final "read" pass perform quickly (the write passes are already fast).
However, I can't figure out what the best way to do this is. It seems the easiest way is to use another port, security/bcwipe, and run it with bcwipe -bfmz /dev/adaX, which both zeros the disks and verifies the write, but this requires another port. I've looked at other, simpler ways of doing this with basic system utilities, like od(1), but then the disks read slowly (about a third of the speed dd(1) achieves with bs=1m). I suspect this is because you can't specify a buffer size for od, so it's reading small chunks. //
dd if=/dev/zero of=/dev/adaX bs=1m
A:
In this particular case I would write a tiny C program for this purpose. The C compiler is in the base system, so you don't need to install additional ports.
Code:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>

/* Size of each read(2) request; see the note below about tuning it. */
#define BLOCKSIZE (128 * 1024)

/* Global buffer, rounded up to whole 64-bit words. As a global it is
 * zero-initialized, so a short final read cannot trip over stale bytes. */
uint64_t data[(BLOCKSIZE + 7) / 8];

int
main (void)
{
    int inbytes, i;

    /* Read standard input in BLOCKSIZE chunks and scan each chunk one
     * 64-bit word at a time; exit 1 on the first non-zero word. */
    while ((inbytes = read(STDIN_FILENO, data, BLOCKSIZE)) > 0)
        for (i = 0; i < (inbytes + 7) / 8; i++)
            if (data[i])
                exit (1);

    /* read(2) returned -1: report the error and exit with status 2. */
    if (inbytes < 0) {
        perror (NULL);
        exit (2);
    }
    return 0;
}
Save that source code as “testzero.c”.
Then compile it like this: cc -O2 -o testzero testzero.c
That'll give you a binary named “testzero”. Put it somewhere in your $PATH so your shell can find it, or type “./testzero” to run it from the current directory. The program reads a file (or device) from standard input and exits with code 0 if the file is all zeros, so you can use it in a shell script like this:
Code:
if testzero < /dev/ada0; then
echo "allright, disk is zero"
shutdown -p now
else
echo "something went wrong!"
yes | tr -c x '\a' # call attention
fi
Note that I have set the block size to 128 KB, not 1 MB. In my (limited) testing it was faster with 128 KB (actually, as fast as the speed of the physical disk). This may depend on your kernel settings, CPU cache or file system parameters, though. YMMV, so you might want to test different values. If you change it in the source code, don't forget to recompile the program. //
For simplicity, the program reads the file or device from standard input, so there is no reason to parse the argument vector; that's why I specified it as “void” instead of the usual “int argc, char *argv[]”. Besides, the compiler issues a warning if you specify argc and argv without actually using them. Note that you could omit the formal parameter completely (i.e. “()” instead of “(void)”), but I think it's a good habit to specify “void” for documentation purposes, so the author's intention is clear. For similar reasons the return type is specified as “int” – actually that wouldn't be strictly necessary because int is the default return type (not void!), at least in C89; newer standards require the explicit int. In other words: You can just write “main ()” instead of “int main (void)” if you want – both mean exactly the same thing, but I adhere to the Python motto “explicit is better than implicit”.
Just in case someone wonders: There is no difference (performance-wise) between using standard input vs. opening a file specified on the command line. Just make sure that the shell opens the file directly and passes it to the program (using redirection syntax with “<”). Do not use something like “cat /dev/foo | testzero” because this will create an additional process for the cat command and connect it with a pipe to the testzero command – the pipe may reduce the performance considerably. (NB: The cat command is often abused; I guess that 99% of uses of cat in shell scripts are superfluous, sometimes even harmful.)
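For example, with an illustrative device name:

testzero < /dev/ada0        # good: the shell opens the device and hands it straight to testzero
cat /dev/ada0 | testzero    # avoid: an extra cat process and a pipe sit in the data path
//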
I made a few modifications to his code to report where the error lies if it finds one, but otherwise this does appear to read at the maximum disk speed. Thanks!
Edit: If anyone is interested here's the code after the changes I made. It's also a bit more verbose.
C:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <inttypes.h>
#include <unistd.h>

#define BLOCKSIZE (128 * 1024)

uint64_t data[(BLOCKSIZE + 7) / 8];

int main(void) {
    int inbytes;
    uint64_t intotal = 0;    /* 64-bit words scanned in the blocks so far */

    while ((inbytes = read(STDIN_FILENO, data, BLOCKSIZE)) > 0) {
        inbytes = (inbytes + 7) >> 3;    /* bytes read -> 64-bit words */
        for (int i = 0; i < inbytes; i++) {
            if (data[i]) {
                intotal = (intotal + i) << 3;    /* convert back to a byte offset */
                printf("Non-zero byte detected at offset range: %" PRIu64 " to %" PRIu64 "\n",
                       intotal, intotal + 7);
                exit(1);
            }
        }
        intotal += inbytes;
    }
    if (inbytes < 0) {
        perror(NULL);
        exit(2);
    }
    printf("Disk is fully zeroed.\n");
    return 0;
}
smartctl - Control and Monitor Utility for SMART Disks
SYNOPSIS
smartctl [options] device
DESCRIPTION
smartctl controls the Self-Monitoring, Analysis and Reporting Technology (SMART) system built into most ATA/SATA and SCSI/SAS hard drives and solid-state drives. The purpose of SMART is to monitor the reliability of the hard drive and predict drive failures, and to carry out different types of drive self-tests. smartctl also supports some features not related to SMART. This version of smartctl is compatible with ACS-3, ACS-2, ATA8-ACS, ATA/ATAPI-7 and earlier standards (see REFERENCES below).
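A few common invocations, for example (the device name is illustrative):

smartctl -i /dev/sda    # identity information and whether SMART is supported/enabled
smartctl -H /dev/sda    # the drive's overall health self-assessment
smartctl -t short /dev/sda    # start a short self-test
smartctl -a /dev/sda    # print all SMART information for the device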
In the end, you want something like this:
for i in {1..100}; do cp test.ogg "test$i.ogg"; done
Or, as an alternative:
i=0
while (( i++ < 100 )); do
    cp test.ogg "test$i.ogg"
done
openssl rand -out sample.txt -base64 805306368
Alternatively, you could use /dev/urandom, but it would be a little slower than OpenSSL:
dd if=/dev/urandom of=sample.txt bs=1G count=1
Personally, I would use bs=64M count=16 or similar:
dd if=/dev/urandom of=sample.txt bs=64M count=16
//
Since your goal is to create a 1GB file, you could also use the yes command instead of dd (note that the file will contain a repeated string rather than random content):
yes [text or string] | head -c [size of file] > [name of file]
Sample usage:
yes 'this is test file' | head -c 100KB > test.file
TestDisk is OpenSource software and is licensed under the terms of the GNU General Public License (GPL v2+).
TestDisk is powerful free data recovery software! It was primarily designed to help recover lost partitions and/or make non-booting disks bootable again when these symptoms are caused by faulty software, certain types of viruses, or human error (such as accidentally deleting a partition table). Partition table recovery using TestDisk is really easy.
This document explains how to create, under Linux, a LiveCD running FreeDOS that automatically starts TestDisk.
- Download the FreeDOS OEM CD-ROM disc builder assistant
wget -N http://www.fdos.org/bootdisks/ISO/FDOEMCD.builder.zip
- Download the DOS version of TestDisk & PhotoRec
- Clean up the work directory if it already exists
rm -rf FDOEMCD
- Uncompress the archive
unzip FDOEMCD.builder.zip
- Uncompress the latest version of TestDisk & PhotoRec
cd FDOEMCD/CDROOT
unzip ../../testdisk-6.12.dos.zip
mv testdisk-6.12 testdisk
- Create an autorun script
echo "@ECHO OFF" > AUTORUN.BAT
echo "CLS" >> AUTORUN.BAT
echo "CD TESTDISK" >> AUTORUN.BAT
echo "TESTDISK.EXE" >> AUTORUN.BAT
- This script must use DOS newlines, not Unix ones
unix2dos AUTORUN.BAT
- Create the iso image using mkisofs
cd ..
mkisofs -o testdisk.iso -p "Christophe Grenier" -publisher "www.cgsecurity.org" -V "TestDisk CD" \
-b isolinux/isolinux.bin -no-emul-boot -boot-load-size 4 -boot-info-table -N -J -r \
-c boot.catalog -hide boot.catalog -hide-joliet boot.catalog CDROOT
- Boot from this iso image in the qemu emulator
qemu -localtime -boot d -cdrom testdisk.iso -hda disk.dd
- If everything is OK, burn the iso
This is how many modern file system backup programs work. On day 1 you make an rsync copy of your entire file system:
backup@backup_server> DAY1=`date +%Y%m%d%H%M%S`
backup@backup_server> rsync -av -e ssh earl@192.168.1.20:/home/earl/ /var/backups/$DAY1/
On day 2 you make a hard link copy of the backup, then a fresh rsync:
backup@backup_server> DAY2=`date +%Y%m%d%H%M%S`
backup@backup_server> cp -al /var/backups/$DAY1 /var/backups/$DAY2
backup@backup_server> rsync -av -e ssh --delete earl@192.168.1.20:/home/earl/ /var/backups/$DAY2/
“cp -al” makes a hard link copy of the entire /home/earl/ directory structure from the previous day, then rsync runs against the copy of the tree. If a file remains unchanged then rsync does nothing — the file remains a hard link. However, if the file’s contents changed, then rsync will create a new copy of the file in the target directory. If a file was deleted from /home/earl then rsync deletes the hard link from that day’s copy.
In this way, the $DAY1 directory has a snapshot of the /home/earl tree as it existed on day 1, and the $DAY2 directory has a snapshot of the /home/earl tree as it existed on day 2, but only the files that changed take up additional disk space. If you need to find a file as it existed at some point in time you can look at that day’s tree. If you need to restore yesterday’s backup you can rsync the tree from yesterday, but you don’t have to store a copy of all of the data from each day, you only use additional disk space for files that changed or were added.
I use this technique to keep 90 daily backups of a 500GB file system on a 1TB drive.
One caveat: The hard links do use up inodes. If you’re using a file system such as ext3, which has a set number of inodes, you should allocate extra inodes on the backup volume when you create it. If you’re using a file system that can dynamically add inodes, such as ext4, zfs or btrfs, then you don’t need to worry about this.
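Recent versions of rsync can also do the hard-linking themselves via the --link-dest option, which compares the transfer against the previous day's tree and hard-links unchanged files for you, replacing the separate "cp -al" step. A sketch using the same hypothetical paths as above:

backup@backup_server> DAY3=`date +%Y%m%d%H%M%S`
backup@backup_server> rsync -av -e ssh --delete --link-dest=/var/backups/$DAY2 earl@192.168.1.20:/home/earl/ /var/backups/$DAY3/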
Hardware failure and a careless user feeling adventurous with powerful utilities such as dd and fdisk can lead to data loss in Linux. Not only that, sometimes spring cleaning a partition or directory can also lead to accidentally deleting some useful files. Should that happen, there’s no reason to despair. With the PhotoRec utility, you can easily recover a variety of files, be it documents, images, music, archives and so on.
Developed by CGSecurity and released under the GPL, PhotoRec is distributed as a companion utility of Testdisk, which can be used to recover and restore partitions. You can use either of these tools to recover files, but each has a job that it’s best suited for. Testdisk is best suited for recovering lost partitions. //
Although initially designed to only recover image files (hence the name), PhotoRec can be used to recover just about any manner of file.
Even better, PhotoRec works by ignoring the underlying filesystem on the specified partition, disk or USB drive. Instead, it focuses on the unique signatures left by the different file types to identify them. This is why PhotoRec can work with FAT, NTFS, ext3, ext4 and other filesystems. //
The greatest drawback of PhotoRec – if any tool that can seemingly pull deleted files out of the digital ether can have a drawback – is that it doesn’t retain the original filenames. This means that recovered files all sport a gibberish alpha-numeric name. If this is a deal-breaker for you, consider using Testdisk first to recover your lost files.
To install TestDisk, open a terminal window and update the software repositories before installing the testdisk package.
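On a Debian- or Ubuntu-based system, for example, that looks like this (most other distributions ship a testdisk package under the same name):

sudo apt update
sudo apt install testdisk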
768 MB Ryzen VPS
1x AMD Ryzen 3900X Core
12 GB NVMe SSD
768 MB DDR4 RAM
2000GB Monthly Bandwidth
1Gbps Public Network Port
Full Root Admin Access
1 Dedicated IPv4 Address
KVM / SolusVM
Available in Multiple Locations
$15.00/YEAR!
[ORDER HERE]
https://my.racknerd.com/cart.php?a=add&pid=499
1 GB Ryzen VPS
1x AMD Ryzen 3900X Core
15 GB NVMe SSD
1 GB DDR4 RAM
3000GB Monthly Bandwidth
1Gbps Public Network Port
Full Root Admin Access
1 Dedicated IPv4 Address
KVM / SolusVM
Available in Multiple Locations
$21.49/YEAR!
[ORDER HERE]
It is the late 1990s and the computer server world is dominated by enterprise UNIX operating systems – all competing with each other. Windows 2000 is not out yet and Windows NT 4 is essentially a toy that lesser mortals run on their Intel PCs, which they laughingly call 'servers'. Your company has a commercial UNIX and it's called Solaris. Your UNIX is very popular and is a leading platform. Your UNIX, however, has some major deficiencies when it comes to storage.
IRIX – a competing proprietary UNIX – has the fantastic XFS file system, which vastly outperforms your own file system, UFS ("Unix File System" – originally developed in the early 1980s), which doesn't even get journalling until Solaris 7 at least (in November 1998). IRIX had XFS baked into it from 1994. IRIX also had a great volume manager, whereas Solaris' 'SVM' was generally regarded as terrible and was an add-on product that didn't appear as part of Solaris itself until Solaris 8 in 2000. //
ZFS – and sadly btrfs – are both rooted in a 1990s monolithic model of servers and storage. btrfs hasn't caught on in Linux for a variety of reasons, but most of all it's because it simply isn't needed: XFS runs rings around both in terms of performance and scales to massive volume sizes. LVM supports XFS by adding COW snapshots and clones, and even clustering if you so wish. I believe the interesting direction in file systems is actually things like Gluster and Ceph – file systems designed with the future in mind, rather than for a server model we're not running any more.
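For instance, an LVM copy-on-write snapshot of an XFS volume might look like this (hypothetical volume group and volume names; XFS needs the nouuid mount option because the snapshot carries the same filesystem UUID as its origin):

lvcreate --snapshot --name data-snap --size 10G /dev/vg0/data
mount -o nouuid,ro /dev/vg0/data-snap /mnt/snap
//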
Interesting to compare the comments to the disparaging statements in the article.
ZFS combines the hardware and software layers, bringing volume, disk, and partition management together in one application.
It is the only production ready journaled CoW file system with data integrity management.
Btrfs is not production ready.
When Android was launched soon after Apple's own iPhone, Steve Jobs threatened to "destroy" it.
Ever since, and across the world, the rivalry between both systems has animated users.
Now the results are in: worldwide, consumers clearly prefer one side — and it's not Steve Jobs'. //
Feelings between Android and Apple were pretty tribal from the get-go. It was Steve Jobs himself who said, when Google rolled out Android a mere ten months after Apple launched the iPhone, "I'm going to destroy Android, because it's a stolen product. I'm willing to go thermonuclear war on this."
Buying a phone is like picking a side in the eternal feud between the Hatfields and the McCoys. Each choice for one side automatically comes with a built-in arsenal of arguments against the other.