Beginners may find it difficult to relate the facts from the formal documentation on the BSD rc.d framework with the practical tasks of rc.d scripting. In this article, we consider a few typical cases of increasing complexity, show rc.d features suited for each case, and discuss how they work. Such an examination should provide reference points for further study of the design and efficient application of rc.d.
Install FreeBSD-13.2 on a dedicated server from a Linux rescue environment
This article is the first of a four-part series on building your own NAS on FreeBSD. This series will cover:
- Selecting a storage drive interface that meets your capacity and performance requirements both now and into the future.
- Why it makes sense to build your own NAS using FreeBSD rather than installing a NAS distribution (even a FreeBSD-based one). We’ll also discuss which configuration and tuning settings are needed.
- The nitty-gritty on sharing: configuring NFS, Samba, and iSCSI shares.
- Software maintenance and monitoring your NAS. //
https://klarasystems.com/articles/part-2-tuning-your-freebsd-configuration-for-your-nas/
https://klarasystems.com/articles/part-3-building-your-own-freebsd-based-nas-with-zfs/
They say you should blog about problems you’ve solved. So here is a blog post about today’s problem: Figuring out how to configure FreeBSD services. We’ll break down the configuration for a simple service, linking you to all the relevant docs along the way.
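As a taste of what that involves, enabling and controlling a service from the command line usually looks something like this (nginx is just an illustrative example, not necessarily the service from the article):
sysrc nginx_enable="YES"   # persist the knob in /etc/rc.conf
service nginx start        # start it right away
service nginx status       # ask the rc.d script whether it is running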
What is mtree?
mtree(8) is a utility included in the base system (/usr/sbin/mtree) that can be used to compare two directory structures, allowing you to spot any kind of difference. By default it does this by comparing file size (in bytes) and type, last modification time, file owner, group owner, the permissions as a numeric value, any optional flags (see ls -lo, and also chflags(1)), and finally any optional soft or hard links the file might have.
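For instance, one common pattern is to record a specification of a tree and verify it later (a quick sketch; the paths and the sha256digest keyword are just examples):
mtree -c -K sha256digest -p /etc > /var/tmp/etc.spec   # record the current state of /etc
mtree -p /etc < /var/tmp/etc.spec                      # later: report anything that changed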
But there's a whole lot more you can do here.
Why I'm writing this guide (short editorial section)
The main reason I'm writing this guide is that I think not many people use mtree to its full potential. To make matters worse, I also think the manual page doesn't do a good job. I mean... if all the EXAMPLES section does is point people at some parameters without actually showing them any examples of how to use those...
FreeBSD Bootcode Updater Utility (Experimental)
This is an attempt to make an easy-to-use FreeBSD bootcode updater utility for GPT/BIOS and EFI system boot.
Installation
There is no installer yet, but you can simply copy and paste the single-line command below into an SSH session to install it:
fetch --no-verify-peer https://github.com/JRGTH/BSD-Bootcode-Updater/archive/master.zip && tar -xvf master.zip --strip-components 1 'BSD-Bootcode-Updater-main/bootcode-update' && chmod 555 bootcode-update && mv bootcode-update /usr/local/sbin/ && rm master.zip && rehash
I am known as a strong ZFS Boot Environment supporter … and not without a reason. I have stated the reasons ‘why’ many times but most (or all) of them are condensed here – https://is.gd/BECTL – in my presentation about it.
The upcoming FreeBSD 13.0-RELEASE looks very promising. In many tests it is almost TWICE as fast as the 12.2-RELEASE. Ouch!
Having 12.2-RELEASE installed, I wanted to try 13.0-BETA* to check whether things that are important to me – like working suspend/resume, for example – work as advertised on the newer version. It is the perfect task for ZFS Boot Environments.
In the example below we will create an entirely new ZFS Boot Environment containing a clone of our current 12.2-RELEASE system and upgrade it there (in the BE) to the 13.0-BETA3 version … and only one reboot will be required – not three as in the typical freebsd-update(8) upgrade procedure.
I assume that you have FreeBSD 12.2-RELEASE installed with ZFS (the default ZFS FreeBSD install) and that it is installed in UEFI or UEFI+BIOS mode.
Here are the steps that will be needed.
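Roughly, the idea looks like this (a sketch only; the BE name, mount point and exact freebsd-update(8) invocations are illustrative, not the literal steps from the post):
bectl create 13.0-BETA3                           # clone the running 12.2-RELEASE system
mkdir -p /tmp/BE && bectl mount 13.0-BETA3 /tmp/BE
freebsd-update -b /tmp/BE -r 13.0-BETA3 upgrade   # fetch the upgrade into the BE
freebsd-update -b /tmp/BE install                 # run again if prompted for further stages
bectl umount 13.0-BETA3 && bectl activate 13.0-BETA3
shutdown -r now                                   # the single reboot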
You may have noticed that there is a trailing slash (/) at the end of the first argument in the above commands:
rsync -a dir1/ dir2
This is necessary to mean “the contents of dir1”. The alternative, without the trailing slash, would place dir1, including the directory, within dir2. This would create a hierarchy that looks like:
~/dir2/dir1/[files]
Always double-check your arguments before executing an rsync command. Rsync provides a method for doing this by passing the -n or --dry-run options. The -v flag (for verbose) is also necessary to get the appropriate output:
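Combining those flags with the directories from the earlier example, a dry run might look like this (a guess at what the original showed here):
rsync -anv dir1/ dir2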
The -P flag is very helpful. It combines the flags --progress and --partial. The first of these gives you a progress bar for the transfers and the second allows you to resume interrupted transfers:
rsync -azP source destination
Q: How can I identify which /dev is which physical hard drive? Which drive is which?!?
A: This is an especially good thing to know when you are trying to replace a failed drive in a SoftRAID array! As of Version 0.7.4020 the serial number is displayed on the WebGUI page Disks > Management > HDD Management if the drive reports it.
Run this one-liner in bash:
for i in $(sysctl -n kern.disks);do printf "%s\t%s\n" $i "$(smartctl -a /dev/$i | grep "Serial Number")";done | sort
This will output a list of all disks' assigned /dev's and serial numbers.
If you want to do it with a script or from the CLI, remember that the disk cannot be mounted or otherwise in use. If you have a few disks to wipe, it will save time in the long run to use a script like the one shown below. This script wipes the first and last 4096 kilobytes of data from a drive, ensuring that any partitioning or metadata is gone so you can then reuse the drive. Warning: all other data on the drive will become inaccessible!
#!/bin/sh
echo "What disk do you want"
echo "to wipe? For example - ada1 :"
read disk
echo "OK, in 10 seconds I will destroy all data on $disk!"
echo "Press CTRL+C to abort!"
sleep 10
diskinfo ${disk} | while read disk sectorsize size sectors other
do
    # Delete MBR, GPT Primary, ZFS(L0L1)/other partition table.
    /bin/dd if=/dev/zero of=/dev/${disk} bs=${sectorsize} count=8192
    # Delete GEOM metadata, GPT Secondary(L2L3).
    /bin/dd if=/dev/zero of=/dev/${disk} bs=${sectorsize} oseek=`expr $sectors - 8192` count=8192
done
How do I safely remove the rest of a GPT?
The disk holds actual data (part of a ZFS pool); I don't want to destroy this
data.
GEOM: da6: the primary GPT table is corrupt or invalid.
GEOM: da6: using the secondary instead -- recovery strongly advised.
//
You need to zero out the backup gpt header. Geom locates that header
using (mediasize / sectorsize) - 1. I think mediasize/sectorsize is
exactly what's displayed by diskinfo -v as "mediasize in sectors", so
that number - 1 would be lastsector in:
dd if=/dev/zero of=/dev/da6 bs=<sectorsize> oseek=<lastsector> count=1
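Putting that together, one way to compute <lastsector> is to pull the fields out of the plain diskinfo(8) output (a sketch; da6 and the field positions assume the single-line `diskinfo da6` format):
sectorsize=$(diskinfo da6 | awk '{print $2}')              # field 2: sector size in bytes
lastsector=$(( $(diskinfo da6 | awk '{print $4}') - 1 ))   # field 4: mediasize in sectors
dd if=/dev/zero of=/dev/da6 bs=${sectorsize} oseek=${lastsector} count=1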
//
In the case where you have an invalid primary header, gpart destroy will not touch the first two sectors. In your case you can wipe only the last sector, as Ian suggested, but use gpart destroy -F da6 instead of dd. //
You need to use gpart destroy -F on the CORRUPTED GPT; this command will wipe the last sector, where the GPT backup header is located. Since the GPT is in the CORRUPT state, the primary header will not be overwritten by this command.
When both the primary and backup headers and tables are valid, gpart destroy overwrites the PMBR and the primary and backup headers.
To set 1000Mbps full-duplex, enter:
ifconfig <interface-name> <IP_address> media 1000baseTX mediaopt full-duplex
For example, to set interface em0 with IP 10.10.1.2 to 100Mbps full duplex, enter:
ifconfig em0 10.10.1.2 media 100baseTX mediaopt full-duplex
If the interface is currently forced to 100 full duplex, in order to change to half duplex you must type the following command:
ifconfig em0 10.10.1.2 media 100baseTX -mediaopt full-duplex
The -mediaopt option disables the specified media options (full-duplex) on the interface, i.e. it goes back to half duplex.
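To make such a setting survive a reboot, it is commonly placed in /etc/rc.conf; a sketch (the netmask is an assumption, since the examples above don't give one):
ifconfig_em0="inet 10.10.1.2 netmask 255.255.255.0 media 100baseTX mediaopt full-duplex"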
Q:
I've written a script to wipe all disks on any machine that netboots. It works fairly well, but I'd like to add verification of the final "zeroing" pass, and only shutdown the machines if the drive reads all zeros successfully, and otherwise show the error. I'm also trying to both minimize dependencies on ports if possible (currently only using pv(1) to show status), and make the final "read" pass perform quickly (the write passes are already fast).
However, I can't figure out what the best way to do this is. It seems the easiest way is to use another port, security/bcwipe, and run it with bcwipe -bfmz /dev/adaX, which both zeros the disks and verifies the write, but this requires another port. I've looked at other, simpler ways of doing it using basic system utilities, like od(1), but then the disks read slowly (about 1/3rd of the speed dd(1) will do with bs=1m). I suspect this is because you can't specify a buffer size for od, and it's reading small chunks. //
dd if=/dev/zero of=/dev/adaX bs=1m
A:
In this particular case I would write a tiny C program for this purpose. The C compiler is in the base system, so you don't need to install additional ports.
Code:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#define BLOCKSIZE (128 * 1024)
uint64_t data[(BLOCKSIZE + 7) / 8];
int
main (void)
{
    int inbytes, i;

    while ((inbytes = read(STDIN_FILENO, data, BLOCKSIZE)) > 0)
        for (i = 0; i < (inbytes + 7) / 8; i++)
            if (data[i])
                exit (1);

    if (inbytes < 0) {
        perror (NULL);
        exit (2);
    }
    return 0;
}
Save that source code as “testzero.c”.
Then compile it like this: cc -O2 -o testzero testzero.c
That'll give you a binary named “testzero”. Put it somewhere in your $PATH so your shell can find it, or type “./testzero” to run it from the current directory. The program reads a file (or device) from standard input and exits with code 0 if the file is all zeros, so you can use it in a shell script like this:
Code:
if testzero < /dev/ada0; then
    echo "allright, disk is zero"
    shutdown -p now
else
    echo "something went wrong!"
    yes | tr -c x '\a'   # call attention
fi
Note that I have set the block size to 128 KB, not 1 MB. In my (limited) testing it was faster with 128 KB (actually, as fast as the speed of the physical disk). This may depend on your kernel settings, CPU cache or file system parameters, though. YMMV, so you might want to test different values. If you change it in the source code, don't forget to recompile the program. //
For simplicity, the program reads the file or device from standard input, so there is no reason to parse the argument vector, so I just specified it as “void” instead of the usual “int argc, char *argv[]”. Besides, the compiler issues a warning if you specify argc and argv without actually using them. Note that you could omit the formal parameter completely (i.e. “()” instead of “(void)”), but I think it's a good habit to specify “void” for documentation purposes, so the author's intention is clear. For similar reasons the return type is specified as “int” – actually that wouldn't be necessary because int is the default return type (not void!). In other words: You can just write “main ()” instead of “int main (void)” if you want – both mean exactly the same, but I adhere to the Python motto “explicit is better than implicit”.
Just in case someone wonders: There is no difference (performance-wise) between using standard input vs. opening a file specified on the command line. Just make sure that the shell opens the file directly and passes it to the program (using redirection syntax with “<”). Do not use something like “cat /dev/foo | testzero” because this will create an additional process for the cat command and connect it with a pipe to the testzero command – the pipe may reduce the performance considerably. (NB: The cat command is often abused; I guess that 99% of uses of cat in shell scripts are superfluous, sometimes even harmful.) //
I made a few modifications to his code to report where the error lies if it finds one, but otherwise this does appear to read at the maximum disk speed. Thanks!
Edit: If anyone is interested here's the code after the changes I made. It's also a bit more verbose.
C:
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#define BLOCKSIZE (128 * 1024)
uint64_t data[(BLOCKSIZE + 7) / 8];
int main(void) {
    int inbytes;
    uint64_t intotal = 0;

    while((inbytes = read(STDIN_FILENO, data, BLOCKSIZE)) > 0) {
        inbytes = (inbytes + 7) >> 3;
        for(int i = 0; i < inbytes; i++) {
            if(data[i]) {
                intotal = (intotal + i) << 3; // Convert back to byte offset
                printf("Non-zero byte detected at offset range: %lu to %lu\n", intotal, intotal + 7);
                exit(1);
            }
        }
        intotal += inbytes;
    }
    if(inbytes < 0) {
        perror(NULL);
        exit(2);
    }
    printf("Disk is fully zeroed.\n");
    return 0;
}
smartctl - Control and Monitor Utility for SMART Disks
SYNOPSIS
smartctl [options] device
DESCRIPTION
smartctl controls the Self-Monitoring, Analysis and Reporting Technology (SMART) system built into most ATA/SATA and SCSI/SAS hard drives and solid-state drives. The purpose of SMART is to monitor the reliability of the hard drive and predict drive failures, and to carry out different types of drive self-tests. smartctl also supports some features not related to SMART. This version of smartctl is compatible with ACS-3, ACS-2, ATA8-ACS, ATA/ATAPI-7 and earlier standards (see REFERENCES below).
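For example, a quick health check and a short self-test on a FreeBSD SATA disk might look like this (the device name is just an example):
smartctl -H /dev/ada0            # overall SMART health assessment
smartctl -t short /dev/ada0      # kick off a short self-test
smartctl -l selftest /dev/ada0   # read the self-test log once it finishes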
TestDisk is OpenSource software and is licensed under the terms of the GNU General Public License (GPL v2+).
TestDisk is powerful free data recovery software! It was primarily designed to help recover lost partitions and/or make non-booting disks bootable again when these symptoms are caused by faulty software: certain types of viruses or human error (such as accidentally deleting a Partition Table). Partition table recovery using TestDisk is really easy.
This is how many modern file system backup programs work. On day 1 you make an rsync copy of your entire file system:
backup@backup_server> DAY1=`date +%Y%m%d%H%M%S`
backup@backup_server> rsync -av -e ssh earl@192.168.1.20:/home/earl/ /var/backups/$DAY1/
On day 2 you make a hard link copy of the backup, then a fresh rsync:
backup@backup_server> DAY2=`date +%Y%m%d%H%M%S`
backup@backup_server> cp -al /var/backups/$DAY1 /var/backups/$DAY2
backup@backup_server> rsync -av -e ssh --delete earl@192.168.1.20:/home/earl/ /var/backups/$DAY2/
“cp -al” makes a hard link copy of the entire /home/earl/ directory structure from the previous day, then rsync runs against the copy of the tree. If a file remains unchanged then rsync does nothing — the file remains a hard link. However, if the file’s contents changed, then rsync will create a new copy of the file in the target directory. If a file was deleted from /home/earl then rsync deletes the hard link from that day’s copy.
In this way, the $DAY1 directory has a snapshot of the /home/earl tree as it existed on day 1, and the $DAY2 directory has a snapshot of the /home/earl tree as it existed on day 2, but only the files that changed take up additional disk space. If you need to find a file as it existed at some point in time you can look at that day’s tree. If you need to restore yesterday’s backup you can rsync the tree from yesterday, but you don’t have to store a copy of all of the data from each day, you only use additional disk space for files that changed or were added.
I use this technique to keep 90 daily backups of a 500GB file system on a 1TB drive.
One caveat: The hard links do use up inodes. If you’re using a file system such as ext3, which has a set number of inodes, you should allocate extra inodes on the backup volume when you create it. If you’re using a file system that can dynamically add inodes, such as ext4, zfs or btrfs, then you don’t need to worry about this.
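Glued together, the daily routine described above might look roughly like this (the host, paths and timestamp format are carried over from the example above and are assumptions, not the author's actual script):
#!/bin/sh
TODAY=$(date +%Y%m%d%H%M%S)
PREV=$(ls -1d /var/backups/2* 2>/dev/null | tail -n 1)         # newest existing snapshot, if any
[ -n "${PREV}" ] && cp -al "${PREV}" "/var/backups/${TODAY}"   # hard-link copy of the last tree
rsync -av -e ssh --delete earl@192.168.1.20:/home/earl/ "/var/backups/${TODAY}/"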
Swap is nothing but space on a disk (a partition or a file) that can be used as virtual memory. In FreeBSD and Unix-like operating systems, it is common to use a whole partition of a hard disk for swapping. When a FreeBSD based server runs out of memory, the kernel can move sleeping or inactive processes into the swap area. A dedicated swap partition goes a long way towards avoiding a system freeze, but if you notice you are running out of RAM, or your applications are consuming too much of it, you may want to set up a swap file. This guide helps you add swap space on a FreeBSD based Unix server.
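On FreeBSD that typically boils down to something like the following (a sketch based on the Handbook approach; the 2 GB size and /usr/swap0 path are illustrative):
dd if=/dev/zero of=/usr/swap0 bs=1m count=2048     # create a 2 GB swap file
chmod 0600 /usr/swap0
echo 'md99 none swap sw,file=/usr/swap0,late 0 0' >> /etc/fstab
swapon -aL                                         # activate "late" swap entries now
swapinfo -h                                        # verify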
Why you shouldn't enable auto reboot after a panic.
40,000 lines of flawed code almost made it into FreeBSD's kernel—we examine how.
Find files Based On their Permissions
The typical syntax to find files based on their permissions is:
$ find -perm mode
The MODE can be given either as numeric/octal permissions (like 777, 666, etc.) or as symbolic permissions (like u=x, a=r+x).
We can specify the MODE in three different ways as listed below.
- If we specify the mode without any prefix, find matches files with exactly those permissions.
- If we use the "-" prefix with the mode, files must have at least the given permission bits set; they may have more.
- If we use the "/" prefix, it is enough that the owner, the group, or others have the given permission on the file. //
find . -not -perm -g=r
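To make the three forms concrete (these examples assume GNU find, which is where the "/" prefix syntax above comes from):
find . -perm 644     # exactly rw-r--r--
find . -perm -644    # at least rw-r--r-- (more bits may be set)
find . -perm /222    # writable by the owner, the group, OR others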