Arguments passed to a script are processed in the same order in which they're sent. The indexing of the arguments starts at one: the first argument can be accessed inside the script using $1, the second using $2, and so on. This representation of the arguments by their position is what "positional parameters" refers to.
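For example, a minimal script (the name and messages are illustrative) that echoes its positional parameters:

```shell
#!/bin/bash
# greet.sh (illustrative): access arguments by their position
echo "first argument:  $1"
echo "second argument: $2"
echo "argument count:  $#"
```

Running bash greet.sh hello world prints hello as the first argument, world as the second, and an argument count of 2.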
Using flags is a common way of passing input to a script. When passing input to the script, each argument is preceded by a flag (usually a single letter) starting with a hyphen (-).
Let’s take a look at the userReg-flags.sh script, which takes three arguments: username (-u), age (-a), and full name (-f).
We'll modify the earlier script to use flags instead of relying on positional parameters. The getopts builtin reads the flags in the input, and OPTARG holds the corresponding values:
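The script itself is not reproduced here; a minimal sketch of what userReg-flags.sh might look like (the variable names are assumptions):

```shell
#!/bin/bash
# userReg-flags.sh (sketch): read username (-u), age (-a) and full name (-f)
while getopts "u:a:f:" opt; do
  case "$opt" in
    u) username="$OPTARG" ;;
    a) age="$OPTARG" ;;
    f) fullname="$OPTARG" ;;
    *) echo "Usage: $0 -u username -a age -f 'full name'" >&2; exit 1 ;;
  esac
done
echo "username: $username, age: $age, full name: $fullname"
```

Invoked as ./userReg-flags.sh -u john -a 30 -f 'John Doe'; since each value is tied to its flag, the order of the arguments no longer matters.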
This repository contains scripts to set up my configuration, tools and environment on all of my computers. The scope is limited to tools and configuration specific to my user account; I maintain separate provisioning for fixed-purpose computers.
With it, I can be productive within 10 minutes of encountering a new or re-installed PC.
[Callan] built a script which runs on every new server he spins up which selects two random colors, checks that they contrast well with each other, don’t create problems for the colorblind, and then applies them to the bash prompt.
Make yourself a temporary alias (a variable) for the SD card, test it, then use that so you can't accidentally target the wrong device, e.g.:
MY_SD=/dev/sdc1
ls $MY_SD
dd if=disk.img of=$MY_SD
In this example "ls" is not really a good test. I'm on a Mac, so I'm not sure whether "diskutil info $MY_SD" would work for you; substitute any command that gives you confidence that MY_SD points at the right device.
If you're always using the same removable medium, it's easy to set up a udev rule that matches this particular device and symlinks (or even renames) it to /dev/whatyouwant.
Or even match any USB-attached mass storage globally, so that for example /dev/sdXn becomes /dev/extUSBXn if you want to.
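As a sketch, such a rule might live in /etc/udev/rules.d/ (the filename, serial string, and symlink name below are all assumptions; get the real attribute values for your device from udevadm info --query=all --name=/dev/sdX):

```
# /etc/udev/rules.d/99-sdcard.rules (illustrative)
# Match one specific card reader by its serial and expose it as /dev/my_sdcard
SUBSYSTEM=="block", ENV{ID_SERIAL}=="Generic_SD_Card_1234", SYMLINK+="my_sdcard"
```

After reloading the rules (udevadm control --reload) and re-plugging the device, a command like dd if=disk.img of=/dev/my_sdcard always targets the right device.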
sed, a stream editor
To override a non-builtin with a function, use command. For example:
ls() { command ls -l; }
which is the equivalent of alias ls='ls -l'.
command works with builtins as well. So, your cd could also be written as:
cd() { echo before; command cd "$1"; echo after; }
To bypass an alias and run the original command or builtin, you can put a \ at the beginning:
\ls # bypasses an alias named ls and executes /bin/ls directly
Note that the backslash only suppresses alias expansion; to bypass a function as well, use command ls.
rm typically does not delete the targets of symlinks, but to say that it "does not follow symlinks" is not quite accurate, at least on my system (GNU coreutils 8.25). And deleting files is a place where accuracy is pretty important! Let's take a look at how it behaves in a few situations.
If your symlink is to a file, rather than to a directory, there is no plausible way to accidentally delete the target using rm. You would have to do something very explicit like rm "$(readlink file)".
Symlinks to directories, however, get a bit dicey, as you saw when you accidentally deleted one.
These are all safe:
- rm test2 (deletes the symlink only)
- rm -r test2 (deletes the symlink only)
- rm -rf test2 (deletes the symlink only)
- rm test2/ (rm: cannot remove 'test2/': Is a directory -- no action taken)
- rm -rf *2 (or any other glob matching the symlink -- deletes the symlink only)
These are not safe:
- rm -r test2/ (rm: cannot remove 'test2/': Not a directory -- but deletes the contents of the test1 directory)
- rm -rf test2/ (deletes the contents of the directory, leaves the symlink, no error)
- rm -rf test2/* (deletes the contents of the directory, leaves the symlink, no error)
The last unsafe case is probably obvious behavior, at least to someone well-acquainted with the shell, but the two before it are quite a bit more subtle and dangerous, especially since tab-completing the name of test2 will drop the trailing slash in for you!
It's interesting to note that test has similar behavior, considering a symlink to a directory with a trailing slash to be not a symlink but a directory, while a symlink without a trailing slash is both:
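A quick way to see this with throwaway directories (the names mirror the example above; test2 is a symlink to the directory test1):

```shell
# Set up a symlink to a directory in a scratch location
cd "$(mktemp -d)"
mkdir test1
ln -s test1 test2

test -L test2 && test -d test2 && echo "test2 is both a symlink and a directory"
test -L test2/ || echo "test2/ is not considered a symlink"
test -d test2/ && echo "test2/ is a directory"
```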
rsync is fast and easy:
rsync -av --progress sourcefolder /destinationfolder --exclude thefoldertoexclude
You can use --exclude multiple times. Note that the directory thefoldertoexclude after the --exclude option is relative to the sourcefolder, i.e., sourcefolder/thefoldertoexclude.
You can also add -n for a dry run to see what would be copied before performing the real operation; if everything looks OK, remove -n from the command line.
Environment variables are named strings available to all applications. Variables are used to adapt each application's behavior to the environment it is running in. You might define paths for files, language options, and so on. You can check each application's manual to see which variables that application uses.
To see your currently defined variables, open up your terminal and type the env command.
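For example, to define a variable and confirm it is part of the environment (EDITOR is a conventional variable name; the value here is just an example):

```shell
# export makes the variable visible to child processes such as env
export EDITOR=vim
env | grep '^EDITOR='
```

This prints EDITOR=vim, showing that the variable was inherited by the env child process.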
If you are using another Linux distribution, such as Debian, Ubuntu, SUSE, or Slackware Linux, try the following generic procedure. First, save the current firewall rules:
iptables-save > /root/firewall.rules
OR
sudo iptables-save > /root/firewall.rules
Next, type the following commands (logged in as root) at the bash prompt:
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
Or create a shell script as follows and run it to disable the firewall:
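A sketch of such a script, simply collecting the commands above (it must be run as root; this effectively disables the firewall by flushing all rules, deleting custom chains, and setting permissive default policies):

```
#!/bin/sh
# flush-iptables.sh (sketch): disable the firewall
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
```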
The quick and simple editor for cron schedule expressions by Cronitor
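A cron schedule expression has five fields: minute, hour, day of month, month, and day of week. For example (the script path is illustrative):

```
# Run a backup at 02:30 every weekday (Monday-Friday)
30 2 * * 1-5 /usr/local/bin/backup.sh
```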
The preferred method to check your Debian version is to use the lsb_release utility which displays LSB (Linux Standard Base) information about the Linux distribution. This method will work no matter which desktop environment or Debian version you are running.
lsb_release -a
Checking Debian Version using the /etc/issue file
The following cat command will display the contents of /etc/issue, which contains a system identification text:
cat /etc/issue
Checking Debian Version using the /etc/os-release file
/etc/os-release is a file which contains operating system identification data, and can be found only on the newer Debian distributions running systemd.
This method will work only if you have Debian 9 or newer:
cat /etc/os-release
Checking Debian Version using the hostnamectl command
hostnamectl is a command that allows you to set the hostname, but you can also use it to check your Debian version.
This command will work only on Debian 9 or newer versions:
hostnamectl
I made this monstrosity and it's mostly useless.
I have a few boxes of old optical media that I've been wanting to transfer to my NAS for years. I've dreaded the labor it takes to transfer so many discs, so I had an idea: make a big stack of super-cheap upcycled CD/DVD/BR drives connected to a 16x powered USB 3.0 hub. My naive theory was that parallelizing this task would make it easier.
Boy, I was wrong! It turns out that it does not reduce the labor involved; quite the opposite, actually. I have no idea which drive is which, and disc reading fails frequently due to disc damage. I dunno, maybe some drives are busted too. It's pure chaos. Moreover, these drives are pretty fast when the ripping process works, so I don't have the time to use more than 5-6 drives simultaneously. I guess I was expecting it to take much, much longer.
So I don't know what to do with this contraption now.
Edit: yes, that is duct tape.
The A.R.M. (Automatic Ripping Machine) detects the insertion of an optical disc, identifies the type of media, and autonomously performs the appropriate action:
- DVD / Blu-ray -> Rip with MakeMKV and Transcode with Handbrake
- Audio CD -> Rip and Encode to FLAC and Tag the files if possible.
- Data Disc -> Make an ISO backup
It runs on Linux, it's completely headless and fully automatic, requiring no interaction or manual input to complete its tasks (other than inserting the disc). Once it completes a rip, it ejects the disc for you and you can pop in another one.
Rsync, or Remote Sync, is a free command-line tool that lets you transfer files and directories to local and remote destinations. Rsync is used for mirroring, performing backups, or migrating data to other servers.
This tool is fast and efficient, copying only the changes from the source and offering customization options.
Follow this tutorial to learn how to use rsync, with 20 command examples covering most use cases in Linux.
Note: Be careful how you use the trailing slash in the source path when syncing directories. The trailing slash plays an important role. If you enter the trailing slash on the source, the rsync command does not create the source folder on the destination; it only copies the directory's files. When you do not use the trailing slash, rsync also creates the original directory inside the destination directory.
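A quick demonstration of the trailing-slash rule with throwaway directories (this assumes rsync is installed; the names are illustrative):

```shell
# Set up a small source tree in a scratch location
cd "$(mktemp -d)"
mkdir src && echo hello > src/file.txt
mkdir with_slash without_slash

rsync -a src/ with_slash/    # trailing slash: copies contents -> with_slash/file.txt
rsync -a src without_slash/  # no slash: copies the directory -> without_slash/src/file.txt
```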
“The open-source ecosystem is one of the grandest enterprises in human history,” says Sergey Bratus, the DARPA program manager behind the project.
“It’s now grown from enthusiasts to a global endeavor forming the basis of global infrastructure, of the internet itself, of critical industries and mission-critical systems pretty much everywhere,” he says. “The systems that run our industry, power grids, shipping, transportation.”
Threats to open source
Much of modern civilization now depends on an ever-expanding corpus of open-source code because it saves money, attracts talent, and makes a lot of work easier.
But while the open-source movement has spawned a colossal ecosystem that we all depend on, we do not fully understand it, experts like Aitel argue. There are countless software projects, millions of lines of code, numerous mailing lists and forums, and an ocean of contributors whose identities and motivation are often obscure, making it hard to hold them accountable.
That can be dangerous. For example, hackers have quietly inserted malicious code into open-source projects numerous times in recent years. Back doors can long escape detection, and, in the worst case, entire projects have been handed over to bad actors who take advantage of the trust people place in open-source communities and code. Sometimes there are disruptions or even takeovers of the very social networks that these projects depend on. Tracking it all has been mostly—though not entirely—a manual effort, which means it does not match the astronomical size of the problem.
The researchers want insight into what kinds of events and behavior can disrupt or hurt open-source communities, which members are trustworthy, and whether there are particular groups that justify extra vigilance. These answers are necessarily subjective. But right now there are few ways to find them at all.
Experts are worried that blind spots about the people who run open-source software make the whole edifice ripe for potential manipulation and attacks. For Bratus, the primary threat is the prospect of “untrustworthy code” running America’s critical infrastructure—a situation that could invite unwelcome surprises.
Margin’s work maps out who is working on what specific parts of open-source projects. For example, Huawei is currently the biggest contributor to the Linux kernel. Another contributor works for Positive Technologies, a Russian cybersecurity firm that—like Huawei—has been sanctioned by the US government, says Aitel. Margin has also mapped code written by NSA employees, many of whom participate in different open-source projects.
“This subject kills me,” says d’Antoine of the quest to better understand the open-source movement, “because, honestly, even the most simple things seem so novel to so many important people. The government is only just realizing that our critical infrastructure is running code that could be literally being written by sanctioned entities. Right now.”
This kind of research also aims to find underinvestment—that is, critical software run entirely by one or two volunteers. It’s more common than you might think—so common that one way software projects currently measure risk is the “bus factor”: does this whole project fall apart if just one person gets hit by a bus?
The hope is that greater understanding will make it easier to prevent a future disaster, whether it’s caused by malicious activity or not.
Find a List of Logged In Users
You can use the "who" command to find a list of users currently logged into the system. Using the -u (--users) option will also display the PID (process ID) of each user's shell session.
End the Users Shell Process
When you are ready to kick the user, send SIGHUP to the user's shell process. SIGHUP is the same signal the process would receive when its controlling terminal is closed.
sudo kill -HUP 9940
The number at the end of the above command is the process ID of the user's shell. We found the process ID using the who command above.
Sometimes there is a process that hangs. In that case, we can send a SIGKILL (kill -9) to the PID. This is exactly what it sounds like: it will immediately terminate ANY process, so be careful.
sudo kill -9 9940
This project is a lightweight authentication server that provides an opinionated, simplified LDAP interface for authentication. It integrates with many backends, from KeyCloak to Authelia to Nextcloud and more!
The goal is not to provide a full LDAP server; if you're interested in that, check out OpenLDAP. This server is a user management system that is:
- simple to set up (no messing around with slapd),
- simple to manage (friendly web UI),
- low resources,
- opinionated with basic defaults so you don't have to understand the subtleties of LDAP.
It mostly targets self-hosting servers, with open-source components like Nextcloud, Airsonic and so on that only support LDAP as a source of external authentication.
For more features (OAuth/OpenID support, reverse proxy, ...) you can install other components (KeyCloak, Authelia, ...) using this server as the source of truth for users, via LDAP.
First, a mini-primer on iptables.
iptables is both a command and the name of the Linux firewall subsystem. The command is used to set up firewall rules in RAM. The iptables firewall rules are arranged first into tables: there is the default filter table, but also nat, mangle, raw and security tables, for various purposes. fail2ban is doing traffic filtering, so it uses the filter table.
The tables are then further divided into filter chains. Each table has certain standard chains: for the filter table, the standard chains are INPUT, FORWARD and OUTPUT. The FORWARD chain is only used when the system is configured to route traffic for other systems. The INPUT chain deals with incoming traffic to this system.
If fail2ban added its rules directly to the INPUT chain and wiped that chain clean when all the bans expired, then you would have to turn over full control of your firewall input rules to fail2ban - you could not easily have any custom firewall rules in addition to what fail2ban does. This is clearly not desirable, so fail2ban won't do that.
Instead, fail2ban creates its own filter chain it can fully manage on its own, and adds on start-up a single rule to the INPUT chain to send any matching traffic to be processed through fail2ban's chain.
For example, when configured to protect sshd, fail2ban should be executing these commands at start-up:
iptables -N f2b-sshd
iptables -A f2b-sshd -j RETURN
iptables -I INPUT -p tcp -m multiport --dports <TCP ports configured for sshd protection> -j f2b-sshd
These commands create an f2b-sshd filter chain, set RETURN as its last rule (so that after any fail2ban rules have been processed, the normal processing of INPUT rules continues as without fail2ban), and finally add a rule to the beginning of the INPUT chain to catch any SSH traffic and send it first through the f2b-sshd chain.
Now, when fail2ban needs to ban an IP address for SSH use, it will just insert a new rule into the f2b-sshd chain.
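With fail2ban's default iptables ban action, the inserted rule looks roughly like this (the address is illustrative; the exact target depends on your configured blocktype):

```
iptables -I f2b-sshd 1 -s 192.0.2.15 -j REJECT --reject-with icmp-port-unreachable
```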
If you are using firewalld or some other system that manages iptables firewall rules for you, or if you clear all the iptables rules manually, then these initial rules, and possibly the entire f2b-sshd filter chain, may be wiped out. You should make sure that any firewall management tool you might be using maintains that initial rule in the INPUT chain and doesn't touch the f2b-sshd chain at all.
I know this is an old thread but this is what pops up on a google search for this subject. I didn't see anyone give the most correct answer (imo) so here it is.
To change a Linux named port definition globally, edit /etc/services and find these lines:
ssh 22/tcp
ssh 22/udp
There is no need to change anything in the fail2ban configuration or in any other application that uses Linux named ports.
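For example, to move the ssh named port to 2222 (the port number is illustrative), you would change those two lines to:

```
ssh 2222/tcp
ssh 2222/udp
```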