Rawhide (rh(1)) lets you search for files on the command line using expressions and user-defined functions in a mini-language inspired by C. It's like find(1), but more fun to use.
But what if you want to be really precise about the command? Using the above example, not only allowing rsync but also pinning down the exact path and arguments? You can cheat and find out what the command you are sending looks like by temporarily replacing your wrapper script with this:
#!/bin/sh
DEBUG="logger"                 # Linux
#DEBUG="syslog -s -l note"     # OSX
if [ -n "$SSH_ORIGINAL_COMMAND" ]; then
    $DEBUG "Passed SSH command $SSH_ORIGINAL_COMMAND"
elif [ -n "$SSH2_ORIGINAL_COMMAND" ]; then
    $DEBUG "Passed SSH2 command $SSH2_ORIGINAL_COMMAND"
else
    $DEBUG "Not passed a command."
fi
Then run the ssh command and see what it looks like in the log file. Copy that into your original wrapper script, and you are good to go. So
ssh -t -i /home/raub/.ssh/le_key raub@virtualpork echo "Hey"
Results in
Dec 26 13:34:05 virtualpork syslog[64541]: Passed SSH command echo Hey
While
rsync -avz -e 'ssh -i /home/raub/.ssh/le_key' raub@virtualpork:Public /tmp/backup/
results in
Dec 26 13:28:17 virtualpork syslog[64541]: Passed SSH command rsync --server --sender -vlogDtprze.iLs . Public
The latter means our little wrapper script would then look like:
#!/bin/sh
case "$SSH_ORIGINAL_COMMAND" in
    "rsync --server --sender -vlogDtprze.iLs . Public")
        $SSH_ORIGINAL_COMMAND
        ;;
    *)
        echo "Permission denied."
        exit 1
        ;;
esac
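For reference, a wrapper like this is usually wired up through a command= option in the server's ~/.ssh/authorized_keys, so that key can only ever run the wrapper (the script path and key below are placeholders, not taken from the article):
command="/usr/local/bin/rsync-wrapper.sh",no-port-forwarding,no-X11-forwarding ssh-ed25519 AAAA...le_key... raub@client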
///
find command:
grep "Passed SSH command" /var/log/syslog
lsattr -aR .//. | sed -rn '/i.+\.\/\/\./s/\.\/\///p'
lsattr -Ra 2>/dev/null /|awk '$1 ~ /i/ && $1 !~ /^\// {print}'
Change i to d to find the "nodump" attribute/flag.
FreeBSD:
find . -flags +nodump
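For context, a quick sketch of setting those flags so the searches above have something to find (paths are just examples):
# Linux: set the immutable attribute; the file then shows up in the lsattr output above
sudo chattr +i ./protected.conf
# FreeBSD: set the nodump flag, then find it
chflags nodump ./scratch.img
find . -flags +nodump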
GNU Rush is a Restricted User Shell, designed for sites that provide limited remote access to their resources, such as savannah.gnu.org. Its main program, rush, is configured as the login shell for users who are allowed only remote access to the machine.
To combine stderr and stdout into the stdout stream, we append this to a command:
2>&1
e.g. to see the first few errors from running g++ main.cpp:
g++ main.cpp 2>&1 | head
What does 2>&1 mean, in detail?
File descriptor 1 is the standard output (stdout).
File descriptor 2 is the standard error (stderr).
At first, 2>1 may look like a good way to redirect stderr to stdout. However, it will actually be interpreted as "redirect stderr to a file named 1". & indicates that what follows and precedes is a file descriptor, and not a filename. Thus, we use 2>&1. Consider >& to be a redirect merger operator.
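A small sketch of why the order of redirections matters (build.log is just an example filename):
# stdout is pointed at build.log first, then stderr is pointed at wherever stdout now points,
# so both streams end up in the file:
g++ main.cpp > build.log 2>&1
# Here stderr is pointed at the original stdout (the terminal) first, and only then is stdout
# sent to the file, so errors still appear on screen:
g++ main.cpp 2>&1 > build.log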
This guide is meant to take you step-by-step through the creation of a complete Standard Ebook. While it might seem a little long, most of the text is a description of how to use various automated scripts. It can take just an hour or two for an experienced producer to produce a draft ebook for proofreading (depending on the complexity of the ebook, of course).
Our toolset is GNU/Linux-based, and producing an ebook from scratch currently requires working knowledge of the epub file format and of Unix-like systems like Mac or Linux.
Our toolset doesn’t yet work natively on Windows, but there are many ways to run Linux from within Windows, including one that is directly supported by Microsoft themselves.
Your seamless Hypervisor
XCP-ng: the user-friendly, high-performance virtualization solution, developed collaboratively for unrestricted features and open-source accessibility.
find . -iname "foo*" | while read -r f
do
    # ... loop body, refer to the current file as "$f"
done
Alternate:
$ for x in *; do echo "file: '${x}'"; done
or
for x in *
do
    echo "file: '${x}'"
done
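If filenames may contain spaces or newlines, a null-delimited variant is safer (a sketch, assuming your find supports -print0 and you are running Bash):
find . -iname "foo*" -print0 | while IFS= read -r -d '' f
do
    printf 'file: %s\n' "$f"
done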
Many times when writing Shell scripts, you may find yourself in a situation where you need to perform an action based on whether a file exists or not.
In Bash, you can use the test command to check whether a file exists and determine the type of the file.
The test command takes one of the following syntax forms:
test EXPRESSION
[ EXPRESSION ]
[[ EXPRESSION ]]
If you want your script to be portable, you should prefer using the old test [ command, which is available on all POSIX shells. The new upgraded version of the test command [[ (double brackets) is supported on most modern systems using Bash, Zsh, and Ksh as a default shell.
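For example, a minimal existence check using the portable single-bracket form (/etc/passwd is just a sample path):
FILE=/etc/passwd
if [ -f "$FILE" ]; then
    echo "$FILE exists and is a regular file."
else
    echo "$FILE does not exist."
fi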
There are an awful lot of people who feel that simply because this is Linux, they have some kind of right to get it for free. Unfortunately, they don't.
That is not what the "free" in Free Software means, and it never was. Red Hat puts an enormous amount of work into developing Free Software, into making sure its code makes its way back upstream, and into producing safe, secure, and long-term stable supported versions of inherently rapidly changing FOSS software, aimed primarily at large enterprise customers. //
And perhaps the clearest sign that it's not really interested in dealing with small users and small customers is that it continues to make the product available free of charge for those who only want up to 16 servers. //
There are a host – pun intended – of other distros out there if you don't want to pay for your Linux. If you are happy to pay but you feel aggrieved with IBM or Red Hat, both Canonical and SUSE will be happy to take your money and provide you with enterprise-level support, and both of them let you get and use a version of their enterprise OS entirely free of charge.
Welcome back to our mini-series on square brackets. In the previous article, we looked at various ways square brackets are used at the command line, including globbing. If you’ve not read that article, you might want to start there.
Square brackets can also be used as a command. Yep, for example, in:
[ "a" = "a" ]
which is, by the way, a valid command that you can execute. [ ... ] is a command. Notice that there are spaces between the opening bracket [ and the parameters "a" = "a", and then between the parameters and the closing bracket ]. That is precisely because the brackets here act as a command, and you are separating the command from its parameters.
You would read the above line as “test whether the string "a" is the same as the string "a"”. If the premise is true, the [ ... ] command finishes with an exit status of 0. If not, the exit status is 1.
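You can see that exit status for yourself:
[ "a" = "a" ]; echo $?    # prints 0 (true)
[ "a" = "b" ]; echo $?    # prints 1 (false)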
This script is the official tool for converting a CentOS 7 server with Plesk to AlmaLinux 8. It uses the AlmaLinux ELevate tool, which is based on the leapp modernization framework. The script includes additional repository and configuration support provided by Plesk.
In this article, we'll show you how to customize the Logical Volume on a Dedicated Server.
The Logical Volume Manager (LVM) is used to manage the storage space on Linux dedicated servers that were purchased either as part of a server deal or before October 20, 2021. If you create or have created one of these dedicated servers with an IONOS image, the entire storage space of the hard disk(s) is left unpartitioned when the server is made available to you, which allows you to distribute the storage space as you see fit.
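As a rough sketch of what distributing that space with LVM looks like (device and volume names are placeholders, not IONOS specifics):
pvcreate /dev/sdb                    # mark the unused disk as an LVM physical volume
vgcreate vg_data /dev/sdb            # create a volume group on it
lvcreate -L 100G -n lv_www vg_data   # carve out a 100 GB logical volume
mkfs.ext4 /dev/vg_data/lv_www        # put a filesystem on it and mount as usual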
Does anyone have a bash script that will email or notify someone in the case of a successful login to an SSH server? I want to be notified if anyone logs into my personal box.
I'm using Ubuntu 12.04
For directories, read permission means that the user may list the contents of the directory. Write permission means that the user may create or delete files and folders in the directory. Execute permission means that the user may enter the directory.
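For example (the path is hypothetical):
chmod 750 /srv/shared    # owner: rwx, group: r-x (may list and enter), others: no access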
On a Linux system, when changing the ownership of a symbolic link using chown, by default it changes the target of the symbolic link (ie, whatever the symbolic link is pointing to).
If you'd like to change ownership of the link itself, you need to use the -h option to chown:
-h, --no-dereference affect each symbolic link instead of any referenced file (useful only on systems that can change the ownership of a symlink)
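A quick sketch (the link name and user are made up for illustration):
ln -s /etc/hosts mylink
sudo chown alice mylink      # changes the owner of /etc/hosts, the target
sudo chown -h alice mylink   # changes the owner of the symlink itself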
But if you are over a slow network or with huge files, it would be nice to have a progress bar. Sure, you could write your own version of copy, but wouldn’t it be nice to have some more generic options?
ONE WAY
The pv program can do some of the things you want. It monitors data moving through a pipe or, at least, through its standard output. Think of it as cat with a meter. //
There is also progress. It looks around for programs running like cp, mv, dd, and many more, looks at their open files, and shows you progress information for those programs. It only does this once, so you’ll typically marry it with the watch command or use the -M option. //
If you want to add a progress bar to your shell scripts directly, try gum
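For instance, a couple of minimal sketches (file names are placeholders):
# Copy a file with a progress meter by piping it through pv
pv bigfile.iso > /mnt/backup/bigfile.iso
# Watch an already-running cp, mv, or dd from another terminal
watch progress    # re-run progress periodically
progress -M       # or let progress keep monitoring on its own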
So, it is not good practice to allow direct root login via SSH; instead, create non-root accounts with sudo access. Whenever root access is needed, first log in as a normal user and then use su to switch over to the root user. To disable direct SSH root logins, follow our article below, which shows how to disable and limit root login in SSH.
Disable SSH Root Login and Limit SSH Access
However, this guide shows a simple way to know when someone logs in as root or as a normal user: an email alert notification is sent to the specified email address along with the IP address of the last login. //
echo 'ALERT - Root Shell Access (server) on:' `date` `who` | mail -s "Alert: Root Access from `who | cut -d'(' -f2 | cut -d')' -f1`" name@example.com
To exclude multiple files or directories simply specify multiple --exclude options:
rsync -a --exclude 'file1.txt' --exclude 'dir1/*' --exclude 'dir2' src_directory/ dst_directory/
If you prefer to use a single --exclude option you can list the files and directories you want to exclude in curly braces {} separated by a comma as shown below:
rsync -a --exclude={'file1.txt','dir1/*','dir2'} src_directory/ dst_directory/
If the number of the files and/or directories you want to exclude is large, instead of using multiple --exclude options you can specify the files and directories you want to exclude in a file and pass the file to the --exclude-from option.
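For example (the list file name and its patterns are just illustrations):
# exclude-list.txt contains one pattern per line:
#   file1.txt
#   dir1/*
#   dir2
rsync -a --exclude-from='exclude-list.txt' src_directory/ dst_directory/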
iptables-save > /root/firewall_rules.backup
On older Linux systems you have the option of stopping the iptables service with service iptables stop, but on newer ones you just need to wipe out all the rules and allow all traffic through the firewall. That is effectively the same as stopping the firewall.
Use the commands below to do that.
iptables -F
iptables -X
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P FORWARD ACCEPT
Where:
-F: Flush all rules from all chains
-X: Delete all user-defined chains
-P INPUT/OUTPUT/FORWARD ACCEPT: Set the default policy of the built-in chains to ACCEPT
Once done, check the current firewall policies. The output should look like the one below, which means everything is accepted (effectively the same as the firewall being disabled/stopped):
iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
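If you later want to bring the saved rules back, restore them from the backup taken above:
iptables-restore < /root/firewall_rules.backup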