MTSkibum said:
Somewhere a web developer chose an arbitrary nvarchar length for the password and is storing it unencrypted in a SQL database. That is how you ended up with the maximum password length.
There's more to the story, but the relevant part is that way back in 1976 UNIX systems hashed passwords with a DES-based algorithm which was limited to two characters of salt and eight characters of password. It wasn't until 1994 that Poul-Henning Kamp replaced this with a more advanced hash based on MD5 for FreeBSD, and this was adopted by just about everybody. However, not only did applications keep using the old crypt(3) implementation long after that, they also stuck with the idea that an eight-character limit on your password was reasonable, and even that sixteen was fair if you used a more secure algorithm.
With this in mind, fixed-length fields for passwords or password hashes were considered acceptable for far longer than they should have been.
What is mtree?
mtree(8) is a utility included in the base system (/usr/sbin/mtree) and can be used to compare two directory structures thus allowing you to spot any kind of difference. By default it does this by comparing file size (in bytes) and type, last modification time, file owner, group owner, the permissions as a numeric value, any optional flags (see ls -lo, and also chflags(1)) and finally any optional soft or hard links the file might have.
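For instance, a minimal round trip might look like this (the directory and spec file paths are just examples):
# record the current state of a directory tree
mtree -c -p /etc > /tmp/etc.spec
# later: report anything that has changed since the spec was taken
mtree -p /etc -f /tmp/etc.spec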
But there's a whole lot more you can do here.
Why I'm writing this guide (short editorial section)
The main reason I'm writing this guide is that I think not many people use mtree to its full potential. To make matters worse, I also think the manual page doesn't do a good job. I mean... if all the EXAMPLES section does is point people to some parameters without actually showing any examples of how to use them...
On Unix-like operating systems, the diff command analyzes two files and prints the lines that are different. In essence, it outputs a set of instructions for how to change one file to make it identical to the second file.
This page covers the GNU/Linux version of diff.
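A minimal example, with two hypothetical files:
diff old.txt new.txt      # classic output: instructions to turn old.txt into new.txt
diff -u old.txt new.txt   # unified format, the one used by patch and most tools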
Want to have crontab use the editor of your choice instead of the other way around? This tutorial shows you how. These instructions will work with Linux, macOS and other Unix-like operating systems. //
select-editor
or
echo 'export VISUAL=nano' >> ~/.bash_profile
source ~/.bash_profile
or, equivalently:
. ~/.bash_profile
In this article I will show the format of a crontab and explain how to schedule a cron job in Linux.
You will also find here the most popular examples of cron job schedules, such as every minute cron job, every 5 minutes, every hour, every day (daily cron job) and others.
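As a quick preview, here is the five-field format with a few of those schedules (the script path is a placeholder):
# minute hour day-of-month month day-of-week command
*    * * * * /path/to/job.sh    # every minute
*/5  * * * * /path/to/job.sh    # every 5 minutes
0    * * * * /path/to/job.sh    # every hour, on the hour
0    0 * * * /path/to/job.sh    # every day at midnight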
Q:
I often see tutorials online that connect various commands with different symbols. For example:
command1 | command2
command1 & command2
command1 || command2
command1 && command2
Others seem to be connecting commands to files:
command1 > file1
command1 >> file1
What are these things? What are they called? What do they do? Are there more of them?
A:
These are called shell operators and yes, there are more of them. I will give a brief overview of the most common among the two major classes, control operators and redirection operators, and how they work with respect to the bash shell.
Note that all of these are operators, not commands:
- && — this is a logical AND and is used to chain multiple commands; commands to the right of the operator are executed if the command to the left succeeds
- || — this is the logical OR and is used to chain multiple commands; commands to the right of the operator are executed if the command to the left fails
- / — not a command, not an operator. Period.
- ; — this is a command separator; you use it to separate commands where you want all of them to execute, one after the other, regardless of success or failure
- {} (and () ) — operators that do a bunch of stuff; I tend to use () for grouping commands into one “unit” (it runs them in a subshell), and {} for brace expansion on the command line (a bit complicated to go into right here)
To summarize (non-exhaustively) bash's command operators/separators:
- | pipes (pipelines) the standard output (stdout) of one command into the standard input of another one. Note that stderr still goes to its default destination, whatever that happens to be.
- |& pipes both stdout and stderr of one command into the standard input of another one. Very useful, available in bash version 4 and above.
- && executes the right-hand command of && only if the previous one succeeded.
- || executes the right-hand command of || only if the previous one failed.
- ; executes the right-hand command of ; always, regardless of whether the previous command succeeded or failed. Unless set -e was previously invoked, which causes bash to fail on an error.
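To illustrate most of these at once (the file and directory names are made up):
# && runs the echo only if tar succeeded; || then reports a failure
tar -czf backup.tar.gz docs/ && echo "archived" || echo "archive failed" >&2

# | passes only stdout; the error for /nonexistent still reaches the terminal
ls /etc /nonexistent | wc -l

# |& passes stdout and stderr together (bash 4+)
ls /etc /nonexistent |& wc -l

# ( ) runs in a subshell: the cd does not affect the calling shell
( cd /tmp && ls )

# { } groups commands in the current shell; one redirection covers both
{ echo first; echo second; } > log.txt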
If you use the ls -li command (the -i option shows the inode number), you’ll see that the file’s link count is 2. The link count comes right after the file permissions field. //
You should not create a hard link to a directory
You can create a soft link to a directory but when you try to create a hard link to a directory, you’ll see an error like this:
ln: newdir/test_dir: hard link not allowed for directory
Why are hard links not allowed for directories? Because hard links to directories can create loops in the filesystem tree and thus break it. Theoretically, you can create hard links to directories using the -d or -F option, but most Linux distributions won’t allow that even if you are the root user.
https://askubuntu.com/questions/210741/why-are-hard-links-not-allowed-for-directories
//
Bonus Tip: How to find all hard links to a given file
If you see that a file has more than one link count, you may get curious about the other hard links associated with it.
One way to find that is using the inode number of the file. You can use the ls -i command or the stat command to get the inode number.
Once you have the inode number, you can see all the links associated with it using the find command.
find . -inum inode_number
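Putting it together with a hypothetical file name (inode numbers are only unique within one filesystem, hence -xdev):
inum=$(stat -c %i file.txt)   # GNU stat; on BSD/macOS use: stat -f %i file.txt
find . -xdev -inum "$inum"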
smartctl - Control and Monitor Utility for SMART Disks
SYNOPSIS
smartctl [options] device
DESCRIPTION
smartctl controls the Self-Monitoring, Analysis and Reporting Technology (SMART) system built into most ATA/SATA and SCSI/SAS hard drives and solid-state drives. The purpose of SMART is to monitor the reliability of the hard drive and predict drive failures, and to carry out different types of drive self-tests. smartctl also supports some features not related to SMART. This version of smartctl is compatible with ACS-3, ACS-2, ATA8-ACS, ATA/ATAPI-7 and earlier standards (see REFERENCES below).
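A few typical invocations (here /dev/sda is an assumed device name; adjust it for your system):
smartctl -H /dev/sda         # overall health self-assessment
smartctl -a /dev/sda         # print all SMART information
smartctl -t short /dev/sda   # start a short self-test in the background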
In the end, you want something like this:
for i in {1..100}; do cp test.ogg "test$i.ogg"; done
Or, as an alternative
i=0
while (( i++ < 100 )); do
    cp test.ogg "test$i.ogg"
done
openssl rand -out sample.txt -base64 805306368
Alternatively, you could use /dev/urandom, but it would be a little slower than OpenSSL:
dd if=/dev/urandom of=sample.txt bs=1G count=1
Personally, I would use bs=64M count=16 or similar:
dd if=/dev/urandom of=sample.txt bs=64M count=16
//
Since your goal is to create a 1GB file, you could also use the yes command instead of dd. Note that, unlike the options above, the content will be a repeated string rather than random data:
yes [text or string] | head -c [size of file] > [name of file]
Sample usage:
yes 'this is test file' | head -c 100KB > test.file
It is the late 1990s and the computer server world is dominated by enterprise UNIX operating systems, all competing with each other. Windows 2000 is not out yet and Windows NT 4 is essentially a toy that lesser mortals run on their Intel PCs, which they laughingly call ‘servers’. Your company has a commercial UNIX and it’s called Solaris. Your UNIX is very popular and is a leading platform. Your UNIX, however, has some major deficiencies when it comes to storage.
IRIX – a competing proprietary UNIX – has the fantastic XFS file system, which vastly outperforms your own file system, still UFS (“Unix File System”, originally developed in the early 1980s), which doesn’t even get journalling until Solaris 7 (November 1998). IRIX had XFS baked into it from 1994. IRIX also had a great volume manager – whereas Solaris’ ‘SVM’ was generally regarded as terrible and was an add-on product that didn’t appear as part of Solaris itself until Solaris 8 in 2000. //
ZFS – and sadly btrfs – are both rooted in a 1990s monolithic model of servers and storage. btrfs hasn’t caught on in Linux for a variety of reasons, but most of all it’s because it simply isn’t needed. XFS runs rings around both in terms of performance and scales to massive volume sizes. LVM supports XFS by adding COW snapshots and clones, and even clustering if you so want. I believe the interesting direction in file systems is actually things like Gluster and Ceph – file systems designed with the future in mind, rather than for a server model we’re not running any more. ///
Interesting to compare the comments to the disparaging statements in the article.
ZFS combines the hardware and software layers, bringing volume, disk and partition management together in one application.
It is the only production ready journaled CoW file system with data integrity management.
Btrfs is not production ready.
Find files based on their permissions
The typical syntax to find files based on their permissions is:
$ find . -perm MODE
The MODE can be either a numeric (octal) permission (like 777, 666, etc.) or a symbolic permission (like u=x, a=r+x).
We can specify the MODE in three different ways as listed below.
- If we specify the mode without any prefix, find matches files with exactly those permissions.
- If we use the "-" prefix with the mode, files must have at least the given permission bits set; additional bits are allowed.
- If we use the "/" prefix, files match if any of the given bits are set, whether for the owner, the group, or others. ///
find . -not -perm -g=r
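One example per prefix, using octal modes (the mode values are arbitrary):
find . -perm 644    # exactly rw-r--r--
find . -perm -644   # at least rw-r--r--; extra bits are allowed
find . -perm /222   # any write bit set (owner, group, or others)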
Count lines in a file:
# wc -l file.txt
If wc is not installed, count lines with only bash:
# LINECT=0; while read -r LINE; do (( LINECT++ )); done < file.txt; echo $LINECT
Convert to all lowercase:
for file in *.txt; do mv "$file" "${file,,}"; done
Convert first letter to lowercase:
for file in *.txt; do mv "$file" "${file,}"; done
Convert to all uppercase:
for file in *.txt; do mv "$file" "${file^^}"; done
Replace spaces in file names with underscores:
for f in *\ *; do mv -- "$f" "${f// /_}"; done
Though it's not recursive, it's quite fast and simple. I'm sure someone here could update it to be recursive.
The ${f// /_} part utilizes bash's parameter expansion mechanism to replace a pattern within a parameter with a supplied string. The relevant syntax is ${parameter/pattern/string}. See: https://www.gnu.org/software/bash/manual/html_node/Shell-Parameter-Expansion.html or http://wiki.bash-hackers.org/syntax/pe .
One core functionality of Bash is to manage parameters. A parameter is an entity that stores values and is referenced by a name, a number or a special symbol.
- parameters referenced by a name are called variables (this also applies to arrays)
- parameters referenced by a number are called positional parameters and reflect the arguments given to a shell
- parameters referenced by a special symbol are auto-set parameters that have different special meanings and uses
Parameter expansion is the procedure to get the value from the referenced entity, like expanding a variable to print its value. At expansion time you can do very nasty things with the parameter or its value. These things are described here.
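A few quick illustrations of what expansion can do (the variable names and values are made up):
name="report.tar.gz"
echo "${#name}"               # length of the value: 13
echo "${name%.tar.gz}"        # strip a suffix pattern: report
echo "${name/tar/zip}"        # replace the first match: report.zip.gz
echo "${missing:-fallback}"   # default when unset or empty: fallback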