BorgBackup (short: Borg) is a deduplicating backup program. Compression and authenticated encryption are also supported as options.
Borg's main goal is to provide an efficient and secure backup solution. Thanks to deduplication, only changed data is stored, so the backup process is fast and well suited to daily backups; depending on the amount of data and the number of changes, Borg can be significantly quicker than other methods. All data is encrypted on the client side, which makes Borg a good choice for backups to hosted systems.
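To make the workflow concrete, here is a minimal sketch of the usual Borg cycle; the repository path, archive name pattern, and source directories are placeholders rather than anything Borg prescribes:
# create an encrypted repository once (here on a remote host reached over SSH)
borg init --encryption=repokey ssh://user@backuphost/./backups/myrepo
# take a deduplicated, compressed backup of selected directories
borg create --stats --compression lz4 ssh://user@backuphost/./backups/myrepo::'{hostname}-{now}' ~/Documents ~/Photos
# thin out old archives according to a retention policy
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 ssh://user@backuphost/./backups/myrepo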
borgmatic is simple, configuration-driven backup software for servers and workstations. Back up all of your machines from the command line or from scheduled jobs; no GUI required. Built atop Borg Backup, borgmatic initiates a backup, prunes any old backups according to a retention policy, and validates backups for consistency. borgmatic supports specifying your settings in a declarative configuration file rather than having to put them all on the command line, and it handles common errors.
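As a rough sketch of that workflow (command names as shipped with borgmatic releases of this era; the config path is the default, and sudo is only needed for system-wide backups):
# write a commented starter config to /etc/borgmatic/config.yaml, then edit it
# to declare source directories, repositories, and the retention policy
sudo generate-borgmatic-config
# create a backup, prune old archives, and run consistency checks in one pass
sudo borgmatic --verbosity 1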
Borg Backup is a Linux command-line utility to create backups of your computers. Its deduplication and speed can't be beat. However, when you are dealing with a large number of machines to back up, it quickly becomes obvious that you don't have a good way to manage all your client machines from a single place. It's also time-consuming and tedious to set up a large number of machines. So, we've created Borg Backup Server.
Borg Backup Server makes it easy to install and maintain Borg on each client machine from a single server GUI (Graphical User Interface). Some of the powerful features include:
Rclone - rsync for cloud storage
Rclone is a command line program to sync files and directories to and from:
- Amazon S3
- Backblaze B2
- Owncloud
- ...
- Wasabi
Features
- MD5/SHA1 hashes checked at all times for file integrity
- Timestamps preserved on files
- Partial syncs supported on a whole file basis
- Copy mode to just copy new/changed files
- Sync (one way) mode to make a directory identical
- Check mode to check for file hash equality
- Can sync to and from network, eg two different cloud accounts
- Encryption backend
- Cache backend
- Union backend
- Optional FUSE mount (rclone mount)
- Multi-threaded downloads to local disk
- Can serve local or remote files over HTTP/WebDav/FTP/SFTP/dlna
- Experimental Web based GUI
Ultimately, I landed on a combination of BorgBackup, Rclone, and Wasabi cloud storage, and I couldn't be happier with my decision. Borg fits all my criteria and has a pretty healthy community of users and contributors. It offers deduplication and compression, and works great on PC, Mac, and Linux. I use Rclone to synchronize the backup repositories from the Borg host to S3-compatible storage on Wasabi. Any S3-compatible storage will work, but I chose Wasabi because its price can't be beat and it outperforms Amazon's S3. With this setup, I can restore files from the local Borg host or from Wasabi.
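The synchronization step is, in rough outline, a single rclone command once a Wasabi remote has been configured; the remote name, bucket, and local path below are illustrative placeholders, not values mandated by Rclone:
# one-time, interactive: define an S3-compatible remote pointing at Wasabi
rclone config
# mirror the directory holding the Borg repositories to a Wasabi bucket
rclone sync /var/lib/borg-repos wasabi:my-borg-backups --transfers 8 --checksum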
Each machine has a backup.sh script (see below) that is kicked off by cron at regular intervals; it will make only one backup set per day, but it doesn't hurt to try a few times in the same day. The laptops are set to try every two hours, because there's no guarantee they will be on at a certain time, but it's very likely they'll be on during one of those times.
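A crontab entry along these lines (the script path and log file are assumptions) covers the every-two-hours laptop schedule:
# m h dom mon dow  command
0 */2 * * *  /home/myuser/bin/backup.sh >> /home/myuser/backup.log 2>&1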
I could skip the cron job and provide a relatively easy way for each user to trigger a backup using BorgWeb, but I really don't want anyone to have to remember to back things up. I tend to forget to click that backup button until I'm in dire need of a restoration (at which point it's way too late!).
The backup script I'm using came from the Borg quick start docs, plus I added a little check at the top to see if Borg is already running, which will exit the script if the previous backup run is still in progress.
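A minimal sketch of such a script, assuming pgrep is available; the repository path, passphrase handling, and retention values are placeholders rather than the original ones:
#!/bin/sh
# bail out if a previous borg run is still in progress
if pgrep -x borg >/dev/null; then
    echo "borg is already running; skipping this run"
    exit 0
fi
export BORG_REPO='ssh://user@backuphost/./backups/myhost'
export BORG_PASSPHRASE='change-me'   # better: read it from a root-only file
# back up the selected directories, then apply the retention policy
borg create --stats --compression lz4 ::'{hostname}-{now}' /home /etc
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6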
Restoring files is not as easy as it was with CrashPlan, but it is relatively straightforward. The fastest approach is to restore from the backup stored on the Borg backup server. Here are some example commands used to restore:
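The original command listing is not reproduced here, but with standard Borg commands a restore looks roughly like this (repository path and archive name are placeholders):
# see which archives exist on the Borg backup server
borg list ssh://user@backupserver/./backups/myhost
# browse the contents of one archive
borg list ssh://user@backupserver/./backups/myhost::myhost-2019-02-10T02:00:07
# pull a single directory out of that archive into the current working directory
borg extract ssh://user@backupserver/./backups/myhost::myhost-2019-02-10T02:00:07 home/myuser/Documents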
Recently, I migrated my personal backup from Backblaze Backup to B2 (an online S3-style file storage service, also by Backblaze). I had heard of Arq Backup for a few years but had not tried it yet. Being a native macOS app, it has a very nice UI and meets all the requirements mentioned above.
For the record, I am writing down the other software I considered and why I did not use it. Note that these tools are all server-oriented and need to be scheduled using cron or similar software. All of them also support encryption.
borg
A fork of attic, which was known as the holy grail of backups. It supports compression and block-based incremental backups, and it is open source. However, the only remote backend it supports is SSH. Rsync.net provides an attic-specific package for $0.03/GB/month, which is considerably higher than B2's $0.005/GB/month.
Everything in Sync
Sync makes it easy to store, share and access your files from just about anywhere.
Best of all, Sync protects your privacy with end-to-end encryption — ensuring that your data in the cloud is safe, secure and 100% private.
Share any kind of file with anyone, quick and easy
With Sync you can send files of any size to anyone, even if they don't have a Sync account. Multiple users can work from the same set of folders, and features such as file requests, password protection, notifications, expiry dates and permissions ensure that you're always in control.
Open Standards
Common Sense
We give you an empty UNIX filesystem that you can access with any SSH tool
Our platform is built on ZFS which provides unparalleled data security and fault tolerance
rsync.net can scale to Petabyte size and Gigabit speeds
rsync / sftp / scp / borg / rclone / restic / git-annex
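Because the account is just a filesystem behind SSH, any of those tools works unchanged; for example (account name and hostname are placeholders):
# plain rsync over SSH to an rsync.net account
rsync -avz --delete ~/Documents/ youruser@your-host.rsync.net:backups/Documents/
# or point a Borg repository at the same account
borg init --encryption=repokey youruser@your-host.rsync.net:borg/myhost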
- Secure Offsite Backup Since 2001
- Five Global Locations in US, Europe and Asia
- SSAE16, PCI and HIPAA Compliant - We Sign BAAs
- No Contracts, Licenses, Setup or Per-Seat Costs
- Unlimited Technical Support from UNIX Engineers
- Free Monitoring and Configurable Alerts
- Two Factor Auth available
- Physical Data Delivery Available
- Web Based Management Console
Cut your cloud storage costs by 80%
- 1/5 the cost of Amazon S3
- Free egress
- Faster than the competition
- Enterprise-class security
Wasabi Hot Cloud Storage is enterprise class, tier-free, instantly available and allows you to store an infinite amount of data affordably. Wasabi provides an S3-compliant interface to use with storage applications, gateways and other platforms.
Back up your Mac or PC for just $6/month.
Unlimited Online Backup
Backblaze will automatically back up all your files including documents, photos, music and movies. Unlimited files. Unlimited file size. Unlimited speed.
You can download a free restore of one file or all your files anywhere in the world. There is also an option to have a 256 GB flash drive ($99) or an external drive of up to 8 TB ($189) shipped to you by FedEx.
Access files on iPhone, iPad or Android.
Personal Key -- You can use a personal encryption key for additional security. If you lose your password, Backblaze will be unable to send it to you.
Encryption -- All your files are encrypted before being transmitted over SSL and stored encrypted.
Borg is a fantastic tool that covers the weaknesses of rsync without sacrificing much in terms of usability. In particular, you’ll be able to keep multiple backups, save space through deduplication and compression, and secure your data with either passwords or a keyfile.
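The encryption choice is made when the repository is created; a minimal sketch, with the repository path as a placeholder:
# passphrase-protected key stored inside the repository
borg init --encryption=repokey /mnt/backup/repo
# or keep the key in a local keyfile instead (back the keyfile up separately!)
borg init --encryption=keyfile /mnt/backup/repo
borg key export /mnt/backup/repo ~/borg-key-backup.txt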
As friendly of an online advertisement as you'll find.
In mid-August, the first commercially available ZFS cloud replication target became available at rsync.net. Who cares, right? As the service itself states, "If you're not sure what this means, our product is Not For You."
Of course, this product is for someone—and to those would-be users, this really will matter. Fully appreciating the new rsync.net (spoiler alert: it's pretty impressive!) means first having a grasp on basic data transfer technologies. And while ZFS replication techniques are burgeoning today, you must actually begin by examining the technology that ZFS is slowly supplanting.
Yep—it took the same old 1.7 seconds for ZFS to re-sync, no matter whether we touched a 1GB file, touched an 8GB file, or even moved an 8GB file from one place to another. In the last test, that's almost three full orders of magnitude faster than rsync: 1.7 seconds versus 1,479.3 seconds. Poor rsync never stood a chance.
rsync has a lot of trouble with these. The tool can save you network bandwidth when synchronizing a huge file with only a few changes, but it can't save you disk bandwidth, since rsync needs to read through and tokenize the entire file on both ends before it can even begin moving data across the wire. This was enough to be painful, even on our little 8GB test file. On a two terabyte VM image, it turns into a complete non-starter. I can (and do!) sync a two terabyte VM image daily (across a 5mbps Internet connection) usually in well under an hour. Rsync would need about seven hours just to tokenize those files before it even began actually synchronizing them... and it would render the entire system practically unusable while it did, since it would be greedily reading from the disks at maximum speed in order to do so.
The moral of the story? Replication definitely matters.
Cloud Storage With ZFS
rsync.net supports ZFS send and receive over SSH
If you're not sure what this means, our product is Not For You.
A Natural Evolution
In 2012 rsync.net transitioned to ZFS as the base of its cloud storage platform.[1]
In addition to enhanced data safety and resiliency[2], ZFS allowed us to offer snapshots of user accounts on any schedule.
The obvious next step was to offer ZFS send and receive, over SSH, to our platform.
[1] We run ZFS on FreeBSD
[2] Our conservatively sized raidz3 arrays have a 99.9999% resiliency.
We are the only cloud storage provider offering native zfs send and receive.
A Special "zfs send Capable" Account is Required
Every account at rsync.net runs on our ZFS platform - but zfs send/recv requires special settings.
There is no difference in price - cost per Gigabyte/Month is the same - but there is a 1TB minimum.
You will control your own zpool and manage your own snapshots.
You will receive technical support from UNIX engineers for your use of zfs send and receive.
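In practice the replication loop looks roughly like this (dataset names, snapshot labels, and the SSH login are placeholders; the actual host and target dataset come from your account details):
# initial full replication of a snapshot to the remote pool
zfs snapshot tank/data@2019-02-01
zfs send tank/data@2019-02-01 | ssh user@your-host.rsync.net zfs recv -F remotepool/data
# afterwards, send only the blocks changed between two snapshots
zfs snapshot tank/data@2019-02-02
zfs send -i tank/data@2019-02-01 tank/data@2019-02-02 | ssh user@your-host.rsync.net zfs recv remotepool/data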
/home in my setup is another hard drive.
# Upstart job (e.g. /etc/init/btsync.conf) that waits for the drive to be mounted
start on mounted MOUNTPOINT=/home TYPE=ext4
# start btsync as a non-root user
exec start-stop-daemon --start -c myuser --exec /home/myuser/btsync
Here exec is the Upstart stanza that runs the command, and start-stop-daemon is provided by an installed system package (dpkg on Debian/Ubuntu).
Enter "visudo" and add the line:
username hostname=/usr/bin/mksysb
:wq to write and quit visudo
Now the user would enter the command "sudo mksysb"; it will prompt for the user's password and log what happened to syslog.
Firefox Environment Backup Extension
FEBE 10.4 Released
FEBE allows you to quickly and easily back up your Firefox extensions. In fact, it goes beyond just backing up -- it will actually rebuild your extensions individually into installable .xpi files. Now you can easily synchronize your office and home browsers.
FEBE backs up and restores your extensions, themes, and (optionally) your bookmarks, preferences, cookies, and much more.
Backup as little or as much of your Firefox environment as you wish. Perform backups on demand or schedule daily, weekly, or monthly unattended runs. Sequential backups can be stored in timestamped directories so you can restore back as far as you like.
Any data backup regime or program should allow you to specify these basic concerns:
- What to back up
- Where to back up
- When to back up
- How to back up
Here is a quick primer on getting FEBE up and running. These functions can be found in the FEBE options window, where additional documentation for any item is available by clicking the blue "i" (help) icon.
To access, click: Tools > FEBE > FEBE Options
Eric
February 12, 2019 at 9:35 am
Your backups must be tested
So you know they work as expected
Offline is best
So you can rest
When hackers strike unexpected
🗃 The open source self-hosted web archive. Takes browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more... - pirate/ArchiveBox
When running SpiderOak One from the command line, one available option is --purge-historical-versions. This removes some or all historical versions that were uploaded from the local computer, on a schedule you can specify. This can free up space in your account. This is a powerful feature, and care should be taken when using it since it permanently removes data and there is no undo.
Users seeking an easy method to remove a few historical versions might be more comfortable doing so via the graphical application.
Because it is a local-only option, it only purges historical versions from the device running the command. Attempting to target a different device with --device will fail with the error "Purging historical versions is only supported from the local device". To purge the historical versions of a different device, run this command on that device.
It is not possible to restrict the scope to a particular file or directory. It will operate on all of the files and directories that have been uploaded from the local computer, including versions now found in SpiderOak One's deleted items bin.
This is a one-off command that does not alter One's historical version retention policy moving forward. One will continue to retain all historical versions as before.
To run this command, first completely close SpiderOak One, and be sure that all SpiderOak One processes have closed correctly. Then:
On Windows
Open a command prompt window. Enter the following text into the window at the prompt, then press enter:
"C:\Program Files\SpiderOakONE\SpiderOakONE.exe" --purge-historical-versions --verbose
This command may take considerable time before it generates any text, so please make sure not to close the program or reopen SpiderOak before it has completed. For this reason we recommend using it in conjunction with --verbose as shown above, which makes the output less laconic.
This option has three modes:
- no argument: Use the default schedule, which is to keep one version per hour for the last 24 hours, then one version per day for 30 days, then one version per week thereafter.
- all: Purge all historical versions, keeping only the most recent version of each backed up file.
- specifier: Purge according to a schedule you specify. The specifier for setting your own schedule is an argument of the form hM,dN,y where M and N are numbers, specifying how many hourly and daily versions to keep, respectively. Leaving a particular value undefined (as for the "y" or yearly part of this example) means unlimited for that value.
For example:
- --purge-historical-versions: Keep one hourly version for 24 hours. Following that, keep one daily version for 30 days. Following that, keep one weekly version thereafter. This is the default schedule.
- --purge-historical-versions d60,y: Keep one daily version for 60 days. Following that, keep one yearly version thereafter.
- --purge-historical-versions d: Keep one daily version.
- --purge-historical-versions all: Eliminate all historical versions.
Note that when you use more than one specifier, each subsequent one begins after the completion of the previous one. This is a common source of confusion.
To purge only historical versions which are newer than a specific date, you can simply leave off any older qualifier. For example:
- --purge-historical-versions d7: Keep one version a day for seven days, but do not purge versions older than one week.
- --purge-historical-versions d7,y6: Keep one version a day for seven days and one version a year for six years, but do not purge versions older than six years.