Backing up MySQL/MariaDB over SSH with multithreaded compression. Note that this requires pbzip2 for multithreaded performance (highly recommended); alternatively, replace pbzip2 with gzip -9 (or a lower number for less CPU usage) and use a .gz extension instead of .bz2:
mysqldump -uUSERNAME -p DATABASE | pbzip2 | ssh USERNAME@EXAMPLE.COM 'cat > /path/to/DATABASE.sql.bz2'
Local restore of the pbzip2 archive would be:
pbzip2 -cdk DATABASE.sql.bz2 | mysql -u root -p DATABASE
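The compress/decompress leg of this pipeline can be sanity-checked locally before trusting it with a real dump. A minimal sketch using gzip (the fallback mentioned above; swap in pbzip2 / pbzip2 -d if installed) — dump.sql is just a stand-in for real mysqldump output:

```shell
# Round-trip a sample file through the same compress | decompress pipeline.
# "dump.sql" is a hypothetical stand-in for mysqldump output.
printf 'CREATE TABLE t (id INT);\n' > dump.sql
gzip -9 -c dump.sql > dump.sql.gz
gzip -dc dump.sql.gz > restored.sql
cmp dump.sql restored.sql && echo "round trip OK"
```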
Capturing ansible-playbook output to a file without losing live, unbuffered output (unbuffer is part of the expect package):
unbuffer ansible-playbook -i hostsfile -l limitgroup fancyplaybook.yml > out.txt
Archive every file in the current directory into its own tarball, removing the original only if the archive succeeded (a glob is safer than parsing ls output, which breaks on unusual filenames):
for filename in *; do tar -czvf "$filename".tar.gz "$filename" && rm "$filename"; done
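The loop above can be tried safely in a scratch directory first; the directory name and sample files here are just for demonstration:

```shell
# Demonstrate the archive-and-remove loop on throwaway files.
mkdir -p scratch
printf 'one\n' > scratch/a.txt
printf 'two\n' > scratch/b.txt
(
  cd scratch || exit 1
  # The glob expands once, before the loop starts, so the freshly
  # created .tar.gz files are not themselves re-archived.
  for filename in *; do
    tar -czf "$filename".tar.gz "$filename" && rm "$filename"
  done
)
ls scratch   # a.txt.tar.gz  b.txt.tar.gz
```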
Writing a disk image to a block device with a progress display, minimizing page-cache use:
dd if=image.img of=/dev/sdb bs=4M status=progress iflag=nocache oflag=nocache,dsync
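After writing, comparing checksums catches a bad flash. A sketch with regular files standing in for image.img and the device (on a real device you would read back only as many bytes as the image, since the device is usually larger):

```shell
# Create a small test image and copy it with dd; both files are
# stand-ins for the real image and block device.
dd if=/dev/urandom of=image.img bs=1024 count=4 2>/dev/null
dd if=image.img of=target.img bs=1024 2>/dev/null
# On a real device, limit the read-back to the image size, e.g.:
#   head -c "$(stat -c%s image.img)" /dev/sdb | sha256sum
src=$(sha256sum < image.img)
dst=$(sha256sum < target.img)
[ "$src" = "$dst" ] && echo "checksums match"
```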
#!/bin/bash
# Persistent reverse SSH tunnel: exposes this machine's port 22 on
# SSHPORT of the remote host's localhost.
SSHUSER=username
SSHHOST=host
SSHPORT=port   # the port the remote host will use on its localhost
sleep 100      # give the network time to come up after boot
export AUTOSSH_GATETIME=30
while true
do
    /usr/bin/autossh -M 10994 -N -R "$SSHPORT":localhost:22 "$SSHUSER@$SSHHOST" -o "ServerAliveInterval 45" -o "ServerAliveCountMax 2"
    sleep 5
done
Then:
* Set the script to executable
* Make sure the autossh package is installed
* ssh-copy-id to the target host (run ssh-keygen first if needed)
* _Test the connection with the appropriate user_ (from the remote host, ssh -p SSHPORT localhost should reach this machine)
* Set the script to autostart, for example in /etc/crontab: @reboot username /home/username/autossh.sh
CPU stress test: four parallel openssl speed runs at the lowest priority (19 is the maximum niceness; stop them with killall openssl):
for i in 1 2 3 4; do nice -n 19 openssl speed >/dev/null 2>&1 & done
Watch the temperature on a Raspberry Pi while it runs:
vcgencmd measure_temp