I was playing around earlier today, trying to find the slickest one-line command that would back up my home directory on one server to a tarball on another. (If I didn’t care about making the result a tarball, rsync would be the obvious choice.) I started to wonder whether it was possible to run tar on the local machine, but pipe its output via SSH to a remote machine, so the output file would be written there.
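For the record, the rsync version I had in mind would be something along these lines (the host name and destination path are just placeholders):

rsync -az ~/ name@host:backup-of-home/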
As is so often the case with anything Unix-related, yes, it can be done, and yes, somebody’s already figured out how to do it. The command given there is designed to copy a whole directory from one place to another, decompressing it on the receiving end (not a bad way to copy a directory if you don’t have access to rsync):
tar -zcf - . | ssh name@host "tar -zvxf - -C <destination directory>"
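As far as I can tell, the same trick works just as well in the other direction, pulling a directory down from the remote machine and unpacking it locally:

ssh name@host "tar -zcf - -C <source directory> ." | tar -zvxf - -C <destination directory>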
Alternatively, if you want to do the compression with SSH instead of tar, or if you have ‘ssh’ aliased to ‘ssh -C’ to enable compression by default:
tar -cf - . | ssh -C name@host "tar -vxf - -C <dest dir>"
But in my case I didn’t want the directory to be re-inflated at the remote end. I just wanted the tarball written to disk. So instead, I used:
tar -zcf - . | ssh name@host "cat > outfile.tgz"
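Presumably the mirror image works too: running tar on the remote machine and writing the tarball to the local disk (again, the path is a placeholder):

ssh name@host "tar -zcf - -C /home/name ." > outfile.tgz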
There are probably a hundred other ways to do this (e.g. various netcat hacks), but this way seemed simple, secure, and effective. Moreover, it’s a good example of SSH’s usefulness beyond simply being a glorified Telnet replacement for secure remote interactive sessions.
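For completeness, one of those netcat hacks would look roughly like this, starting the listener on the receiving machine first (the exact listen syntax varies by netcat flavor; traditional netcat wants ‘nc -l -p 1234’ instead):

nc -l 1234 > outfile.tgz

and then, on the sending machine:

tar -zcf - . | nc host 1234

No encryption there, of course, which is one more reason to prefer the SSH approach.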