I regularly transfer large files between two servers and need their directories to be synchronized. Initially, I used a bash script with scp to transfer the files after collection on the source server. However, this method often left the directories out of sync while moving about 300 GB of data.
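For context, that old workflow was a plain one-way copy rather than a true sync; a minimal sketch of that kind of script (the hostname and paths here are placeholders, not my actual setup) looks like this:
#!/usr/bin/env bash
# naive one-way push: re-copies everything and never deletes stale files,
# which is one way two directories drift out of sync over time
set -euo pipefail
SRC="/archives"
DEST="backup-server:/backups"   # hypothetical destination host and path
scp -rp "$SRC"/* "$DEST"/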
Recently, a friend introduced me to sshfs, so I decided to try it.
SSHFS
Setting up sshfs was straightforward. I installed it on the destination server and ran the following command:
sshfs -o allow_other,default_permissions,uid=911,gid=911,umask=0000 source-server:/archives/ /backups/
I included the uid, gid, and umask options to ensure the files were readable by the applications on the destination server. Otherwise, the files would be owned by root, and I'd have to change the ownership manually.
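As a quick sanity check (assuming a hypothetical app account called appuser that owns uid/gid 911 on the destination), the mount should expose files under the expected numeric owner:
id appuser          # should report uid=911(appuser) gid=911(appuser)
ls -ln /backups/    # -n shows numeric ids; entries should read 911 911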
Here are some basic benchmarks:
Creating a 1GB file on the destination server:
dd if=/dev/zero of=testfile bs=1G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.854064 s, 1.3 GB/s
Copying it to the mounted directory:
time dd if=testfile of=/backups/testfile bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 11.6881 s, 91.9 MB/s
real 0m11.773s
user 0m0.011s
sys 0m0.519s
The transfer rate of 91.9 MB/s was decent, but not ideal for my needs. I then tested reading from the mounted directory:
time dd if=/backups/testfile of=/dev/null bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 59.2111 s, 18.1 MB/s
real 0m59.401s
user 0m0.027s
sys 0m1.380s
This result was disappointing, especially since both machines have a 1 Gbps symmetrical connection.
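For reference, the sshfs mount is detached like any other FUSE mount once you're done with it:
fusermount -u /backups/    # or, as root: umount /backups/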
Rclone Mount
My first attempt with rclone looked like this:
rclone mount source-server:/archives/ /backups-rclone/ --allow-other --uid 911 --gid 911 --umask 0000 --default-permissions
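Note that rclone talks to the source through a configured remote, so the command above assumes a remote named source-server already exists. Something like this sets one up non-interactively (hostname, user, and key path are placeholders):
rclone config create source-server sftp \
    host source-server.example.com \
    user backup \
    key_file ~/.ssh/id_ed25519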
The initial transfer speed was just 2 MB/s. After researching optimizations, I implemented the following:
- Enabled caching with --vfs-cache-mode writes, which uses write-back caching to improve performance.
- Increased --buffer-size to 64M.
- Enabled multi-threading with --multi-thread-streams 4 --multi-thread-cutoff 250M.
rclone mount source-server:/archives/ /backups-rclone/ --allow-other --uid 911 --gid 911 --umask 0000 --default-permissions --vfs-cache-mode writes --buffer-size 64M --multi-thread-streams 4 --multi-thread-cutoff 250M
The performance significantly improved:
time dd if=testfile of=/backups-rclone/testfile bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.20803 s, 889 MB/s
real 0m1.233s
user 0m0.031s
sys 0m0.343s
Reading from the mounted directory also showed improvement:
time dd if=/backups-rclone/testfile of=/dev/null bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.09872 s, 977 MB/s
real 0m1.217s
user 0m0.016s
sys 0m0.426s
The reported transfer rate of 977 MB/s, equivalent to 7816 Mbps on a 1 Gbps network, was suspicious to say the least, likely due to the VFS cache. A subsequent test with a different file showed a more reasonable speed:
time dd if=/backups-rclone/testfile2 of=/dev/null bs=1M
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 15.5331 s, 69.1 MB/s
real 0m15.605s
user 0m0.012s
sys 0m0.789s
This speed, although below the 1 Gbps link capacity, was almost four times better than sshfs.
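As an aside, if you want to rule out caching when repeating these read tests, one option besides using a fresh file each time is to drop the client's page cache between runs:
# clears the local page cache; rclone's on-disk VFS cache is separate
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches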
Conclusion
It appears that sshfs is no longer actively maintained, as noted in its README:
SSHFS is shipped by all major Linux distributions and has been in production use across a wide range of systems for many years. However, at present SSHFS does not have any active, regular contributors, and there are a number of known issues (see the bugtracker).
I plan to stick with rclone for now. If you have any suggestions for further performance improvements, feel free to reach out to me at hi @ this domain.
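Postscript: to keep the rclone mount up across reboots, a systemd service along these lines should work. This is a sketch built from the flags above, not a battle-tested unit, so adjust paths and the binary location for your system:
# /etc/systemd/system/backups-rclone.service
[Unit]
Description=rclone mount of source-server:/archives/
After=network-online.target
Wants=network-online.target

[Service]
# rclone mount supports sd_notify, so systemd waits until the mount is ready
Type=notify
ExecStart=/usr/bin/rclone mount source-server:/archives/ /backups-rclone/ \
    --allow-other --uid 911 --gid 911 --umask 0000 --default-permissions \
    --vfs-cache-mode writes --buffer-size 64M \
    --multi-thread-streams 4 --multi-thread-cutoff 250M
ExecStop=/bin/fusermount -u /backups-rclone/
Restart=on-failure

[Install]
WantedBy=multi-user.target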