Performance

Performance is important. Performance depends on the resources available (processor, memory, disks, network) and on how well the software leverages them.

Specifically:

  • how fast the client can identify changes. A client with an SSD can identify changes at around 300 MB/s and up, even with BitLocker encryption enabled (the first sketch after this list measures the corresponding read throughput on your own machine).
  • how fast the client can transmit changes to the server. On slower LANs and WANs, GZip compression is used, typically resulting in 30-40% of the data actually being transmitted. With Gigabit Ethernet, GZip is simply too slow: transmitting the data uncompressed is faster than using GZip. The best solution I have found so far is to use LZ4 on gigabit networks, as LZ4 is much faster, though at lower compression quality (50-70% of the original size). The benefit of using LZ4 is that less data needs to be encrypted, transmitted, and decrypted, so it is faster than no compression at all (the second sketch after this list compares the two codecs on your own data).
  • how fast the server can persist the changes to disk. This is typically around 80 MB/s (with BitLocker), not considering volume shadow copies. With volume shadow copies, and depending on the history of your disk, the server first has to copy all data that gets overwritten by new data, essentially tripling the I/O or reducing throughput to something like 25 MB/s. Starting with version 1.1, on Windows 8 and up or Windows Server 2012 and up, Lindenberg Software Backup uses dynamic and differencing virtual disks, which eliminate the need for shadow copies during backup. However, merging virtual disks needs to be done occasionally, which also involves copying, only at a different point in time. In essence, NTFS is not the ideal target for large-scale backups. ReFS would be more adequate, but I haven't seen the versioning approach yet; or Btrfs with snapshots on Linux, since the server part is highly portable. I am still experimenting with different variants of I/O, including unbuffered, write-through, etc. (the third sketch after this list shows one way to compare such variants); in case you also want to benchmark variants on your system, please let me know.
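To get a feel for the first point on your own machine, here is a minimal sketch (Python, not part of the product; the file name is a placeholder) that measures sequential read throughput, an upper bound for how fast changes can be identified:

```python
import time

CHUNK = 4 * 1024 * 1024          # read in 4 MiB chunks
SAMPLE = 2 * 1024 * 1024 * 1024  # stop after 2 GiB

def read_throughput(path):
    """Measure sequential read speed in MB/s for a large file or raw device."""
    done = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while done < SAMPLE:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            done += len(chunk)
    return done / (time.perf_counter() - start) / 1e6

# "large.vhdx" is a placeholder; use a file bigger than the OS file cache
# (or a freshly booted machine) to avoid measuring cached reads.
print(f"{read_throughput('large.vhdx'):.0f} MB/s")
```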
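The GZip-versus-LZ4 trade-off in the second point is easy to check against your own data. A minimal sketch, using the standard-library zlib as a stand-in for GZip and the third-party lz4 package ("pip install lz4"); "sample.bin" is a placeholder for a sample of your data:

```python
import time
import zlib

import lz4.frame  # third-party package "lz4"

def bench(name, compress, data):
    """Report compression throughput (relative to input size) and output size."""
    start = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - start
    print(f"{name}: {len(data) / elapsed / 1e6:.0f} MB/s, "
          f"compressed to {100 * len(out) / len(data):.0f}% of original size")

data = open("sample.bin", "rb").read()  # compressibility depends heavily on content
bench("GZip (zlib, level 6)", lambda d: zlib.compress(d, 6), data)
bench("LZ4 (frame format)", lz4.frame.compress, data)
```

Compression wins on a given link when its throughput comfortably exceeds the line rate and the bytes saved outweigh the CPU time spent, which is why GZip pays off on WANs but loses on Gigabit Ethernet.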
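The I/O variants mentioned in the third point correspond to native flags on Windows (CreateFile with FILE_FLAG_NO_BUFFERING or FILE_FLAG_WRITE_THROUGH); what follows is not the product's implementation, only a portable sketch that approximates buffered, unbuffered, and write-through behavior for comparison:

```python
import os
import time

CHUNK = bytes(4 * 1024 * 1024)  # 4 MiB payload per write
TOTAL = 512 * 1024 * 1024       # write 512 MiB per variant

def write_bench(path, buffering, fsync_each):
    """Time writing TOTAL bytes; buffering=0 disables Python's buffer,
    fsync_each forces every chunk to the device (write-through-like)."""
    start = time.perf_counter()
    with open(path, "wb", buffering=buffering) as f:
        for _ in range(TOTAL // len(CHUNK)):
            f.write(CHUNK)
            if fsync_each:
                os.fsync(f.fileno())
        os.fsync(f.fileno())  # always flush at the end for a fair comparison
    rate = TOTAL / (time.perf_counter() - start) / 1e6
    os.remove(path)
    return rate

print(f"buffered:      {write_bench('bench.tmp', -1, False):.0f} MB/s")
print(f"unbuffered:    {write_bench('bench.tmp', 0, False):.0f} MB/s")
print(f"write-through: {write_bench('bench.tmp', 0, True):.0f} MB/s")
```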

The following table gives some typical network speeds and the resulting transfer rates, both uncompressed and with typical compression (where two values are given, they read uncompressed / compressed). Note that any protocol adds some additional overhead; the same is true for file systems like NTFS or ReFS and for volume encryption like BitLocker or VeraCrypt.
Network Speed   Typical Network                                                                           Compression (Rate)   MB/s          GB/h          GB/d
1 MBit/s        Typical ADSL Upload                                                                       none / GZip (60%)    0.1 / 0.3     0.45 / 1.1    10.8 / 27
10 MBit/s       Typical VDSL or Cable Upload, Typical ADSL Download                                       none / GZip (60%)    1.3 / 3.1     4.5 / 11.3    108 / 270
40 MBit/s       Typical VDSL Download, Maximum VDSL Upload                                                none / GZip (60%)    5.0 / 12.5    18 / 45       432 / 1080
100 MBit/s      Maximum VDSL Download, Typical Cable Download, Fast Ethernet, Typical Wireless Network    none / GZip (60%)    12.5 / 31.1   45 / 112      1080 / 2700
1 GBit/s        Gigabit Ethernet                                                                          none / LZ4 (40%)     125 / 188     450 / 675     10800 / 16200
100 MB/s        Fast SATA Hard Disk                                                                       none                 100           360           8640
300 MB/s        Typical SATA SSD                                                                          none                 300           1080          25920
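The values follow from plain unit conversion: 1 MBit/s equals 0.125 MB/s, and compression raises the effective rate by the inverse of the fraction of data actually transmitted. A minimal sketch reproducing a few of the cells, using the transmitted fractions quoted above (30-40% for GZip, 50-70% for LZ4):

```python
def effective_rates(mbit_per_s, transmitted_fraction=1.0):
    """Effective payload rates (MB/s, GB/h, GB/d) for a link, given the share
    of the original data that still crosses the wire after compression."""
    mb_s = mbit_per_s / 8 / transmitted_fraction
    return mb_s, mb_s * 3.6, mb_s * 86.4

print(effective_rates(10))         # (1.25, 4.5, 108):    10 MBit/s, uncompressed
print(effective_rates(10, 0.4))    # (3.125, 11.25, 270): 10 MBit/s, GZip
print(effective_rates(1000, 2/3))  # (187.5, 675, 16200): 1 GBit/s, LZ4
```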
In other words, the feasibility of initial backups depends heavily on the network speed. With Gigabit Ethernet, however, the speed of the disks becomes a relevant factor. For incremental backups, the speed of the disk to be backed up is also significant.

Consider the following scenarios:

  • Initial backup in LAN, client with SSD or other non-ancient hardware, server with hard disk: typically the server hard disk is the bottleneck, at around 80 MB/s (with BitLocker but no VSS) or 25 MB/s (with VSS turned on). With version 1.0 on any Windows, or version 1.1 on Windows 7, Windows Server 2008 R2, or Windows Home Server 2011, disabling VSS is a bad idea, as you want to be able to access older backups; for new backups, VSS typically does not need to copy old data, so deleting images is discouraged if you need to redo the backup (this needs more investigation). With version 1.1 on Windows 8 and up or Windows Server 2012 and up, turn VSS off, as dynamic and differencing VHD(X) files do not benefit from shadow copies.
  • Initial backup in LAN, client with ancient hardware: the client disk or processor may also become the bottleneck. The backup should work anyway, but continuing to work in parallel may be impossible.
  • Initial backup via WAN: with today's internet (upload) bandwidth, a no-go. A 240 GB disk will take more than a day at 10 MBit/s upload and more than ten days at 1 MBit/s (see the sketch after this list). Don't do it! The good news: backup is pretty reliable; I have observed backups of 15 GB spanning more than 24 hours and even surviving the assignment of a new IP address on the server side.
  • Incremental backup via Gigabit Ethernet: depending on the amount of changes identified, the bottleneck is either the SSD (low change rate) or the server hard disk (high change rate). I am planning to look into improving change detection even further.
  • Incremental backup via LAN or WAN: usually the network is the bottleneck, and depending on the change rate it can be too much for a WAN. Options to address this are organizational:
    • more bandwidth (upload) - always a good idea
    • install software or patches only when you can do a backup via LAN afterwards
    • separate system and data onto different drives or volumes (not yet supported in the user interface)
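For the WAN scenario above, the estimate is just disk size divided by the effective upload rate; a minimal sketch of the arithmetic:

```python
def backup_days(size_gb, upload_mbit_per_s, transmitted_fraction=1.0):
    """Days needed to push size_gb through an upload link."""
    mb_per_s = upload_mbit_per_s / 8 / transmitted_fraction
    return size_gb * 1000 / mb_per_s / 86400

print(f"{backup_days(240, 10):.1f} days")       # ~2.2 days at 10 MBit/s, uncompressed
print(f"{backup_days(240, 1):.1f} days")        # ~22 days at 1 MBit/s
print(f"{backup_days(240, 10, 0.4):.1f} days")  # ~0.9 days with GZip at 40% transmitted
```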