dd and rsync - Data migration to the cloud with ddtransfer.sh


Migrating data into the cloud, or simply backing up a very large storage medium, with the help of dd and rsync can be a challenge for system administrators. In this article I show how to migrate huge hard disk images to an IaaS cloud with 'ddtransfer.sh'.

Introduction

In the daily work of an administrator, or even of an ordinary PC user, the task of transferring the contents of an entire hard disk arises from time to time: to make a backup, to put a new hard disk into operation, or to move the contents of a storage medium byte by byte into the cloud.

Commercial hard disks now have capacities of ten terabytes and more. Such data volumes are not easy to move over the bandwidth of typical Internet connections, and even on local networks this can be a tedious task. This raises the question of why there has not long been a tool that solves the problem well.

This article first briefly describes conventional ways of solving the problem. From the limitations of these methods, it derives the requirements for, and the individual steps of, the solution offered by the program ddtransfer.sh. Finally, there is an outlook on the further development of the program.

Conventional approaches

As a baseline: copying, transferring and importing the data from a 16 GB USB stick takes 67 minutes during normal working hours in Berlin over a 100 Mbit/s connection that is shared by more than 120 employees at the same time.

dd -> Copy -> dd

The individual steps were as follows:

1) Creating the image

root@ddtransfer:~# time dd if=/dev/sdc bs=1G iflag=fullblock of=sdc.img

Duration: 12 minutes, 25 seconds

2) Transfer

root@ddtransfer:~# time rsync --compress --compress-level=9 sdc.img root@158.222.102.233:/mnt/1/.

Duration: 51 minutes, 10 seconds

3) Importing the data into a storage volume

root@las-transfer2:/mnt/1# time dd if=sdc.img of=/dev/vdd bs=1G iflag=fullblock

CloneZilla
I aborted a comparison test with CloneZilla because I would have had to enable password authentication, which seems outdated to me. I don't want to encourage undermining basic security measures: in my experience such workarounds are simply forgotten, and the machines end up getting hacked, especially when simple ad-hoc passwords are used.

bbcp
With Bar Bar Copy (bbcp) the transfer of large files can be accelerated considerably, but the data travels unencrypted. The other problem is that the image files must first be created at full size before they can be copied, so a large amount of disk space has to be provisioned in advance (even with compression). The transfer script, by contrast, needs only 10 GB of free disk space, although more, e.g. 100 GB, is definitely an advantage.

Mailing
Another option offered to customers for transmitting large amounts of data is to ship hard disks, or even an entire server (from roughly 10 TB upwards), to the provider. Beyond 10 TB this variant may still be the more practical one, but given ongoing technical development it is quite foreseeable that ddtransfer.sh will become a viable option for these data volumes as well.

Acronis

A detour via an online backup is also very effective. I tested this with Acronis via the provider STRATO.

  • Advantages: Deduplication and compression of the backup.
  • Disadvantages: the client must be installed separately, raw devices cannot be imported directly but only via a boot CD, and an account with the provider is required.

Conclusion: The speed of this method is definitely impressive, which speaks strongly for this solution, but it requires an account and the installation of a client.

Requirements for a new program

With an image size of 15 GB, the procedure described above - dd to file, copying over the network, dd to a device - is completely normal and appropriate. The actual problems only become apparent when

  • considerably larger amounts of data are involved,
  • they need to be transferred over a low-bandwidth link, and
  • data integrity also needs to be verified efficiently.

Such a transfer can quickly take several days. If a failure occurs during that time, the entire process has to be restarted. The less data that then needs to be retransmitted, the less time is lost.

By creating checksums at the very beginning and again after the files have been imported, it is possible to determine, once a partial step has completed, whether a bit flip or some other change has crept into the data. With the conventional procedures, this check can only be carried out over the whole image, and a change detected at the end of a very long process is all the more annoying.

The low bandwidth of a normal Internet connection accounts for most of the processing time, yet very few programs are able to exploit it fully. An effective way to use more of the bandwidth is to open several connections in parallel. The individual connections are by no means processed at the same speed: it is not uncommon that, of 16 connections started one after the other at intervals of a few seconds, the tenth transfer completes first.

The script ddtransfer.sh is meant to close exactly these gaps (a minimal sketch of the core idea follows the list):

  • The data to be transferred is divided into blocks, which are copied by several parallel invocations of 'dd' if necessary.
  • A checksum is formed for each block during its creation.
  • Depending on the bandwidth and number of hops to the target computer, many parallel connections are used for sending the blocks.
  • While further blocks are still in transmission, those that have already arrived on the target system are written to the target device, again by several parallel processes if necessary.
  • After a block has been written to the target device, its checksum is computed again.
  • At the end the checksums are compared. If there are differences, the affected block is transferred again and its checksum recalculated.
  • By precisely logging the individual steps, a transfer can be resumed after an interruption at the block where the interruption occurred.
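
To make this concrete, here is a minimal, heavily simplified sketch of the core idea in shell. It is for illustration only and not the actual ddtransfer.sh code; the device paths, host address, file names and block size are assumptions:

#!/bin/bash
# Illustration only: block-wise dd with a checksum per block and a
# parallel rsync transfer. Paths, names and sizes are assumptions.
DEVICE=/dev/sdc                       # local source device
BS=$((64 * 1024 * 1024))              # 64 MiB, a multiple of 512 bytes
REMOTE=root@46.16.76.151:/mnt/1/      # target host and cache directory

process_block() {
    local n=$1 f="part_${n}.run"
    # skip=n starts reading at byte offset n*BS of the source device
    dd if="$DEVICE" of="$f" bs="$BS" skip="$n" count=1 iflag=fullblock
    sha256sum "$f" >> checksums_source.txt      # checksum at creation
    mv "$f" "part_${n}.transfer"                # mark as ready to send
    rsync --compress "part_${n}.transfer" "$REMOTE"
}

# start a few blocks in parallel; the real script throttles this by
# CPU cores, bandwidth and free disk space
for n in 0 1 2 3; do
    process_block "$n" &
done
wait

# On the target host each block is then written to its exact position,
#   dd if=part_N.transfer of=/dev/vdd bs=64M seek=N count=1
# and a second checksum is taken from the written data for comparison.

Because each part is addressed by its block number, any part can be re-read, re-sent or re-written independently, which is what makes block-wise retries and resumption possible in the first place.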

As far as I know, there is currently no available tool for transferring large devices that meets these requirements. The program is based on 'dd', which belongs to the GNU core utilities, so it should run on every Linux that offers a shell; 'rsync' and the 'ssh-agent' must also be present. It can therefore be used with a common live Linux, for example to transfer Windows installations, and also on small hardware like the Raspberry Pi, which can serve as a transfer station.

The real crux of the whole procedure is the Internet connection. In local networks, using the script does not necessarily make sense; given a sufficiently large bandwidth, the total transfer time could even increase. However, if the amount of data is very large and the checksum calculation matters, the program should be useful in the LAN as well. The distinction behind the original intent of the program is, in brief:

  • Local: Fast reading of the data in a few processes
  • Internet: Slow transfer in many processes
  • Remote: Fast writing of the data into the respective device with few processes

Accordingly, there are four main functions in the script:

  • CreateImageFiles: Create the parts of the image as individual files and compute the initial checksum of each.
  • SendFiles: Transfer the files with rsync.
  • ImageToTargetDevice: Import the files and then compute the final checksum from the imported data.
  • ShowProceeding: Monitor and finish the process, i.e. compare the checksums and, where necessary, create, transfer and import individual parts again.

The other functions:

  • dd_command,
  • RemoteWorkspace,
  • Transfer,
  • RemoteStatusImageToTargetDevice and
  • ReImage

Each represents a partial step and is called by the main functions.
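
Purely as an illustration of how the main stages could run side by side, here is a hypothetical skeleton; the bodies are stubs, and the actual control flow of ddtransfer.sh may differ:

#!/bin/bash
# Stub skeleton only - the real ddtransfer.sh functions do the work
# described in the list above.
CreateImageFiles()    { echo "creating part files and initial checksums"; }
SendFiles()           { echo "transferring finished parts with rsync"; }
ImageToTargetDevice() { echo "writing arrived parts to the target device"; }
ShowProceeding()      { echo "monitoring, comparing checksums, redoing parts"; }

CreateImageFiles &      # producer stage
SendFiles &             # transfer stage, runs concurrently
ImageToTargetDevice &   # consumer stage on the target side
ShowProceeding          # monitoring and final processing in the foreground
wait                    # do not exit before the background stages finish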

Resumption after interruption

When the target computer merely restarts, the script continues to run without interruption. If the source computer fails, or the target computer is down for a longer time, the program can be called again; as an option, the name of a report file must be given, which contains the options of the previous call. The status of the data transfer and of the created blocks is then checked, interrupted subprocesses are restarted where necessary, and the overall process continues. The suffixes of the file names also indicate how far processing has come (a sketch of how these suffixes could drive a resume follows the list):

  • 'run' for still to be processed,
  • 'transfer' for ready for transmission,
  • 'ongoing' for being in transmission and
  • 'ToDev' for a file that is already written remotely to the target device.
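
A hypothetical sketch of how these suffixes could drive the resume; the real script additionally evaluates its report file, and the file names and target are assumptions:

#!/bin/bash
shopt -s nullglob                       # ignore patterns with no matches
REMOTE=root@46.16.76.151:/mnt/1/        # assumed rsync target

for f in part_*; do
    case "$f" in
        *.run)     echo "$f: creation was interrupted, dd is restarted" ;;
        *.ongoing) mv "$f" "${f%.ongoing}.transfer" ;;  # re-queue cut-off transfer
        *.ToDev)   echo "$f: already on the target device, nothing to do" ;;
    esac
done

for f in part_*.transfer; do            # (re)send everything queued
    rsync --compress "$f" "$REMOTE"
done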

The main mechanism for continuing the program is the report file, which is read and then updated; transfers that had already been started are simply initiated again. Resuming started rsync calls would be technically possible, but would have required rewriting some functions, which seemed too expensive under the given circumstances because the development of the script was already too far advanced.

Problems

Disk space
There is always the possibility that the disk fills up, locally or remotely, since some temporary space is needed to buffer the data blocks. Continuous monitoring of the space still available, combined with a calculation of the space that the block files currently in transfer will need, prevents the disks from running full (a sketch of such a check follows). Compression of the data is passed as an option to the rsync calls.
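
A sketch of such a free-space check, assuming hypothetical names and a simple 30-second back-off; the real script's accounting is more involved and also considers blocks currently in transfer:

TMPDIR=/var/tmp/ddtransfer               # assumed local cache directory
BS=$((64 * 1024 * 1024))                 # block size in bytes
NEEDED_KIB=$((BS / 1024))                # space for the next block file
AVAIL_KIB=$(df --output=avail -k "$TMPDIR" | tail -n 1)
while [ "$AVAIL_KIB" -lt "$NEEDED_KIB" ]; do
    sleep 30                             # wait until sent blocks are deleted
    AVAIL_KIB=$(df --output=avail -k "$TMPDIR" | tail -n 1)
done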

For example, transferring one terabyte of data from my office took about 19 hours on weekdays and about 16 hours on weekends. The difference in available bandwidth was also noticeable from the early evening onwards: at first the temporary disk space was fully used locally; later, with more free bandwidth, considerably more was needed on the remote side.

Bandwidth
If the program is called with many connections, throughput drops for everyone else using the same network. Conversely, as already described, transferring at night or on weekends speeds up the process considerably.

How dd works
'dd' can only ever read or write, so it is advantageous to process reasonably large blocks in one step. At the same time, the program can address specific positions of a file or a block device blockwise or bytewise, which makes it the only tool usable for our purpose: the exact determination of the position is essential for dividing the data to be transferred into clearly defined individual steps. The block and file size in ddtransfer.sh is always a multiple of the file system's minimum block size of 512 bytes. So far I have allowed only one dd process per processor core (if there is only one core, there are two).
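
For example, reading exactly one gibibyte starting five gibibytes into the device (both values are multiples of 512 bytes; the output file name is hypothetical and follows the suffix scheme described above):

# skip counts in units of bs on the input side, so this reads the
# sixth 1-GiB block of the device into its own part file
dd if=/dev/sdc of=part_5.run bs=1G skip=5 count=1 iflag=fullblock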

The approach mentioned at the beginning of this section makes it somewhat cumbersome to determine whether 'dd' will still write to a file or whether the respective process has already finished. While 'dd' is still reading from the device - the larger the chosen read block size, the longer the read takes - the target file does not change and is not held open, so it cannot be checked with 'lsof'. The expected size of a file may therefore have to be calculated in advance and then checked continuously. During development it proved simpler to restart interrupted write operations from scratch; the time lost by restarting a dd call is absorbed by the transfer time of the files. Newer implementations of 'dd' offer the option 'status=progress', but it is not always available, and I have not yet had time to check whether it can be used here.
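
A hypothetical sketch of such a size check; note that the last block of a device may legitimately be smaller than the full block size:

BS=$((64 * 1024 * 1024))                    # expected size of a full block
ACTUAL=$(stat -c %s "part_7.run")           # current size of the dd output
if [ "$ACTUAL" -eq "$BS" ]; then
    mv "part_7.run" "part_7.transfer"       # complete: mark as ready to send
fi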

ssh and rsync
Frequent ssh calls are necessary to execute the remote commands and control the processes, and it turned out that the number of possible connections can itself be a scarce resource. To avoid bottlenecks, various calls were combined to make the script more efficient; for further development it would make sense to optimize this even more. Each start of an rsync process opens two new ssh connections, and if one of the machines involved reboots, the rsync processes get stuck and interfere with subsequent calls. It remains to be seen which solution is most appropriate here; so far I have worked around it by restarting the VMs involved.
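
One common way to reduce the number of separate ssh connections is OpenSSH connection multiplexing; this is a suggestion for further development, not something the script is confirmed to do today:

# all subsequent ssh and rsync calls reuse one master connection
SSH_OPTS="-o ControlMaster=auto -o ControlPath=~/.ssh/ctl-%r@%h:%p -o ControlPersist=10m"
ssh $SSH_OPTS root@46.16.76.151 true                  # opens the master
rsync -e "ssh $SSH_OPTS" --compress part_0.transfer root@46.16.76.151:/mnt/1/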

Invocation

The script should be called directly from a root shell. An ssh-agent with a loaded private ssh key is also required, and as things currently stand, the remote login must be possible as 'root'.

An example call would be:

./ddtransfer.sh --local_device /dev/sdc --remote_device /dev/vdd --TargetHost 46.16.76.151 --remote_transfer_dir /mnt --keep_logs

In this case

  • '/dev/sdc' is the local device to transfer,
  • '/dev/vdd' is the storage volume of the virtual machine to which the data is written,
  • '46.16.76.151' is the IP of the target VM, and
  • '/mnt' denotes the directory where the image files are cached remotely. Such a directory can also be specified locally.
  • The option '--keep_logs' causes all log files to be retained even if the process completes successfully.

The command

./ddtransfer.sh --restart report_ddt_15337519335231.log

ensures that a previously interrupted transfer is resumed. The file report_ddt_15337519335231.log (example) is created when a transfer is called for the first time.

Completing the transfer with ddtransfer.sh

At the end, the report file is processed and it is checked whether the initial and final checksums match, or whether the latter are missing altogether. Missing checksums - they are computed after each block has been written to the target device - can occur when the connection between source and target host was broken. If a checksum is missing or differs, it is recalculated; if it still differs, the affected block is read out, transferred and checksummed again in its entirety. The dd calls are logged completely, so they can easily be repeated. If the checksum is still not correct, a corresponding error message is issued.
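
A hypothetical sketch of that final comparison; the real script reads both sums from its report file, whose format is assumed here to be '<block> <initial checksum> <final checksum>':

while read -r block initial final; do
    if [ "$initial" != "$final" ]; then     # mismatch or missing final sum
        echo "block $block differs: re-read, re-send and re-check it"
    fi
done < report_checksums.txt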

In case of error messages, the log files are retained. After checking the facts, the individual steps can be restarted manually if necessary.

Outlook

For future development, it needs to be investigated further how the individual program steps behave under multiple parallel calls, relative to one another and to the other parts. This should improve the balancing and speed up the overall processing.

Apart from that, the agenda includes:

  • skipping the transfer of blocks whose checksum matches that of a previously transferred block,
  • a better status display,
  • the dd option conv=noerror,sync,
  • complete cleanup of hanging rsync and ssh processes when ddtransfer.sh is called again, without having to restart the source and target computers,
  • resuming rsync jobs that have already been started,
  • the expansion of the status file, among other things to improve the resumption of transfers, and
  • shorter file names in the temporary directories.


The program code can be found at devops.profitbricks.com.