04 April 2018 Reading time: 9 minutes

How we developed our backup solution. Part two

Alexander Bryukhanov

The chief technical officer of ISPsystem
This is a continuation of the story about how ISPsystem developed a backup solution. Told by Alexander Bryukhanov, the chief technical officer of ISPsystem. Read the first part here.

Perfect is the enemy of good

Developing a backup solution, like installing and configuring software, has always been a tricky task for us. When software comes from repositories, you can never be entirely sure what you will get. And even if everything has been set up perfectly, the engineers will break something sooner or later. As for backup copies, people normally reach for them only when problems arise, and if something then does not work as they expected … well, you understand.
There are actually quite a few approaches to backing up, but all of them pursue the same goal: to make the backup as fast and as cheap as possible.

Trying to please everyone

2011. It had been more than a year since the death of full server backups. Of course, backup copies of virtual servers were still being made, and they still are. For example, at WHD.moscow someone told me about a really simple way to back up virtual servers through live migration. Still, it does not happen as often as it did 10–15 years ago.
We started developing the fifth version of our products based on our framework, in which we had implemented a system of events and internal calls.
We decided to implement a truly flexible and universal approach to setting up backups, so that users could adjust the schedule, choose the type and contents of backup copies, and spread them across various storages. We also planned to extend this solution to several products.
Backups are made for different purposes. Some people make backup copies to protect against hardware failures, others want to guard against data loss caused by administrator mistakes. And we were naive enough to try to please everyone.
Here is what our attempt to build a flexible system looked like:

We added user storages. In this case, ready-made archives have to be kept in two separate places. But here you run into a problem: if for some reason an archive cannot be uploaded to one of the storages, can you consider such a backup complete?

We added archive encryption. It is simple until you realize what happens if a user changes the password.

What am I trying to say? This insane flexibility gave rise to a huge number of usage scenarios, and it was almost impossible to test them all. Therefore, we decided to follow the path of simplification. Why ask users whether they want to save metadata when it only takes up a few kilobytes? And do they really care which archiver we use?
Another funny mistake: one user limited the backup window to between 4:00 and 8:00. The problem was that the backup process itself was scheduled to run at 3:00 daily (the standard @daily setting). The process started, determined that it was not allowed to run at that time, and simply exited. No backup copies were made, of course.

Reinventing the wheel: dar

In the mid-2010s, clusters and clouds were gaining popularity. There was a trend of “let’s manage not one server but a group of them and call it a cloud” :). And it affected our ISPmanager.
Since we now had many servers, the idea of compressing data on a separate server resurfaced. As many years before, we first tried to find a ready-made solution. Bacula was still alive but still complicated: to manage it, we would probably have to write a separate panel. And then I came across dar, which helped to implement many of the ideas behind ispbackup. It seemed like an ideal solution that would let us manage the backup process the way we wanted.
In 2014, we wrote the new solution using dar. But there were two serious problems. First, dar archives could only be unpacked by the original archiver (i.e., dar itself). Second, dar produced its file listing in XML format.

Thanks to this utility we learned that when a C program allocates memory in small blocks, it is practically impossible to return that memory to the system without terminating the process (on CentOS 7 this applied to blocks smaller than 120 bytes).

But otherwise, I really liked it. So in 2015 we decided to reinvent the wheel and write our own isptar. As you may guess, we chose the tar.gz format because it was fairly easy to implement. I had already figured out all sorts of PAX headers back when I wrote ispbackup.
I must say that there was not much documentation on the subject, so at the time I had to spend a while learning how tar handles long file names and large sizes. These restrictions were baked into the original tar format: 100 bytes for the file name, 155 for the directory prefix, 12 bytes for the octal file size, and so on. Well, yes, 640 kilobytes is enough for everyone! Ha!
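For reference, here is a minimal sketch (in Python, not the actual isptar code) of the classic ustar header layout; its fixed-width fields are exactly where those limits come from:

# Classic ustar header: one 512-byte block per file. The fixed field widths
# are the source of tar's historical limits: a 100-byte name, a 12-byte
# octal size and a 155-byte directory prefix.
USTAR_FIELDS = (
    ("name", 100), ("mode", 8), ("uid", 8), ("gid", 8),
    ("size", 12), ("mtime", 12), ("chksum", 8), ("typeflag", 1),
    ("linkname", 100), ("magic", 6), ("version", 2),
    ("uname", 32), ("gname", 32), ("devmajor", 8), ("devminor", 8),
    ("prefix", 155),
)

def parse_header(block: bytes) -> dict:
    """Split a 512-byte tar header block into its fixed-width fields."""
    fields, offset = {}, 0
    for name, width in USTAR_FIELDS:
        fields[name] = block[offset:offset + width].rstrip(b"\x00 ")
        offset += width
    fields["size"] = int(fields["size"] or b"0", 8)  # size is stored as ASCII octal
    return fields

Anything that does not fit into these fields has to go into PAX extension headers, which is exactly the part that took time to figure out.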
There were several problems to solve. The first was to get the file listing quickly, without unpacking the archive completely. The second was to be able to extract an arbitrary file without full decompression. The third was to keep the format a valid tgz that any archiver can unpack. And we solved all three!

How do I start unpacking an archive from a specific offset?

It turns out that gz streams can simply be concatenated! A simple command will prove it to you:

cat 1.gz 2.gz | gunzip -
You get the contents of both files, one after the other, without any errors. So if each file is written as a separate gzip stream, the problem is solved. Of course, this lowers the compression ratio, but only slightly.
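To illustrate the idea, here is a minimal sketch in Python (not the actual isptar code, and without the tar headers): each file is written as its own gzip member, the member offsets are remembered, and a single file can then be extracted by seeking to its offset and decompressing just that one member.

import gzip, zlib

def write_archive(path, files):
    """Write every file as a separate gzip member and record its offset."""
    offsets = {}
    with open(path, "wb") as out:
        for name, data in files.items():
            offsets[name] = out.tell()        # where this member starts
            out.write(gzip.compress(data))    # one gzip stream per file
    return offsets

def read_member(path, offset):
    """Extract one file: seek to its member and decompress only that member."""
    with open(path, "rb") as f:
        f.seek(offset)
        d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)  # expect a gzip wrapper
        return d.decompress(f.read())         # stops at the end of the member

offsets = write_archive("demo.gz", {"a.txt": b"hello\n", "b.txt": b"world\n"})
print(read_member("demo.gz", offsets["b.txt"]))  # b'world\n'

And the concatenation of all the members is still a stream that gunzip reads from start to finish, which is what keeps the archive compatible with ordinary tools.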
Getting the listing is even easier. Let's put the listing at the end of the archive as a regular file and record the offset of every file in it (by the way, dar also keeps its listing at the end of the archive).
Why at the end? When you back up a hundred gigabytes, you may not have enough space to store the entire archive, so you upload it in parts as you create it. The nice thing is that if you need to retrieve a single file, you only need the listing and the part that contains its data.
There was only one problem left: how do you find the offset of the listing itself? For this, I put service information about the archive at the end of the listing, including the packed size of the listing. I also appended the packed size of that service information (just a couple of digits) at the very end, as a separate gz stream. So to get the listing quickly, you only need to read the last few bytes and unpack them, then read the service information (its offset from the end of the file is now known), and finally the listing itself (its offset is taken from the service information).
Here is a simple example of a listing, made up of several gz streams. We start by unpacking the last, tiny stream (found just by analyzing the final 20–40 bytes). Then we unpack the 68 bytes of packed service information that precede it. And finally, we unpack another 6,247 bytes to read the listing itself, whose real size is 33,522 bytes.

etc/.billmgr-backup root#0 root#0 488 dir
etc/.billmgr-backup/.backups_cleancache root#0 root#0 420 file 1487234390 0
etc/.billmgr-backup/.backups_imported root#0 root#0 420 file 1488512406 92 0:1:165:0
etc/.billmgr-backup/backups root#0 root#0 488 dir
etc/.billmgr-backup/plans root#0 root#0 488 dir

…

listing_header=512
listing_real_size=33522
listing_size=6247
header_size=68
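
Putting it together, here is a minimal sketch of the lookup in Python. It is not isptar's actual code: the exact on-disk layout and the scan for the gzip magic bytes are assumptions, and the field names are simply taken from the example above.

import io, zlib

def gunzip_member(buf):
    """Decompress a single gzip member that starts at the beginning of buf."""
    d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)
    return d.decompress(buf)

def read_listing(path):
    """Assumed layout: [file members ...][listing][service info][tiny size record]."""
    with open(path, "rb") as f:
        f.seek(0, io.SEEK_END)
        end = f.tell()

        # 1. Scan the last few dozen bytes for the start of the tiny trailing
        #    member; it holds the packed size of the service information.
        f.seek(max(0, end - 40))
        tail = f.read()
        start = tail.rfind(b"\x1f\x8b")              # gzip magic (heuristic)
        header_size = int(gunzip_member(tail[start:]))
        trailer_size = len(tail) - start

        # 2. The service information sits right before the trailer and carries
        #    the packed size of the listing (listing_size in the example).
        f.seek(end - trailer_size - header_size)
        info = gunzip_member(f.read(header_size)).decode()
        fields = dict(line.split("=", 1) for line in info.splitlines() if "=" in line)
        listing_size = int(fields["listing_size"])

        # 3. The listing itself sits right before the service information.
        f.seek(end - trailer_size - header_size - listing_size)
        return gunzip_member(f.read(listing_size)).decode()

Three small reads and three small decompressions are enough to get the full listing, no matter how big the archive is.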

It may sound a little confusing; I even had to look into the source file to remember how I had done it. You can also look at the source of isptar, which, like ispbackup, is available on GitHub.