Data corruption is the damage of files due to a hardware or software failure, and it is one of the main problems web hosting companies face: the larger a hard drive is and the more information it stores, the more likely it is for data to get corrupted. Various fail-safes exist, yet data often gets corrupted silently, so neither the file system nor the administrators notice anything. As a result, a corrupted file is treated as a healthy one, and if the drive is part of a RAID, the file is replicated to all the other drives. In principle this provides redundancy, but in practice it makes the damage worse. A corrupted file becomes partly or fully unreadable: a text document will no longer open correctly, an image will display a random mix of colors if it opens at all, and an archive will be impossible to unpack, so you risk losing your site content. Although the most widely used server file systems include various integrity checks, they often fail to catch a problem early enough, or they need a long time to scan all the files, during which the web server is not operational.
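To illustrate why corruption can be "silent", here is a minimal Python sketch (not actual file system code): a single flipped bit leaves the file size and metadata unchanged, so a naive check passes, while a checksum comparison exposes the damage immediately.

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Return a SHA-256 hex digest that acts as a content fingerprint."""
    return hashlib.sha256(data).hexdigest()

original = b"important website content"

# Simulate silent corruption: flip one bit in the stored copy.
damaged = bytearray(original)
damaged[0] ^= 0x01
corrupted = bytes(damaged)

# Size and metadata are unchanged, so a metadata-only check passes...
assert len(corrupted) == len(original)

# ...but the checksums differ, exposing the corruption.
print(sha256_digest(original) == sha256_digest(corrupted))  # False
```

Without a stored checksum to compare against, nothing in the file's metadata hints that anything is wrong, which is exactly how a bad copy ends up replicated across a RAID.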

No Data Corruption & Data Integrity in Hosting

If you host your Internet sites in a hosting account with our firm, you will not need to worry about your data ever getting damaged. We can ensure that because our cloud hosting platform uses the state-of-the-art ZFS file system, one of the very few file systems that assigns a checksum, or unique digital fingerprint, to every single file. All the data that you upload is saved in a RAID, i.e. simultaneously on multiple NVMes. Any file system can synchronize files between the drives in such a setup, but there is no real guarantee that a file will not get corrupted: this can happen during the writing process on any drive, after which the corrupted copy may be replicated to the other drives. What makes the difference on our platform is that ZFS compares the checksums of the files on all of the drives in real time, and if a corrupted file is discovered, it is replaced with a healthy copy with the correct checksum from another drive. This way, your data remains unharmed no matter what, even if an entire drive fails.
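The self-healing behavior described above can be sketched in a few lines of Python. This is a simplified illustration, not ZFS itself: real ZFS verifies each block against a checksum stored in its block pointer, whereas this toy version settles disagreements between mirrored copies by majority vote.

```python
import hashlib
from collections import Counter

def checksum(data: bytes) -> str:
    """Fingerprint one replica's contents."""
    return hashlib.sha256(data).hexdigest()

def self_heal(replicas: list[bytes]) -> list[bytes]:
    """Replace any replica whose checksum disagrees with the majority
    using a copy from a drive that holds the agreed-upon healthy data."""
    sums = [checksum(r) for r in replicas]
    good_sum, _ = Counter(sums).most_common(1)[0]
    good_copy = replicas[sums.index(good_sum)]
    return [r if checksum(r) == good_sum else good_copy for r in replicas]

# Three mirrored copies; one was silently corrupted during the write.
drives = [b"site data", b"site data", b"sitE data"]
healed = self_heal(drives)
print(all(d == b"site data" for d in healed))  # True
```

The key point the sketch captures is that the comparison happens automatically and the bad copy is overwritten with a known-good one, so the corruption never spreads to the other drives.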

No Data Corruption & Data Integrity in Semi-dedicated Servers

You won't need to deal with any silent data corruption issues whatsoever if you purchase one of our semi-dedicated server packages, as the ZFS file system that we use on our cloud hosting platform employs checksums to make sure that all files are intact at all times. A checksum is a unique digital fingerprint assigned to each and every file stored on a server. Since we store all content on multiple drives at the same time, a given file has the same checksum on all the drives, and ZFS compares the checksums between the different drives in real time. If it detects that a file is corrupted and its checksum differs from what it should be, it replaces that file with a healthy copy right away, preventing the corrupted copy from being synchronized to the remaining drives. Very few file systems use checksums this way, which makes ZFS far superior to file systems that cannot identify silent data corruption and end up duplicating damaged files across drives.
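The idea of a fingerprint "assigned to each and every file" can be shown with a small, hypothetical manifest in Python: record each file's checksum at write time, then verify stored copies against it to pinpoint exactly which files were damaged. The file names and contents here are made up for illustration.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """A per-file checksum, analogous to the fingerprint ZFS keeps for data."""
    return hashlib.sha256(data).hexdigest()

# Record the expected checksum of each file at write time.
files = {"index.html": b"<html>...</html>", "logo.png": b"\x89PNG..."}
manifest = {name: fingerprint(data) for name, data in files.items()}

# Later, verify each stored copy against its recorded fingerprint.
files["logo.png"] = b"\x89PNG garbled"  # simulated silent corruption
damaged = [name for name, data in files.items()
           if fingerprint(data) != manifest[name]]
print(damaged)  # ['logo.png']
```

A file system without such fingerprints has nothing to verify against, which is why it will happily mirror a damaged file to every other drive.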