No Data Corruption & Data Integrity
Learn what No Data Corruption & Data Integrity means and how it benefits the files within your website hosting account.
Data corruption is the process of files becoming damaged as a result of some hardware or software failure, and it is one of the main problems web hosting companies face: the larger a hard drive is and the more information stored on it, the more likely it is for data to become corrupted. Various fail-safes exist, but the data often gets damaged silently, so neither the file system nor the administrators notice anything. As a result, a corrupted file is treated as a healthy one, and if the hard drive is part of a RAID, that file is duplicated on all the other drives. In principle this provides redundancy, but in practice it makes the damage worse. Once a file is damaged, it becomes partially or fully unreadable: a text document will no longer open, an image will display a random blend of colors if it opens at all, and an archive will be impossible to unpack, so you risk losing your content. Although the most widely used server file systems include various consistency checks, they often fail to discover a problem early enough, or they need a long time to verify all files, during which the server is not operational.
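To see why silent corruption is so hard to catch, consider that a single flipped bit produces no I/O error at all; the only way to notice it is to compare the file against a previously recorded fingerprint. Here is a minimal Python sketch (the `sha256` helper and sample data are illustrative, not part of any real hosting platform):

```python
import hashlib

def sha256(data: bytes) -> str:
    """Return the SHA-256 fingerprint of a block of data."""
    return hashlib.sha256(data).hexdigest()

original = b"important website content"
baseline = sha256(original)

# Simulate silent corruption: a single bit flips on disk,
# with no error reported to the file system.
corrupted = bytearray(original)
corrupted[0] ^= 0x01
damaged = bytes(corrupted)

# Without a stored fingerprint the bad copy looks like an ordinary
# file; with one, the mismatch is detected immediately.
print(baseline == sha256(damaged))  # -> False: corruption detected
```

Even though only one bit changed, the fingerprint no longer matches, which is exactly the kind of mismatch a checksumming file system looks for.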
No Data Corruption & Data Integrity in Semi-dedicated Hosting
You will not encounter any silent data corruption issues if you acquire one of our semi-dedicated hosting plans, since the ZFS file system that we use on our cloud hosting platform employs checksums to guarantee that all files are intact at all times. A checksum is a unique digital fingerprint assigned to every file stored on a server. Because we keep all content on multiple drives simultaneously, the same file carries the same checksum on each drive, and ZFS compares these checksums across the drives in real time. If it detects that a file is corrupted and its checksum differs from what it should be, ZFS immediately replaces that file with a healthy copy, eliminating any chance of the bad copy being synchronized to the other drives. Few file systems use checksums in this way, which makes ZFS far more dependable than file systems that cannot identify silent data corruption and end up duplicating bad files across hard drives.
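The compare-and-repair behavior described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not ZFS's actual implementation (ZFS checksums individual blocks and stores the fingerprints in metadata); the `repair_mirrors` helper and the sample "drives" are assumptions made for the example:

```python
import hashlib
from collections import Counter

def checksum(data: bytes) -> str:
    """Digital fingerprint of one mirrored copy."""
    return hashlib.sha256(data).hexdigest()

def repair_mirrors(mirrors: list[bytes]) -> list[bytes]:
    """Compare fingerprints across mirrored copies and overwrite any
    copy whose checksum disagrees with the majority of the mirrors."""
    sums = [checksum(copy) for copy in mirrors]
    majority, _ = Counter(sums).most_common(1)[0]
    healthy = mirrors[sums.index(majority)]
    return [copy if checksum(copy) == majority else healthy
            for copy in mirrors]

# Three mirrored drives; the copy on drive 1 has been silently damaged.
drives = [b"site data", b"site dXta", b"site data"]
repaired = repair_mirrors(drives)
print(repaired[1] == b"site data")  # -> True: bad copy replaced
```

The key point mirrored here is that the bad copy is replaced from a healthy one as soon as the mismatch is seen, rather than being propagated to the other drives.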