====== CloudBacko Upgrade Advisory (#34490) - Data corruption issue affecting all backup types in v5.1.4.0 to v5.1.4.21 ======
**Posted:** 2022-06-02
**Revised:** 2022-07-11
We have recently identified and confirmed a critical bug in CloudBacko Pro / Lite / Home (v5.1.4.0 to v5.1.4.21) that affects all types of Backup Sets where the source backup data is larger than 32 MB, resulting in unrestorable v5.1.4.x backup data.
Restoring data from backup dates prior to v5.1.4.0 is not affected (i.e., v5.1.0.0 data is not affected).
* **UPDATE: On 2022-July-11, CloudBacko v5.3.2.0 was released; upgrade to this latest release to fix the issue.**
===== What does this mean? =====
* Affects CloudBacko v5.1.4.0 up to v5.1.4.21, for **ANY Backup Set type where source files have a raw size greater than 32 MB**, and Deduplication = OFF.
* For example, "tasks.zip" at 40 MB is affected.
* For example, "mylist.txt" at 2 MB is not affected.
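To gauge exposure, it can help to enumerate which source files exceed the 32 MB threshold before deciding whether a re-backup is needed. The following is a minimal, hypothetical helper for that check (it is not a CloudBacko tool; the function name and directory-walk approach are assumptions for illustration only):

```python
import os

# Threshold from the advisory: files with a raw size greater than 32 MB
# may be affected when backed up by v5.1.4.0 - v5.1.4.21.
SIZE_THRESHOLD = 32 * 1024 * 1024  # 32 MB in bytes

def files_over_threshold(root):
    """Walk `root` and yield (path, size) for files larger than 32 MB."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size = os.path.getsize(path)
            except OSError:
                continue  # skip unreadable or vanished entries
            if size > SIZE_THRESHOLD:
                yield path, size
```

Running this against a Backup Set's source folders lists the files that fall under the advisory's size condition; files at or below 32 MB are not chunked for Deduplication and are not affected.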
===== What happened? =====
* The bug is in the lookup (duplicate-data detection) logic used during backup of all Backup Set types when Deduplication is OFF.
* v5.1.4.0 to v5.1.4.21 always look up duplicated data in the index, even when Deduplication is OFF. The lookup result is wrong, so the file data is already corrupt at backup time.
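The failure mode described above can be sketched schematically. This is NOT CloudBacko's actual implementation; every name below (`store_chunk`, the `index` dict, the `fixed` flag) is invented purely to illustrate "a dedup lookup that runs even when Deduplication is OFF":

```python
# Schematic sketch of the described failure mode -- all names invented.

def store_chunk(chunk, index, dedup_enabled, fixed=True):
    """Return a block reference for `chunk`.

    Correct behavior: the duplicate-data lookup is consulted only when
    Deduplication is ON. The buggy v5.1.4.x behavior consulted the index
    unconditionally, so with Deduplication OFF a (possibly wrong) hit
    could make the backup reference the wrong block.
    """
    key = hash(chunk)  # stand-in for the real chunk checksum
    if (dedup_enabled or not fixed) and key in index:
        # Buggy path when dedup is OFF: trusts a lookup it should skip.
        return index[key]
    ref = ("block", len(index))  # pretend a new block was uploaded
    index[key] = ref
    return ref
```

In this sketch, the fixed behavior skips the index lookup entirely when Deduplication is OFF and always writes a fresh block, while the buggy path reuses whatever the index returns.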
===== What can the user expect? =====
* Restores will fail due to corrupt backup data.
===== What are the affected CloudBacko versions? =====
* CloudBacko v5.1.4.0 up to v5.1.4.21, where **Deduplication = OFF** at any point during backup runs within these versions
* CloudBacko v5.1.4.0 up to v5.1.4.21, where **Deduplication = ON and Migrate Data = ENABLED** (checkbox marked)
* CloudBacko v5.1.4.0 up to v5.1.4.21, where **Deduplication = ON and Migrate Data = DISABLED** (checkbox unmarked)
* [Updated: 2022-06-22] Additional analysis has shown that Backup Sets where **Deduplication = ON (enabled)** are __not affected__ by this corruption; this supersedes the two Deduplication = ON items above.
===== What are the default Deduplication settings? =====
* By default, Deduplication is ON (enabled), both at the module level and per Backup Set.
* It is OFF (disabled) only if the Deduplication toggle is switched OFF manually for a Backup Set.
* The above describes default behavior. Other factors may influence these settings, so review your settings to verify the behavior in your environment.
===== What if the user's Source data is less-than 32MB? =====
* It is not affected. Such source data is not chunked for Deduplication.
===== What if the user's Source data to backup is a mix of greater-than and less-than 32MB? =====
* It depends on the conditions stated above: files larger than 32 MB are affected, files smaller than 32 MB are not.
===== Is there a workaround? =====
* There is no workaround. The data must be backed up again after the fix is applied.
===== What action do I need to take to fix this problem? =====
* **Take immediate action** to download and install the latest **hotfix v5.1.4.22 (or higher)** via the CloudBacko website (https://www.cloudbacko.com/cloudbacko-support).
* **UPDATE: On 2022-July-11 we released CloudBacko v5.3.2.0; it is recommended to upgrade to this latest release as it includes the prior hotfixes. Hotfixes for v5.1.4.x are no longer available.**
* Once the required minimum hotfix version is applied, the next backup job performs a one-time routine: it removes Blocks (BAK) from the Destination that contain backup files referencing invalid checksum file chunks, re-uploads the source backup files that were corrupted, and correctly updates the migrate status table. A flag is set in the Index to denote that the fix was applied (reverting the Index to an older copy will re-trigger the fix). [Shared Blocks (BAK), i.e. blocks containing multiple small source files (valid and corrupt), will not be purged until all contents of the shared block have cycled through changes and surpassed the Retention period.]
===== What if my maintenance has already expired? How do I upgrade? =====
* **Stop! Do not upgrade** until you contact a member of our [[https://www.cloudbacko.com/support|Support team]] for assistance with your License upgrade.