|Release Date:||24 Jan, 2023|
|Build Date:||23 Dec, 2022|
|Linux Manager/Agent Build Version:||6.16.5 Build 151|
|Windows Agent Build Version:||6.16.5 Build 150|
|Important Upgrade Notice|
Customers upgrading from Server Backup Manager 6.2.2 or later can upgrade as normal; otherwise, please upgrade to 6.2.2 first.
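As a rough illustration of the rule above, a check like the following decides whether a direct upgrade is possible. This is a minimal sketch; the installed version string shown is only an example.

```shell
# Minimal sketch of the upgrade-path rule above. The value of 'current'
# is an example; substitute your installed SBM version.
current="6.1.0"
required="6.2.2"
# sort -V orders version strings numerically; if the smaller of the two
# is $required, then current >= required and a direct upgrade is fine.
if [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  decision="direct upgrade OK"
else
  decision="upgrade to 6.2.2 first"
fi
echo "$decision"
```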
|MySQL and Virtuozzo|
If you run MySQL in Virtuozzo containers, you will need to upgrade both the Agent and the Backup Manager to address issues with recovery.
- Support for MySQL 8.0.30 and higher versions is added with this SBM release.
- Support for installing the R1Soft cPanel plugin on the Jupiter theme. For more information on how to use the cPanel plugin with the Jupiter theme, click here.
- A new Server Backup Agent version has been released that includes support for Debian 11/5.10 kernels. All systems running 5.10 kernels should be updated to the latest agent version, 6.16.5 Build 150.
- Bugfix: The new Server Backup Agent addresses several critical issues related to kernel v4.18 that caused server hangs or crashes during backups and restores.
You must have an active maintenance account to upgrade. If you have a perpetual (non-heartbeat) license, it will be converted to a heartbeat license automatically upon upgrade. Make sure your SBM has HTTPS connectivity to complete this conversion. During the upgrade, you will be notified of the change and given the option to revert to the previous version you were running.
|Component upgrade order|
Server Backup Manager software must be upgraded in the following order:
Data Center Console
After completing the software updates in environments using Data Center Console, R1Soft recommends refreshing the associated SBM server data for proper syncing. In some cases a restart of the SBM service may be necessary.
|GNU C Library Compatibility|
Backup Agent software requires a glibc version of 2.5 or greater.
Backup Agents on Linux distributions with older glibc libraries should remain on Backup Agent version 5.10 or earlier. Refer to the following Knowledge Base article for more information: Error - Linux cdp agent fails with "Floating Point Exception".
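To see which glibc release a machine is running before choosing an Agent version, something like the following can be used. This is a sketch: `ldd --version` reports the glibc release on glibc-based distributions, and other libcs fall through to "unknown".

```shell
# Print the system glibc version, which must be 2.5 or newer for current
# Backup Agent builds. On non-glibc systems 'ldd --version' fails and we
# report "unknown" instead.
glibc_ver="$(ldd --version 2>/dev/null | sed -n '1s/.* //p')"
[ -n "$glibc_ver" ] || glibc_ver="unknown"
echo "glibc version: $glibc_ver"
```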
|Kernel Modules Update|
We are pleased to announce that BETA support for 4.18 kernels is available again with the latest Server Backup Agent. With the release of the v6.16.4 agent, it is no longer necessary to downgrade the agent to protect 4.18 kernels. Our immediate next priority is to provide support for Ubuntu 22.04/5.15 kernels and Debian 11/5.10 kernels, along with automated module builds for 5.4 and 4.19 kernels. Please watch this space in the coming weeks for updates and timelines related to these kernel versions. We thank you for your patience.
Modules for 4.19 kernels (Debian 10) and 5.4 kernels (Ubuntu 18.04/20.04) have been promoted to stable this week.
Modules for these kernel versions will now build automatically from the protected machine by restarting the agent, or running "serverbackup-setup --get-module".
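The module fetch described above can be scripted defensively; this sketch only invokes `serverbackup-setup` where the Backup Agent is actually installed:

```shell
# Request a kernel module build for the running kernel, as described above.
# 'serverbackup-setup' ships with the Backup Agent, so guard for machines
# where the agent is not installed.
if command -v serverbackup-setup >/dev/null 2>&1; then
  serverbackup-setup --get-module
  status="module requested"
else
  status="Backup Agent not installed on this machine"
fi
echo "$status"
```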
We are now working towards adding support for 5.10/5.11 kernels to the upcoming 6.16.5 release. A release timeline should be available within the next 2 weeks.
We are tentatively targeting the last week of January 2023 for the GA release of Server Backup Manager 6.16.5.
The 6.16.5 release is expected to introduce support for 5.10/5.11 kernels, as well as multiple product fixes and improvements.
The next update will be provided in the first week of 2023.
We are targeting the week of January 23rd for the release of Server Backup Manager 6.16.5.
This release will introduce beta support for 5.10/5.11 kernels, will contain multiple product fixes and improvements, and will provide the necessary framework for future support of 5.13/5.14/5.15 kernels.
Our team is currently focused on adding support for 5.13/5.14/5.15 kernels, which will require an update to the backup agent.
We are currently working on the plan for the next release, which is also expected to include a few bug fixes. A timeline will be provided soon.
Modules for 4.18 kernels (Alma Linux 8, Rocky Linux 8, CloudLinux 8, RHEL 8) have been promoted to stable this week.
Modules for these kernel versions can now be built automatically from the protected machine by restarting the backup agent, or by running "serverbackup-setup --get-module".
Our team continues to work towards adding support for 5.14/5.15 kernels.
Our team is currently focused on correcting an issue preventing modules for 3.10 kernels and certain 4.18 kernels from auto-building from the backup agent.
We continue to focus our efforts on adding support for 5.14/5.15 kernels. We are hoping to share a timeline very soon!
: Work is progressing towards adding support for 5.15 kernels. The upcoming release of Server Backup Manager v6.18 will contain the framework enhancements necessary for new module support.
A 5.15 beta module release will follow soon after v6.18 is released, and a 5.14 beta module will be released soon after 5.15 beta modules are ready.
We are targeting the month of May 2023 for the release of Server Backup Manager v6.18. This release will coincide with the release of a new BETA kernel module for 5.15 kernels. A release of a BETA kernel module for 5.14 kernels will soon follow.
Additionally, at this time we are temporarily disabling automatic builds of 4.18 kernel modules for CloudLinux operating systems, while we address a performance concern.
In the meantime, pre-built CloudLinux modules can be found on the R1Soft Repo, and can be individually requested by opening a new Support ticket.
We are tentatively targeting the third week of May 2023 for the release of R1Soft Server Backup Manager 6.18.
This release will bring long-awaited updates to core SBM components, include important security updates, and introduce the framework necessary for 5.15 kernel support (Beta). Beta support for 5.14 kernels is expected soon after.
We are planning to release R1Soft Server Backup Manager 6.18 the week of May 22nd. This release will contain bug fixes, security updates, core component updates, and BETA support for 5.15 kernels.
Beta support for 5.14 kernels is expected to arrive soon after.
|Common Problems and Issues|
- When Disk Safes grow larger than 32TB, backups fail with the error "reset(): database or disk is full", even when sufficient storage space is available.
- If you are using DCC to centrally monitor and manage SBMs and you see a loss of synchronization post upgrade to SBM 6.16.2, as a first step click Refresh SBM on the DCC UI to force fetch the updates. If the issue remains, "Restart" the SBM that is failing to sync.
- File search runs indefinitely for protected machines with EFI partitions; the search gets stuck in /boot/efi. As a workaround, specify the starting path for the search.
- Multipath I/O storage configurations with duplicate device UUIDs are not supported and have been found to not function in some configurations.
- When upgrading DCC and SBM from 5.6.2 to 5.12+, associated SBM servers may appear "offline". To resolve this issue, remove and re-add the SBM servers.
- The merge task may fail when you attempt to merge a recovery point that was interrupted due to CLOB/BLOB issues. If the merge fails due to this issue, merge each recovery point one by one until the task is successful.
- Users who have Debian installations should note that a Debian-based machine that has both RPM and DEB packages installed is recognized as RPM-based only by Server Backup Manager.
- Users cannot set heap memory to more than 1024 MB using serverbackup-setup -m. Even if there is more than enough memory available, Server Backup Manager displays a message indicating that the memory specified must be smaller than 1024 MB. Edit server.conf to update memory settings.
- Users who have more than 1,500 mount points per device may experience performance issues during a file restore or when attempting to browse recovery and archive points.
- Server Backup 5.2.x and later do not support remote deployment to a Windows 2003 server because installing drivers requires user interaction.
- Users who use the CentOS LiveCD for bare-metal restores involving LVM may receive error messages that deltas failed to restore to a certain device number. Because the CentOS LiveCD uses its own LVM to manage its file systems, dm-0 and dm-1 are always in use. To avoid this issue, choose a target that is two LVM devices higher than the original. For example, if /tmp is originally on /dev/dm-1, you should choose /dev/dm-3 as your target.
- RecoveryCD versions 4.2.0 and 5.0.0 do not work when performing a bare-metal restore to a target that includes mdadm and LVM devices.
- Some users may receive an exception error message when attempting to exclude a large group of files in a directory when performing a restore.
- Microsoft Exchange Server 2007 users cannot restore a mailbox database to an alternate location.
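The "two devices higher" workaround in the CentOS LiveCD item above can be computed mechanically. This is a minimal sketch with an example device path:

```shell
# Compute a bare-metal restore target per the CentOS LiveCD workaround:
# dm-0 and dm-1 are held by the LiveCD's own LVM, so pick a target whose
# device number is the original's plus two. The path below is an example.
orig="/dev/dm-1"
num="${orig##*-}"                 # extract the trailing device number
target="/dev/dm-$((num + 2))"     # two LVM devices higher
echo "restore $orig deltas to $target"
```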
|Data Center Console issues|
- When editing a Policy in DCC, the Recovery Point limit set at the Volume level can be exceeded. Volume objects must be refreshed in order to see the limit changes. This is done by refreshing SBM or by editing and re-saving the Volume.
- The Data Center Console displays incorrect values in the Compression Type section of the Edit Disk Safe window after editing the values.
- Internet Explorer users who enable the Compatibility view may notice graphical display issues when attempting to view some Data Center Console pages, such as the Disk Usage, Volumes, and Policies pages. This issue does not occur in other Web browsers.
|Parallels Virtuozzo user issues|
- Not all files are downloaded for a container on Virtuozzo, and the system displays the error message "ERROR - Failed to download file" in the log file.
- Parallels Cloud Server 6.0 users may notice that Virtuozzo virtual machines are not detected as devices in the Disk Safe wizard. The VMs are still backed up. If you need granular file restore for a VM, you can back up the PCS VM by installing a Server Backup Agent in the VM guest operating system.
- Virtuozzo users may notice that when some containers are downloaded, the compressed file does not contain the first and last container listed in the recovery points.