This was an “all-in-one” Veeam appliance, so all Veeam components ran on this single box, including one Backup Proxy. The appliance has two P408i RAID controllers; each controller had 28x 12TB NL-SAS drives in a RAID 60 configuration. There are two volumes in the SOBR, for 576TB total capacity. You can also use 16TB NL-SAS drives, which increases total server capacity to 768TB while providing equal performance. There was a single Backup Job to a SOBR (Scale-Out Backup Repository) made up of extents provided by this server. All SOBR and job settings were at their defaults, except as noted: backup file compression was enabled (which in theory adds a bit more CPU load), and encryption was also enabled in order to test the maximum CPU load. During peak performance, the CPU was running at around 70%.
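The capacity figures quoted above can be sanity-checked with quick arithmetic. This is a sketch assuming the RAID-60 layout described elsewhere in the post: per controller, two RAID-6 groups of 12 data + 2 parity drives each, plus hot spares (which hold no data).

```python
# Quick sanity check of the usable-capacity numbers quoted in the post.
# Assumed layout: 2 controllers, each with two RAID-6 groups of
# 12 data + 2 parity drives (hot spares contribute no capacity).

def usable_tb(drive_tb, controllers=2, groups_per_controller=2,
              data_drives_per_group=12):
    """Raw usable capacity in TB, before filesystem overhead."""
    return controllers * groups_per_controller * data_drives_per_group * drive_tb

print(usable_tb(12))  # 12TB drives -> 576, matching the two-volume SOBR total
print(usable_tb(16))  # 16TB drives -> 768, matching the quoted upgrade figure
```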
Note: 45 VMs with a total of 5.5TB used space, backup repository based on Windows ReFS, backup encryption enabled, per-VM backup chains enabled.

Hardware and configuration:
- Windows Server 2019 with ReFS 64KB cluster size
- 2x Intel Xeon Gold 6252 CPU 2.1GHz (24 cores each)
- 2x RAID-60 with 128KB strip size on 2x (12+2) + 2 hot spares (575TB usable)

The results came out of close collaboration between Veeam and HPE, which was able to throw 100Gb/s at Veeam’s backup server (an HPE Apollo 4510 Gen10). So in this lab, in absolute numbers, Veeam was able to peak at 11.4 GB/s backup speed (bytes, not bits) with a single all-in-one backup appliance! It’s pretty incredible what the v11 engine is capable of, right? Of course, this does not mean such performance can only be achieved on HPE hardware, as Veeam is completely hardware-agnostic. Basically, you are guaranteed to achieve these numbers on this specific hardware if your source can keep up (remember, backup speed depends on the data source, proxies, network, and repositories); in the final test Veeam ran, the source was still the bottleneck. Here’s the screenshot from the run where 11.4 GB/s backup speed was reached. The processing rate shown is lower, at 10.3 GB/s, because it includes the “dead” time of job initialization when no data transfer is happening.
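The gap between the 11.4 GB/s peak and the 10.3 GB/s processing rate can be illustrated with simple arithmetic: the job clock starts before any data moves, so initialization time dilutes the average. The durations below are made-up example values chosen for illustration, not figures from the actual test.

```python
# Illustrative only: why the average "processing rate" sits below the peak
# transfer speed. Initialization "dead" time counts toward the job duration
# but moves no data. The numbers here are assumptions, not measured values.

def processing_rate_gbps(data_gb, transfer_rate_gbps, init_seconds):
    transfer_seconds = data_gb / transfer_rate_gbps
    return data_gb / (init_seconds + transfer_seconds)

# e.g. 5500 GB moved at a steady 11.4 GB/s with ~50s of job initialization:
print(round(processing_rate_gbps(5500, 11.4, 50), 1))  # prints 10.3
```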
Veeam is getting really close to shipping their v11 platform update. As usual, the first thing Veeam will do is share the RTM build with all of their partners, to give them some time to get acquainted with the new version before it becomes generally available in a few weeks. Meanwhile, Anton Gostev (SVP, Product Management at Veeam) shared some interesting feedback on the improved v11 engine in his weekly post. One area where v11 really changes the game is all-in-one backup appliances: deployments where all Veeam components are installed on a single server with lots of disks. Every release, Veeam kept enhancing the engine for this deployment scenario (general-purpose servers). But if it was more of an evolution before, v11 is a revolution, because it more than doubles the backup performance per appliance. To get there, Veeam had to make many changes even to the most basic stuff, for example how Veeam writes backup file content to storage. Veeam also had to revisit their shared-memory transport engine that passes data between source and target Data Movers running on the same box; incredibly, at these performance levels, even modern RAM speed can become a bottleneck if you don’t work with it optimally. Finally, once this was all behind them, as mentioned by Anton, Veeam ran into compute becoming the primary bottleneck on multi-CPU servers. So they had to implement full NUMA awareness and enhance their Data Mover placement logic to ensure the two never end up on different CPUs.
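The shared-memory transport idea mentioned above can be sketched in miniature. This is an illustration only, assuming nothing about Veeam's actual engine internals: one handle plays the source Data Mover writing a block into a shared segment, and a second handle attached by name plays the target reading it back, with no socket copy in between. For simplicity both handles live in one process here; in a real engine the second handle would be opened by the other Data Mover process using the same segment name.

```python
# Minimal sketch of handing a data block between two "Data Mover" endpoints
# via POSIX shared memory. Illustration only; not Veeam's implementation.
from multiprocessing import shared_memory

def handoff(block: bytes) -> bytes:
    src = shared_memory.SharedMemory(create=True, size=len(block))
    try:
        src.buf[:len(block)] = block                      # "source mover" writes
        dst = shared_memory.SharedMemory(name=src.name)   # "target mover" attaches
        try:
            return bytes(dst.buf[:len(block)])            # read back, no socket hop
        finally:
            dst.close()
    finally:
        src.close()
        src.unlink()                                      # free the segment

print(handoff(b"backup block") == b"backup block")  # prints True
```

The design point this models is locality: because both endpoints map the same physical pages, throughput is bounded by memory bandwidth rather than the network stack, which is exactly why RAM speed and NUMA placement start to matter at these rates.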