Veeam 8 has been out for a while now, and has received its first patch, too. I’ve been running some tests using the new/improved Cloud Connect functionality, which allows you to send backups or backup copies to a repository located at a service provider. To reduce network traffic between sites, I’m using WAN Accelerators on both sides. The basic setup is that I’m running backups to a VBR server at the remote site, and then a backup copy of those to the Cloud Connect repository over a WAN link. You can run straight backups to the Cloud Connect repository (it acts like any other repository), but then you can’t leverage WAN Acceleration, which is only available for Backup Copy and Replication jobs (and only with an Enterprise Plus license).
Having run the first full backups to the remote VBR server’s local repository, I set up the Backup Copy job. The way it works is that it looks for the latest restore points in the defined repository/job, and when it finds them, it starts moving them over to your specified destination (while adhering to the schedule you set). After that, if you’ve set it to ‘Continuous’ mode, it goes idle and waits for new restore points to appear, then moves those, and so on.
What I had not taken into account is that I had defined multiple backup jobs, which I thought were identical, and then added those to the backup copy job. After a while, I noticed that some of the jobs were not being processed by the copy job. The VMs in those jobs were listed as “Pending” in the backup copy job. After a short investigation, the reason turned out to be that one of the jobs had a storage optimization setting of “Local”, while the rest had “LAN”. This setting affects the block size of the backups, ranging from Local (8192 KB) down to WAN (256 KB).
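To make the mismatch concrete, here’s a minimal sketch of the check I was effectively doing by hand: compare the storage optimization setting of every source job and flag the odd ones out. This is illustrative Python only, not Veeam’s API; the job names and the block-size table are made up for the example (in reality you’d check each job’s Storage -> Advanced -> Storage tab, or the Veeam PowerShell cmdlets).

```python
from collections import Counter

# Approximate block size (KB) per storage optimization target.
# These values are assumptions for illustration, not pulled from Veeam.
BLOCK_SIZE_KB = {
    "WAN": 256,
    "LAN": 512,
    "Local": 1024,
    "Local-16TB+": 8192,
}

def mismatched_jobs(jobs):
    """Return (name, optimization) pairs that differ from the majority setting."""
    counts = Counter(opt for _, opt in jobs)
    majority, _ = counts.most_common(1)[0]
    return [(name, opt) for name, opt in jobs if opt != majority]

# Hypothetical source jobs feeding one backup copy job:
jobs = [
    ("SQL-Backup", "LAN"),
    ("Web-Backup", "LAN"),
    ("File-Backup", "Local"),  # this is the job that sat "Pending"
]
print(mismatched_jobs(jobs))   # [('File-Backup', 'Local')]
```

The odd job out is exactly the one whose VMs showed as “Pending” in my copy job.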
I wasn’t aware of this limitation in Backup Copy jobs: all VMs being processed have to have identical storage optimization settings. You can’t mix Local and LAN, or LAN and WAN.
The easiest fix for me (though maybe not the most optimal) was to create a second backup copy job, add the jobs with “LAN” optimization to that one, and leave the lone “Local” job in the original backup copy job. Alternatively, as suggested in one of the posts in the Sources section, you could delete the appropriate backups, change the settings in the backup jobs, and then let the backup copy job work its magic.
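The workaround boils down to partitioning the source jobs by their storage optimization setting, one backup copy job per setting. A quick sketch, again with made-up job names and plain Python rather than anything Veeam-specific:

```python
from collections import defaultdict

def group_by_optimization(jobs):
    """Map each storage optimization setting to the source jobs that use it."""
    groups = defaultdict(list)
    for name, opt in jobs:
        groups[opt].append(name)
    return dict(groups)

# Hypothetical source jobs with mixed settings:
jobs = [
    ("SQL-Backup", "LAN"),
    ("Web-Backup", "LAN"),
    ("File-Backup", "Local"),
]

# Each group becomes its own backup copy job:
for opt, names in group_by_optimization(jobs).items():
    print(f"Copy job for {opt}: {names}")
```

In my case that meant two copy jobs: one fed by the two “LAN” jobs, and one for the single “Local” job.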
After the second job was enabled, it immediately started copying over the latest restore points to the Cloud Connect repository.
TL;DR: When creating Backup Copy jobs, make sure all the included jobs use the same storage optimization setting (found while creating the job, on the Storage page -> Advanced -> Storage tab).