Backing up your VMs in the Cloud

Blog: FinOps news article

Data redundancy was the topic of March 2021.

At the beginning of the month, OVHCloud suffered a major production incident at its Strasbourg site: one datacenter went up in smoke and a second was partially destroyed. The cloud provider's communication was clear and fast: get your disaster recovery plan underway as soon as possible.

For technical managers, this means restoring and resuming operations through temporary measures designed to meet normal business requirements after an incident. But it only works if it has been properly planned beforehand.

For some, it was too late: their data was gone.

This kind of incident reminds us that, despite the many layers of virtualization, the Cloud is not magic: it remains something very physical, very material. Datacenters are places that concentrate many dangers: high-voltage electricity, heat, dust and other risk factors. That is why they are heavily secured and closely monitored.

OVHCloud is neither the first nor the last to suffer this kind of incident; other examples can be cited, such as an incident in Tokyo in 2017.

We did a little exercise comparing the levels of redundancy and security that the cloud providers on the market offer for virtual machines, to establish a common baseline.

Backing up your VMs in the Cloud

Snapshots in AWS

If you are an AWS customer, you use Amazon EC2 instances with Amazon EBS volumes attached as persistent disks for your machines. There are several types and service levels of EBS (see our article on the subject). AWS lets you set up automatic backups (snapshots), stored in Amazon S3, on a schedule you configure according to your needs.
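As an illustration, here is a minimal sketch of taking a manual EBS snapshot with boto3, the AWS SDK for Python; the region, volume ID and tag values are placeholders, and scheduled snapshots would normally be driven by Amazon Data Lifecycle Manager rather than ad hoc calls like this one.

    import boto3

    # Point the client at the region where the EBS volume lives (placeholder region).
    ec2 = boto3.client("ec2", region_name="eu-west-3")

    # Create a point-in-time snapshot of the volume; the ID below is a placeholder.
    response = ec2.create_snapshot(
        VolumeId="vol-0123456789abcdef0",
        Description="Nightly backup of the app data volume",
        TagSpecifications=[
            {"ResourceType": "snapshot", "Tags": [{"Key": "team", "Value": "finops"}]},
        ],
    )
    print(response["SnapshotId"])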

On the price side, snapshots are incremental: you are only charged for the data that has changed since the previous snapshot, which keeps costs optimized. You must also take into account the cost of the underlying S3 storage, which remains cheap.

In terms of security and redundancy, you can rest easy too: Amazon S3, which is almost 15 years old, is a highly redundant product, sold on its ability to never lose anything; each object is replicated three times across availability zones, ensuring excellent availability.

You even have the option of lowering the availability level to pay as little as possible, for example with Amazon S3 One Zone-IA.

If your EC2 instance disappears or its availability zone goes down, S3 is still there.

Another product, for insurance on an even larger scale, is AWS Backup: the big guns. It centralizes backups across many services, such as Amazon EFS, Amazon EBS and Amazon DynamoDB.
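For illustration, a minimal sketch of launching an on-demand AWS Backup job with boto3; the vault name, resource ARN and IAM role below are placeholders, and in practice a backup plan would usually schedule these jobs for you.

    import boto3

    backup = boto3.client("backup", region_name="eu-west-3")

    # Back up one EBS volume into a vault; ARNs and the role are placeholders.
    job = backup.start_backup_job(
        BackupVaultName="Default",
        ResourceArn="arn:aws:ec2:eu-west-3:123456789012:volume/vol-0123456789abcdef0",
        IamRoleArn="arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
    )
    print(job["BackupJobId"])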

Azure and backup

As an Azure customer you use the Virtual Machine product, and several options are available to back up your data. Here too, there is a clear separation between the storage disks for data and the instance for processing. The disks can be backed up in several ways:

  1. With a quick snapshot, you manage the location yourself, either in the same region or in another one. You only pay for the data actually stored, not the provisioned space as with a disk. This is the easiest and fastest way to secure test or development data (see the sketch after this list).
  2. You can opt for the redundancy options of managed disks for something more managed.
    Two replica configurations are possible: zone-redundant storage (ZRS), which replicates your disk across availability zones of the same region and gives you the most resilience, but comes with some technical constraints; or locally redundant storage (LRS), which keeps all replicas in the same datacenter for faster reaction times in case of incident, and costs you less.
  3. Azure Backup is the product for production. Your VMs are reliably backed up, everything is managed in one place, it is easy to use, and it offers different levels of replication: in the same datacenter, in the same region, or across multiple regions.
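As a pointer for the first option, here is a minimal sketch of creating a managed-disk snapshot with the Azure SDK for Python; it assumes a recent azure-mgmt-compute and azure-identity, and the subscription, resource group and disk names are placeholders.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient
    from azure.mgmt.compute.models import CreationData, Snapshot

    subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
    compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

    # Look up the managed disk to snapshot (placeholder names).
    disk = compute.disks.get("my-resource-group", "my-vm-os-disk")

    # Long-running operation: copy the disk into a snapshot in the same region.
    poller = compute.snapshots.begin_create_or_update(
        "my-resource-group",
        "my-vm-os-disk-snap",
        Snapshot(
            location=disk.location,
            creation_data=CreationData(create_option="Copy", source_resource_id=disk.id),
        ),
    )
    print(poller.result().id)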

This leads to different price and service levels. Pricing depends on the number of instances backed up and the storage used.

If you want to think bigger and stronger in terms of security, Azure Site Recovery is another possibility, operating at a larger scale, for example replicating an entire application.

 

With Azure, you have many options in front of you; it is probably the provider that offers the most. It can be a bit complicated to find your way around and to know what you need: the analysis must be fine-grained and may change over time with the number of machines, and a budget line should be planned, because backups come at an additional cost.

Disks at GCP

Google's product for creating VMs is Compute Engine; it can be treated as the equivalent of the two previous ones.

The concept is the same, a compute server plus persistent disks, and likewise you have several options in front of you to ensure redundancy and security. Only local SSDs are 100% local by design: they are physically attached to the host and are not replicated.

You have zonal and regional persistent disks, where a region is split into multiple zones: a regional disk is synchronously replicated across two zones of the same region. Regional disks are more resilient, but also more expensive, than zonal ones.
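For illustration, a minimal sketch of snapshotting a zonal persistent disk with the google-cloud-compute client for Python; it assumes a recent version of the library, and the project, zone, disk and snapshot names are placeholders.

    from google.cloud import compute_v1

    disks = compute_v1.DisksClient()

    snapshot = compute_v1.Snapshot()
    snapshot.name = "my-vm-disk-snap-2021-03"  # placeholder snapshot name

    # Long-running operation: snapshot the disk in its zone (placeholder identifiers).
    operation = disks.create_snapshot(
        project="my-project",
        zone="europe-west1-b",
        disk="my-vm-disk",
        snapshot_resource=snapshot,
    )
    operation.result()  # wait for the snapshot to be created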

You can also use Cloud Storage to store your data, a very reliable product that is useful if you need to keep cold data. It is cheaper than disks, but performance is lower and latency higher.
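For example, pushing an exported backup file into a bucket with the google-cloud-storage client is only a few lines; the bucket and file names below are placeholders.

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-backup-bucket")  # placeholder bucket

    # Upload a local disk export; a Coldline or Archive storage class on the bucket
    # keeps the cost of this cold data low.
    blob = bucket.blob("vm-backups/my-vm-disk-2021-03.img")
    blob.upload_from_filename("/tmp/my-vm-disk-2021-03.img")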

Cloud SQL offers the same kind of replication options as Compute Engine instances.

The OVHCloud case

At OVHCloud, backups are of course part of the offer, whether for bare metal, the dedicated private cloud, the public cloud or web hosting.

On the public cloud, for example, you can activate backups: either a backup of the instance in image format, which lets you restart your VM identically (the equivalent of images with other providers), or a backup of the volumes, which contains only the stored data.

Both options are replicated three times and billed at €0.01 excl. tax per GB per month.
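OVHCloud's Public Cloud is OpenStack-based, so one way to take that kind of instance image yourself is through the OpenStack API. A minimal sketch with the openstacksdk library, assuming a clouds.yaml entry named "ovh" and a placeholder server name:

    import openstack

    # Credentials come from a clouds.yaml entry named "ovh" (assumption).
    conn = openstack.connect(cloud="ovh")

    server = conn.compute.find_server("my-vm")  # placeholder server name

    # Snapshot the instance into an image that can relaunch an identical VM.
    image = conn.compute.create_server_image(
        server, name="my-vm-backup-2021-03", wait=True
    )
    print(image.id)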

These are options to enable for each VM you deploy in the cloud and you can rest easy about your data.

On the dedicated private cloud offer, backup is even included in the price for the first 500 GB; you still have to configure the option for it to work.

All providers have this topic covered: there are many solutions (maybe too many), at different cost levels and levels of service quality.

Some can be set up quickly, others are full-fledged products capable of ensuring the backup of an entire infrastructure and associated data.

As a customer, you will need to think carefully, for each of your environments, about the trade-offs between cost, data availability and level of redundancy before making your choice.

That means asking: how critical is my service, and how quickly do I want it back up and running? How much data needs to be stored, and with how much associated traffic (this has a direct impact on the price)? How many replicas do I need for my backups, and in which regions or zones?

The key is to be aware that the risk of data loss or inaccessibility can occur and must be taken into account.

Choosing to do nothing is already a decision.


Test Lota.cloud for free for 30 days