Jonathan Harris, Cobweb Solutions Architect, writes …
Yes, running business workloads in a public cloud such as AWS or Azure has many benefits – not least no longer having to worry about buying, housing and maintaining your own physical infrastructure.
However, it won’t protect you from a badly architected solution!
There are a number of areas that could cause you sleepless nights. The item I would like to concentrate on in this article is Disaster Recovery.
Yes, the infrastructure in the cloud is fault tolerant and, yes, backup repositories are stored as three copies within the local region (LRS) and can even be replicated to another region (GRS). However, this doesn’t mean that you can rest easy in the event of a disaster like a fire, such as the one at OVHcloud’s data centre in Strasbourg.
One of the benefits of running workloads in the cloud is that Disaster Recovery planning and testing is much easier than it was before.
Gone are the days of buying duplicate hardware and restoring data from tapes. Now workloads can be replicated to different parts of the world, tested in isolation from the live environment and then removed. This allows you to test both how quickly you can be up and running after a disaster (the Recovery Time Objective, or RTO) and how far back the point in time is from which data is restored (the Recovery Point Objective, or RPO). For Microsoft Azure, this function is provided by Azure Site Recovery.
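To make the RTO/RPO idea concrete, here is a minimal sketch of how the results of a DR test might be checked against business targets. All timestamps and targets are invented for the example; in practice they would come from your own test-failover records, not from any particular Azure Site Recovery output.

```python
from datetime import datetime, timedelta

# Hypothetical results from a DR test run (made-up values).
disaster_time = datetime(2024, 3, 1, 9, 0)       # when the "disaster" was declared
service_restored = datetime(2024, 3, 1, 10, 30)  # when workloads were running again
last_good_replica = datetime(2024, 3, 1, 8, 45)  # recovery point the restore used

# Assumed business targets for this example.
rto_target = timedelta(hours=2)
rpo_target = timedelta(minutes=30)

# RTO: how long until the service was back. RPO: how much data window was lost.
achieved_rto = service_restored - disaster_time   # 1 hour 30 minutes
achieved_rpo = disaster_time - last_good_replica  # 15 minutes

print(f"RTO met: {achieved_rto <= rto_target}")  # → RTO met: True
print(f"RPO met: {achieved_rpo <= rpo_target}")  # → RPO met: True
```

A regular test like this turns RTO and RPO from numbers in a document into figures you have actually measured.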
That said, invoking a disaster recovery plan is not something to action without thought. Moving workloads from one region to another brings its own challenges, particularly around public IP addresses. Public IP addresses are not transferred when an environment is replicated from one region to another, so anything that references a public IP address directly, rather than a DNS name, needs special consideration. Azure Traffic Manager can help here.
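The public IP problem can be illustrated with a tiny simulation. This is not the Traffic Manager API – just a made-up DNS table (hypothetical name and IPs) showing why clients that reference names follow a failover, while clients that baked in an IP do not.

```python
# Illustrative only: a toy DNS table standing in for a Traffic Manager-style
# name-based routing layer. Name and IP addresses are invented.
dns = {"app.contoso.example": "20.0.0.1"}  # primary region's public IP

hardcoded_ip = "20.0.0.1"  # a client that stored the IP address directly

# Disaster strikes: the environment fails over to another region and comes
# back with a NEW public IP. The routing layer updates the DNS name.
dns["app.contoso.example"] = "40.0.0.2"  # secondary region's public IP

print(dns["app.contoso.example"])  # name-based clients reach the new region
print(hardcoded_ip)                # IP-based clients still point at the dead one
```

The design lesson is simply to put a name between clients and addresses, so that the failover process only has to update one record rather than every consumer.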
Cobweb can help with Disaster Recovery planning and testing – uncovering your RTO and RPO requirements and helping you build Disaster Recovery plans.
Call 0333 009 5941 or email firstname.lastname@example.org for more information.