IDG Contributor Network: Replication isn’t data protection. Here’s why

As data sets explode, more organizations are turning to disk-to-disk replication as an alternative to traditional tape backup workflows. Unfortunately, the benefits are often outweighed by the risks and the operational overhead required to use replication effectively, and replication by itself is not enough to protect your data.

Why do organizations use replication?

Replication often becomes the data protection strategy of choice when legacy NDMP-based backup solutions buckle under the demands of protecting unstructured data at scale. At hundreds of terabytes or petabytes, backup windows stretch into working hours and degrade primary storage performance, interfering with user activity and business functions. At that point, the only way to prevent a daily hit to the business is to cancel the backup entirely, exposing the organization to the risk of data loss.
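
A quick back-of-envelope calculation shows why the windows blow out at this scale. The figures below (dataset size, number of NDMP streams, per-stream throughput) are hypothetical, not taken from any particular environment:

    # Rough full-backup window estimate; figures are hypothetical.
    dataset_tb = 500         # unstructured data to protect
    streams = 4              # parallel NDMP streams
    throughput_mb_s = 200    # sustained throughput per stream, in MB/s

    total_mb = dataset_tb * 1024 * 1024
    hours = total_mb / (streams * throughput_mb_s) / 3600
    print(f"Full backup window: ~{hours:.0f} hours")   # ~182 hours, i.e. more than a week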

In short, the traditional approach of backing up to tape becomes too cumbersome, with costs and operational overhead rising as data volumes grow.

To mitigate this, organizations often strike a balance between tape and replication, knowing that with replication they are giving up some of the benefits and failsafes of a true data protection strategy.

What is the difference between backup and replication?

While backup and replication are both, in essence, ways of making copies of your data, there is a key difference. Replication maintains a single, current copy of the data, not a versioned one. Backup is a versioned copy that captures your system and its history, so you can go back to how the system looked at a point in the past.

With replication, every change to your primary data is immediately replicated to the secondary location. If you do not use snapshots or another method to build in versioning, you lose the ability to view past versions of your system. And even snapshots have disadvantages that keep replication from being a fully fledged data protection solution.
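
To make the distinction concrete, here is a minimal sketch in Python. It is purely illustrative: the dictionaries stand in for storage systems and are not any vendor's API. The replica only ever holds the latest state, while the versioned backup keeps restorable history.

    # Illustrative only: a replica mirrors the latest state of the primary,
    # while a versioned backup preserves point-in-time copies.
    replica = {}     # replication target: one current copy
    backups = []     # backup target: a list of point-in-time copies

    def replicate(primary):
        replica.clear()
        replica.update(primary)        # every change, including corruption, is mirrored

    def back_up(primary):
        backups.append(dict(primary))  # each run preserves a restorable version

    primary = {"report.doc": "v1"}
    replicate(primary); back_up(primary)

    primary["report.doc"] = "corrupted"   # accidental damage on the primary
    replicate(primary); back_up(primary)

    print(replica["report.doc"])       # "corrupted" -- the only copy you have left
    print(backups[0]["report.doc"])    # "v1" -- an earlier version still exists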

What are the pitfalls of using replication as data protection?

The problem with snapshots

As mentioned previously, one way organizations mitigate these challenges is by using snapshots to capture versioning. The downside is that, over time, retaining snapshots consumes large amounts of storage. Eventually it becomes prohibitively expensive, and once you exceed your storage limit you lose the ability to view your data at any point earlier than the oldest snapshot you can keep. Snapshot management thus becomes its own manual, time-consuming project for anyone using this strategy.
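
As a rough illustration, assume a hypothetical 1 PB primary system with a 2 percent daily change rate and 90 days of daily snapshots retained. Even this simplified model, which ignores overlap between changed blocks, shows how quickly the overhead grows:

    # Simplified snapshot-retention estimate; figures are hypothetical.
    primary_tb = 1024      # ~1 PB of primary data
    daily_change = 0.02    # 2% of the data changes per day
    retention_days = 90    # keep 90 daily snapshots

    overhead_tb = primary_tb * daily_change * retention_days
    print(f"Snapshot overhead: ~{overhead_tb:.0f} TB "
          f"({overhead_tb / primary_tb:.0%} of primary capacity)")
    # ~1843 TB, i.e. nearly twice the primary system, just to keep 90 days of history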

Corruption

Because replication usually runs continuously, any issue in the primary copy is replicated over as well, including corruption and accidental deletions. As mentioned previously, organizations can take additional measures, such as snapshots, to protect against this possibility, but snapshots have their own limitations.

Loss of forensics

In the event that you have to restore, you often lose the forensics around the problem. Once the data is restored, there is rarely enough space to preserve what the data looked like before the restoration, so the history of what happened is lost.

Lack of visibility

Most replication solutions lack the operational rigor of the traditional backup world, so administrators may not notice when replication jobs fail or fall behind. Jobs can lag by days or weeks before anyone notices, leaving your data vulnerable to loss.
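
Here is a minimal sketch of the kind of lag check that is often missing. The job records and the 24-hour threshold are assumptions for illustration, not any vendor's monitoring API:

    # Flag replication jobs whose last successful run is older than a threshold.
    from datetime import datetime, timedelta

    MAX_LAG = timedelta(hours=24)   # assumed acceptable lag

    def lagging_jobs(jobs, now=None):
        now = now or datetime.now()
        return [j for j in jobs if now - j["last_success"] > MAX_LAG]

    jobs = [
        {"name": "nas01->dr-site", "last_success": datetime.now() - timedelta(hours=3)},
        {"name": "nas02->dr-site", "last_success": datetime.now() - timedelta(days=9)},
    ]

    for job in lagging_jobs(jobs):
        print(f"ALERT: {job['name']} has not completed successfully in over 24 hours")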

Restore time with disaster events

Because replicating and storing a second copy of all your data is very expensive, enterprise IT typically replicates to only a single offsite location. If a major disaster strikes the primary site, it will take an extremely long time to transfer that data back and restore it, especially in high-capacity, unstructured data environments.
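
A back-of-envelope calculation shows the scale of the problem. The figures here (1 PB to pull back over a 10 Gbit/s link at 70 percent effective utilization) are hypothetical:

    # Rough restore-transfer time after losing the primary site; figures are hypothetical.
    data_pb = 1            # 1 PB to bring back from the offsite copy
    link_gbps = 10         # 10 Gbit/s WAN link
    efficiency = 0.7       # effective utilization of the link

    data_bits = data_pb * 10**15 * 8
    seconds = data_bits / (link_gbps * 10**9 * efficiency)
    print(f"Restore transfer time: ~{seconds / 86400:.0f} days")   # ~13 days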

Management and monitoring required

Because replication does not capture every version of the data, enterprise IT must monitor replication jobs closely for errors that could corrupt the data or leave it vulnerable to loss. One red flag to watch for is a replication job that suddenly grows in size, which can indicate a corruption event.

Normally, with replication, that corruption is simply absorbed and shipped over, because enterprise IT does not catch the change in rate. Monitoring your typical change rate can give you an early warning that something is wrong, so you don’t delete your old backups before investigating. Building in this level of monitoring is essential if you use replication to protect your data.
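
A minimal sketch of such a change-rate check follows; the spike threshold and the nightly job sizes are assumptions for illustration only:

    # Flag a replication job whose transferred size spikes against its recent history.
    from statistics import mean

    SPIKE_FACTOR = 3   # assumed threshold: alert at 3x the recent average

    def change_rate_spiked(recent_gb, latest_gb):
        return latest_gb > SPIKE_FACTOR * mean(recent_gb)

    recent_nightly_gb = [120, 135, 110, 128, 140]   # typical nightly deltas
    if change_rate_spiked(recent_nightly_gb, latest_gb=950):
        print("ALERT: change rate spiked; investigate before expiring older copies")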

Operationally, this poses a number of challenges. Each vendor on your primary tier has its own replication solution, such as SnapMirror or SyncIQ, which means that monitoring must be customized and rebuilt for every piece of the primary tier. One vendor may not offer the same level of reporting as another, leading to inconsistent monitoring.

Protecting against data loss and corruption with replication requires so much monitoring that organizations may need an entire team dedicated to watching for failed jobs, or risk not noticing them. The advantages over tape that an organization hopes to gain from replication are often outweighed by the resources it must sink into preventing data corruption and loss, and by the elevated risk it takes on.

So what is an organization with massive volumes of data to do when its legacy infrastructure can’t handle its workflows, but replication poses too great a risk and too heavy a drain on resources?

What might a true data protection strategy involve?

It may be time to consider a modern approach to data protection that’s built for data at scale. That means a data movement engine that, unlike traditional NDMP-based backup, can move massive volumes of data effectively without impacting primary tiers, along with true data protection capabilities such as versioning, easy restore, and automated monitoring. A solution delivered as a service also takes the management overhead off enterprise IT teams, reducing TCO.

Replication may seem like a “good enough” fix or even the only data protection solution for organizations that have outgrown their traditional NDMP-based backup and tape workflows, but there are modern alternatives that are cost-effective, simple to implement, and most important, truly protect your data—the livelihood of your business.

This article is published as part of the IDG Contributor Network.
