Backup is an under-loved storage workload. It is a classic cost center: heavy on cost and complexity, there to save you from disasters, yet contributing not a dime to your firm's top-line revenue. So careful scrutiny of the cost and complexity of backup workflows is always justified. It was 2008 when DataDomain introduced deduplication, a magic form of compression that could deliver greater than 10:1 capacity savings in backup environments. On a good day, costs were merely on par with tape, but the RPO and RTO improvements made disk-to-disk a no-brainer for enterprises.
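To see where those 10:1 savings come from, here is a minimal sketch of content-addressed, fixed-size-chunk deduplication (real products use variable-size chunking and persistent indexes; the chunk size and data set below are illustrative assumptions):

```python
import hashlib
import random

def dedupe(stream: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and keep only unique ones,
    indexed by their SHA-256 fingerprint (content addressing)."""
    store = {}   # fingerprint -> unique chunk payload (physical data)
    recipe = []  # ordered fingerprints needed to rebuild the stream
    for i in range(0, len(stream), chunk_size):
        chunk = stream[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)   # store the chunk only once
        recipe.append(fp)
    return store, recipe

# Repeated full backups are mostly identical data: simulate ten
# backups of the same (pseudo-random) 1 MiB data set.
random.seed(0)
data = random.randbytes(1 << 20) * 10
store, recipe = dedupe(data)
logical = len(data)                            # what the app wrote
physical = sum(len(c) for c in store.values()) # what we had to keep
print(f"dedupe ratio: {logical / physical:.0f}:1")  # dedupe ratio: 10:1
```

The `recipe` list is what a restore would walk: each fingerprint looks up its chunk in `store`, reproducing the original stream byte for byte.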
But computing and storage trends continue to march forward, and even with modern backup solutions, customers are grappling with a variety of challenges related to cost, scalability, administrative hassle, and cloud flexibility.
1) For many customers, cost expectations have changed. DataDomain’s proprietary appliance is too expensive, and there’s no expectation that industry consolidation will bring relief. Dedupe is a pervasive feature now, and further cost reduction will come from technologies like software-defined storage (SDS).
2) Scale continues to challenge enterprise customers, who are dealing with racks of discrete D2D systems.
3) Distributed IT environments are another challenge for backup: ROBO (remote office/branch office) networks and M&A activity result not only in dispersed, heterogeneous facilities but also in administrative burdens, including complex tracking of DR relationships between sites. A single solution spanning the WAN would eliminate a ton of overhead.
4) Cloud is attractive for its WAN ubiquity and scalability, so services like AWS S3 are near the top of the list in many customer investigations. But performance and data governance can be real risks. If RTO falls off a cliff due to WAN latencies, is that the right answer?
5) Many ISVs and appliance vendors are introducing tier-to-cloud functionality for cold data, but tier-to-cloud is a static topology with rigid primary/secondary roles. These solutions lack the flexibility to support the 'run anywhere' strategy of the modern enterprise: backup and restore performance should be consistent regardless of where the host application resides.
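The RTO concern in point 4 can be made concrete with back-of-the-envelope arithmetic. The data set size, link speed, and efficiency factor below are hypothetical, but the shape of the math is what matters:

```python
def restore_hours(data_tb: float, wan_gbps: float, efficiency: float = 0.7) -> float:
    """Naive restore-time estimate: total bits over effective WAN throughput.
    `efficiency` is an assumed discount for protocol overhead and latency."""
    bits = data_tb * 1e12 * 8                     # TB -> bits
    seconds = bits / (wan_gbps * 1e9 * efficiency)
    return seconds / 3600

# Restoring a (hypothetical) 50 TB backup set over a 1 Gbps WAN link
# at 70% efficiency:
print(f"{restore_hours(50, 1):.0f} hours")  # 159 hours, i.e. nearly a week
```

When the same restore from a local disk target finishes in hours rather than days, "just put it in the cloud" stops looking like a complete answer.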
The answer is a private cloud that spans heterogeneous resources, supports highly dispersed IT environments, and incorporates public cloud resources while mitigating latency issues. NooBaa has been getting lots of interest in this area for the reasons above, as well as for being low-cost and very easy to install, operate, and scale. NooBaa can start in a VM environment and scale as needed to consume local hosts, distributed datacenter and ROBO resources, and public cloud.
This means that for large-scale backup environments, customers can maintain a backup-target technology that spans geographies, supports public cloud in a bi-directional model, provides global deduplication and encryption, and does it all under a single pane of glass.
We’re excited about the prospects and are actively soliciting feedback from backup users.