When deciding on a storage architecture appropriate for your organization, a CIO wants answers to many questions: Is it built to scale? How secure is the data? Can data be accessed fast enough that performance is not hit by bottlenecks? And what is the total cost of ownership, given shrinking IT budgets and the pressure to control costs and increase efficiency?
Technological factors such as compression, deduplication of data, input/output capability of the back-end, throughput performance at the front-end, efficient utilization of capacity and automated storage tiering will help inform these answers and have to be weighed up when making a decision.
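One of those factors, data deduplication, can be made concrete in a few lines: split incoming data into chunks, hash each chunk, and keep only one stored copy per unique hash. This is a minimal sketch, not any vendor's implementation; the fixed 4 KiB chunk size and the SHA-256 hash are assumptions chosen for illustration.

```python
import hashlib

CHUNK_SIZE = 4096  # bytes; real arrays often use variable-length chunking

def dedupe(data: bytes):
    store = {}   # hash -> chunk: the unique-chunk store
    recipe = []  # ordered list of hashes needed to reconstruct the data
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # keep only the first copy
        recipe.append(digest)
    return store, recipe

def restore(store, recipe) -> bytes:
    # Rebuild the original byte stream from the recipe of hashes
    return b"".join(store[d] for d in recipe)

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated 4 KiB blocks
store, recipe = dedupe(data)
print(len(recipe), len(store))  # 4 logical chunks, only 2 unique chunks stored
assert restore(store, recipe) == data
```

The efficiency gain is the ratio of logical chunks to unique chunks stored; real systems trade chunking granularity against metadata overhead.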
However, as IT platforms change, trends such as cloud gather pace and storage technologies evolve, it is advisable to concentrate the organizational mind on a few apposite questions about which storage architecture to invest in.
It is also important to consider whether any storage architecture suits the unique organizational needs – whatever they are currently and might be in the future.
Most organizations run mixed systems with legacy architectures that have grown up over the years, and so have to manage a heterogeneous environment.
Organizations are not in a position to decommission existing systems and they want to be able to take advantage of the latest developments in storage technologies. Any new storage architecture must be able to work with what is in place already.
In mature organizations there will be competing demands on storage from diverse systems and different business units that need access to data around the clock. No one application should suffer at the expense of the other in a complex business and operational environment.
Clive Longbottom, service director at analyst group Quocirca, says a key question to ask when deciding on storage architecture is: How does your storage deal with mixed workloads?
He says CIOs should ask any prospective vendor, “Can I throw databases, files and other workloads at your array and be guaranteed that all workloads will be dealt with as per my needs, and that one workload won't impact the performance of any other workload?”
Mike Matchett, a senior analyst and consultant at Taneja Group, predicts the future of data storage architectures in 2016 will build on software-defined products, which took off in 2015.
“I predicted a great future for software-defined products and the hyperconverged appliances they enable. But I will admit that the phrase ‘software-defined’ has lost much of its importance as every marketing genius reasons that their product includes software, so it must also be software-defined. Still, our research shows that more than 30% of enterprise respondents are envisioning hyperconvergence as their future datacenter architecture,” he says.
Longbottom says another important question to ask when deciding on storage architecture is: How do you deal with failure? Storage architectures underpin an organization’s IT environment and are critical to the smooth running of the organization and its ability to do business and remain competitive. As organizations grow and become more reliant on ever-increasing data, structured and unstructured, they do not want a failure to impact the organization.
Preparing for failure
In a global economy, collaboration is important and data needs to be shared, which means reliability and disaster recovery, as well as security and data governance are critically important.
Failures are inevitable, but Longbottom says that asking how they are dealt with can help an organization make the right architectural choice.
“Is dealing with failure down to a backup-and-restore policy, live mirroring, or a fail-in-place strategy, where your array, and the larger storage namespace, can cope with multiple failures in one array without the overall storage system failing?” he says.
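A fail-in-place design of the kind Longbottom describes can be illustrated with a toy replicated pool: each object is written to three distinct drives, so reads keep succeeding while drives go dark, with no immediate rebuild. The drive count and replica factor here are invented for the sketch and are not drawn from any particular product.

```python
import random

REPLICAS = 3  # assumed replica factor for the example

class StoragePool:
    def __init__(self, num_drives: int):
        self.drives = [dict() for _ in range(num_drives)]  # drive -> {key: data}
        self.failed = set()

    def put(self, key, value):
        # Place replicas on REPLICAS distinct drives
        for d in random.sample(range(len(self.drives)), REPLICAS):
            self.drives[d][key] = value

    def fail_drive(self, d):
        # Fail in place: the drive simply goes dark, nothing is rebuilt
        self.failed.add(d)

    def get(self, key):
        # Any one surviving replica is enough to serve the read
        for d, drive in enumerate(self.drives):
            if d not in self.failed and key in drive:
                return drive[key]
        raise IOError(f"all replicas of {key!r} lost")

pool = StoragePool(num_drives=8)
pool.put("invoice-2016.db", b"...")
pool.fail_drive(0)
pool.fail_drive(1)  # two concurrent failures; a replica still survives
print(pool.get("invoice-2016.db"))
```

With three replicas on distinct drives, any two failures leave at least one copy readable; production systems typically use erasure coding rather than full replication to get similar resilience at lower capacity cost.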
No organization wants to risk being locked into a proprietary platform that cannot support future developments in storage. Longbottom suggests CIOs ask any prospective vendors: How do you deal with the future?
“The leap from magnetic to flash-based systems has been a major step in improved performance,” he says. However, there are other technologies already here or just around the corner that an organization may want to consider.
Examples include Micron 3D XPoint, NVMe (non-volatile memory express), PCIe (peripheral component interconnect express) and DIMMs (dual in-line memory modules).
Longbottom recommends asking, “Will I need to do continuous forklift upgrades as these dynamics happen, or will your strategy enable me to continuously embrace such change?”
Cost and efficiency
When it comes to making a decision based on the answers to questions about how a storage architecture deals with mixed workloads, failure and the future, the question of cost and efficiency should inform any response.
Taneja Group’s Matchett forecasts that organizations will ensure that operational expenditure is a critical consideration for any choice in storage architecture.
“The operational expenditure of storage – including provisioning, troubleshooting, maintenance, upholding availability and performance SLAs, migration/transitions, ensuring compliance, and so on – will become a more important investment consideration,” he says.
He points out that storage media prices continue to fall and capacity efficiencies continue to evolve, while IT departments get better at tracking and exposing total storage costs as they build their own clouds to compete with outside alternatives.
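The trade-off Matchett describes can be reduced to a toy total-cost-of-ownership comparison, assuming his opex categories (provisioning, troubleshooting, maintenance, SLAs, migrations, compliance) can be rolled into one annual figure. Every number below is invented for illustration.

```python
def tco(capex: float, annual_opex: float, years: int) -> float:
    # Simplest possible TCO model: purchase price plus flat annual
    # operating cost over the array's service life
    return capex + annual_opex * years

# A cheaper array with heavy manual operations vs a pricier one with
# more built-in automation, over a five-year life:
array_a = tco(capex=200_000, annual_opex=90_000, years=5)
array_b = tco(capex=300_000, annual_opex=40_000, years=5)
print(array_a, array_b)  # 650000 vs 500000: the dearer array wins on TCO
```

Even this crude model shows why a higher purchase price can still be the cheaper choice once operational expenditure dominates.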
Organizations are becoming savvier about how a storage architecture should work for them, its business benefits and its cost-effectiveness, and they need a partner with a sympathetic mindset.
“As IT evaluates new storage products, they are recognizing that the only way to do more while running lean and mean is to demand increasing end-to-end automation and built-in intelligence,” he says.