Fundamental technology shifts will force CIOs to rethink their approaches to storage, says NetApp’s Dan Warmenhoven.
Misunderstanding is rife about the real value of storage in a modern IT infrastructure. The conventional wisdom sees storage as an extension of the server environment, acting as an add-on piece of hardware.
In fact, what really makes the difference in today’s IT environments is sophisticated software that enables IT professionals to switch their focus away from disks and capacity and onto data management issues and tight integration with server-based applications and services. CIOs must be able to make that shift if their companies are going to keep up with trends in data management.
Many approaches are emerging to help organizations take a big “blob” of data — so-called big data — and mine it to gather intelligence in a more cost-effective and high-performance way. The most promising of these is Hadoop, but it presents a big challenge for storage infrastructure and data management specialists. One of our major challenges at NetApp is to figure out how to make the data management services we’ve designed to run on our infrastructure suitable for Hadoop environments. That might involve taking some of the open source code that analyzes data, for example, and putting it to work inside our storage systems, instead of the usual approach of bringing all that data back into the server farm for processing.
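The idea of running analysis inside the storage system, rather than hauling every byte back to the server farm, can be illustrated with a toy sketch. This is not NetApp's or Hadoop's actual code; the function and record names are hypothetical, and it simply shows why pushing a filter down to where the data lives reduces what must cross the network.

```python
# Hypothetical sketch of "compute-to-data": run the filter on the
# storage node and ship only the matches, instead of shipping every
# record to the compute tier. All names here are illustrative.

def storage_side_scan(records, predicate):
    """Evaluate the predicate where the data lives; return only matches."""
    return [r for r in records if predicate(r)]

# Records resident on a (simulated) storage node.
storage_node = [
    {"user": "a", "bytes": 512},
    {"user": "b", "bytes": 2048},
    {"user": "a", "bytes": 4096},
]

# Push the predicate down: only 2 of the 3 records cross the wire.
hot_records = storage_side_scan(storage_node, lambda r: r["bytes"] > 1024)
```

The saving is modest with three records, but the same pattern applied to petabyte-scale data sets is what makes in-storage analysis attractive.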
Another emerging trend is global namespace scale-out storage architectures. For the past 20 years, the storage world has been dominated by the concept of the disk volume as a unit of capacity. As volumes of data grow ever larger, however, businesses need to be able to build an infinitely scalable “pool” of storage — one great big ocean you can throw everything into, which scales out instead of up.
In order to retrieve storage objects from that ocean, however, you need a different approach to file management. This is where the global namespace comes into play, federating many file systems and storage devices so that data can be accessed regardless of its physical location. In much the same way that the Domain Name System (DNS) provides a network directory, a global namespace provides a directory service for storage objects.
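The DNS analogy can be made concrete with a minimal sketch: a directory that resolves a logical path to the device where the object currently lives, just as DNS maps a hostname to an address. The device names, paths, and `resolve` function below are all hypothetical, not any real product's API.

```python
# Illustrative global-namespace directory: logical path -> (device,
# physical path). Clients use the logical path; the namespace decides
# where the object actually lives. All names are made up.

namespace = {
    "/projects/report.doc": ("filer-03", "/vol/vol7/report.doc"),
    "/projects/data.csv":   ("filer-11", "/vol/vol2/data.csv"),
}

def resolve(logical_path):
    """Return (device, physical_path) for a logical path."""
    try:
        return namespace[logical_path]
    except KeyError:
        raise FileNotFoundError(logical_path)

device, physical = resolve("/projects/report.doc")
```

Because clients only ever see the logical path, the object can later migrate from `filer-03` to another device by updating the directory entry, with no change visible to applications.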
Finally, we come to cloud computing. The future of cloud lies in customers taking applications from their own environment and moving them onto a service provider’s infrastructure. For that to happen, though, those customers need to start virtualizing their own application environments.
So how should CIOs prepare their storage architectures for this future? My advice would be to ensure those architectures are extremely flexible. The first step is unified storage, which imposes some degree of uniformity across the storage infrastructure. The second step is to think about cloud computing not just as a short-term fix but as a long-term strategy.
CIOs who look to the cloud to liberate them from IT asset management and free them up for application development and business-value creation are the ones who will be successful. But if you’re a server-hugger, be warned: your days are numbered.
Dan Warmenhoven is executive chairman of networked storage solutions company NetApp, where he is responsible for building and expanding the company’s relationships with key strategic partners, including its global system partner, Fujitsu.