Service Management 360

OpenStack: Bringing the open philosophy to cloud computing

It’s clear that cloud has become a foundational platform for service delivery, and that trend is only going to continue. The real question is one of specifics: whether organizations will get the best use out of cloud, given that the technology enabling cloud delivery is very much still a work in progress.

At IBM, we’re interested in accelerating that process. We’ve been steadily bringing more capabilities to cloud, prioritizing what organizations need most today and what they’re expected to need in the near future. And we’re pursuing that agenda guided by a basic set of principles.

For instance, we have long been champions of the concept of open: open standards, open development methodology, and of course, open source code. Need an example? Think of all the ways Linux has transformed data centers as a foundational technology, and how Linux has always been developed in an extraordinarily open way, so that it’s owned and controlled by no single vendor. IBM is among the world leaders in Linux support and development.

This same principle is driving much of what we’re doing in cloud—perhaps best embodied by the fact that IBM is a founding Platinum member of the OpenStack Foundation. What Linux did for operating systems, OpenStack is doing for cloud.

Built from contributions by engineers and organizations all over the world, OpenStack is on the path to becoming the ubiquitous platform for managing cloud infrastructure. It’s already a great platform for service delivery, and it’s getting better by the day.

OpenStack’s storage is rapidly maturing

My particular area of interest in OpenStack development lies in storage management. It’s an area that deserves more attention than it typically receives. Many IT professionals, even now, think primarily of server technology when they think of cloud—hypervisors, image libraries, provisioning. But storage is equally critical; without persistent storage, there would be no data to analyze, no records of items bought, no memory of a client’s profile. Just as processors and memory should be allocated in the smartest possible way in a cloud, so too should data storage.

You could even make the case that because data-centric challenges are emerging and changing more rapidly than others, storage management is one of the most critical aspects of cloud to get right.

Among those challenges:

• Data volumes are ramping up at an incredible pace. If OpenStack is to be a foundational cloud platform, it must include leading storage management capabilities guided by established best practices. Rather than reinvent the wheel, we need to take what we’ve learned and apply it in the best way we can. For instance, it is important to be able to provision the right type of storage for a particular workload and, when the needs of the workload change, to change the underlying storage as well. Because OpenStack is a cloud platform, all of this needs to happen through simple self-provisioning interfaces, with no administrator involvement (see the sketch after this list). And since different workloads have different needs, OpenStack has the advantage of letting a cloud administrator easily integrate a wide range of existing storage solutions, without the organization needing to buy a new storage system.

• Organizations are more dependent on data than they’ve ever been. The consequences of lost data have always been daunting; today they’re practically terrifying. Even data that is only temporarily unavailable can have a devastating impact on the business. And in a cloud context, where so many functions are automated, it’s critical that key tasks like backup and recovery be executed not just automatically, but in the fastest, most reliable way possible. Since very few organizations are starting from a green field, OpenStack needs not only to automate data protection, but to do so using the organization’s choice of data and storage management tools.

• Restoring cloud services following a data center disaster—for example a flood or an earthquake—requires spanning multiple geographically distributed sites. This could be two sites running private OpenStack clouds or a hybrid configuration with one site in a public cloud or perhaps even two public or hosted clouds. Data obviously must be replicated in these multiple locations, and this replication must occur at the right frequency for the workload, as the cloud applications generate new data. But recovering from a disaster is about much more than just bringing back data per se. It’s also about bringing back the cloud infrastructure that uses all that data—the virtual servers, for instance, that include operating systems, middleware, drivers, applications and other software elements. And furthermore, the infrastructure should be restored in a way that matches the organization’s context. If certain services are deemed more critical, the infrastructure driving those services should be able to recover more quickly. Whatever the organization’s context, OpenStack should be able to handle it.
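To make the self-provisioning idea in the first point concrete, here is a minimal sketch using python-cinderclient, the Python client for OpenStack’s block storage service (Cinder). The endpoint, credentials, and the “gold” and “capacity” volume types are hypothetical placeholders; in practice a cloud administrator defines volume types that map onto whatever storage systems the organization already owns.

```python
# Minimal sketch: self-service block storage provisioning with python-cinderclient.
# The Keystone endpoint, credentials and volume type names below are hypothetical.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from cinderclient import client

auth = v3.Password(auth_url='https://keystone.example.com:5000/v3',
                   username='appteam', password='secret',
                   project_name='appteam',
                   user_domain_name='Default', project_domain_name='Default')
cinder = client.Client('3', session=session.Session(auth=auth))

# Self-provision 100 GB of "gold" storage for a database workload. The volume
# type, defined by the cloud administrator, decides which backend serves it.
vol = cinder.volumes.create(size=100, name='db-data', volume_type='gold')

# Later, when the workload's needs change, retype the volume to a cheaper class.
# The 'on-demand' policy lets Cinder move the data to a different backend
# if the new type cannot be satisfied where the volume currently lives.
cinder.volumes.retype(vol, 'capacity', 'on-demand')
```

No storage administrator is involved at any step: the user asks for a class of service, and the platform maps that request onto the storage it has been given.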

In short, to be a mature, foundational cloud platform, OpenStack should:

• Be entirely open and transparent in development

• Allow organizations to integrate the storage systems of their choice and to self-provision from those systems

• Support proven best practices in data and storage management

• Allow seamless integration with existing enterprise data protection mechanisms

• Easily support an organization’s unique needs to survive a disaster, including cross-site replication

A promising future

Written out like that, it seems a tall order. But that’s what we’re pursuing at IBM, along with other members of the OpenStack Foundation, to make OpenStack the very best platform it can be.

To be clear, it’s not entirely there yet. OpenStack is evolving. But the evolution is fast and getting faster, and at IBM we’re playing a big part in that story.

For instance, did you know that, of all the data and storage management solutions on the market today, IBM’s was the first platform to enable storage-assisted data migration for OpenStack? Data migration is a building block and administrative function that changes the physical placement of storage volumes. By default, OpenStack performs this function by copying the data through the servers from the source to the target. If the underlying storage system knows how to migrate the data itself, however, OpenStack can offload the work to the storage, improving responsiveness and saving host cycles and network bandwidth. With that offload, data migration becomes near-instant and nondisruptive, so application performance is easier to manage.
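For a sense of what that administrative building block looks like from the client side, here is a hedged sketch of the migration call in python-cinderclient (an admin-only operation). The destination backend string is a placeholder, the client and volume objects are the ones from the earlier sketch, and argument details can vary between client releases; the force_host_copy flag is what separates the default host-based copy from letting a capable storage system move the data itself.

```python
# Hedged sketch: admin-driven volume migration. The destination host string is
# hypothetical; the usual format is host@backend#pool.
cinder.volumes.migrate_volume(
    vol,                                  # volume (or ID) to relocate
    'cinder-host@ibm-storage#pool1',      # hypothetical destination backend
    force_host_copy=False,                # False: let the storage array migrate
                                          # the data itself when it knows how
    lock_volume=False)                    # keep the volume usable while it moves
```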

Did you know that, of all the enterprise backup and recovery solutions on the market today, the very first to have drivers included in OpenStack was IBM Tivoli Storage Manager?
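From the caller’s perspective, that integration is invisible: a backup is requested through Cinder’s API, and the operator’s choice of backup driver in cinder.conf (Tivoli Storage Manager, for example) determines where the data actually lands. The sketch below assumes the client and volume objects from the earlier example.

```python
# Minimal sketch: requesting a backup and a restore through Cinder. Which backup
# backend receives the data (for example, the Tivoli Storage Manager driver) is
# an operator choice in cinder.conf, so this client code does not change.
backup = cinder.backups.create(vol.id, name='db-data-nightly')

# Restore later; with no volume_id given, Cinder restores into a new volume.
cinder.restores.restore(backup_id=backup.id)
```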

We’ve also gone the extra mile in ensuring that our own storage solutions are compatible with OpenStack, so that organizations that have already made that investment can deploy OpenStack without concern for whether their storage hardware can make the jump.

Over time, IBM will continue to play a leadership role in integrating increasingly advanced and sophisticated storage management capabilities into OpenStack.

Our goal is to ensure that OpenStack storage is managed in the smartest, most efficient, and most flexible way it can be—always respecting open source ideals, but also integrating with existing enterprise systems and processes, to create as much value as possible.

That’s what I’m working on. Let’s collaborate on the cloud and build a smarter planet.

 

 

