Tuesday, April 13, 2010

Fog Clears on Proper Precautions for Putting More Enterprise Data Safely in Clouds

Transcript of a sponsored BriefingsDirect podcast on how enterprises should approach and guard against data loss when placing sensitive data in cloud computing environments.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today we present a sponsored podcast discussion on managing risks and rewards in the proper placement of enterprise data in cloud computing environments.

Headlines tell us that Internet-based threats are becoming increasingly malicious, damaging, and sophisticated. These reports come just as more companies are adopting cloud practices and placing mission-critical data into cloud hosts, both public and private. Cloud skeptics frequently point to security risks as a reason for cautiously using cloud services. It’s the security around sensitive data that seems to concern many folks inside of enterprises.

There are also regulations and compliance issues that can vary from location to location, country to country, and industry to industry. Yet cloud advocates point to the benefits of systemic security as an outcome of cloud architectures and methods. Defenses and strategies based on cloud computing security solutions should therefore be a priority, and should prompt even more enterprise data to be stored, shared, and analyzed in clouds using strong governance and policy-driven controls.

So, where’s the reality amid the mixed perceptions and vision around cloud-based data? More importantly, what should those evaluating cloud services know about data and security solutions that will help to make their applications and data less vulnerable in general?

We've assembled a panel of HP experts to delve into the dos and don’ts of cloud computing and corporate data. Please join me in welcoming Christian Verstraete, Chief Technology Officer for Manufacturing and Distribution Industries Worldwide at HP. Welcome back, Christian.

Christian Verstraete: Thank you.

Gardner: We’re also here with Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications, including The Definitive Guide to Identity Management, and he's working on a new book, The Concise Guide to Cloud Computing. Welcome back to the show, Archie.

Archie Reed: Hey, Dana. Thanks.

Gardner: It strikes me that companies around the world are already doing a lot of their data and applications activities in what we could loosely call "cloud computing," cloud computing being a very broad subject and the definition being rather flexible.

Let me take this first to you, Archie. Aren’t companies already doing a lot of cloud computing? Don’t they already have a great deal of transactions and data that’s being transferred across the Web, across the Internet, and being hosted on a variety of either internal or external servers?

Difference with cloud

Reed: I would certainly agree with that. In fact, if you look at the history that we’re dealing with here, companies have been doing those sorts of things with outsourcing models, or sharing with partners, or indeed community-type environments, for some time. The big difference with this thing we call cloud computing is that the vendors advancing the space have not developed comprehensive service level agreements (SLAs), terms of service, and those sorts of things, or are riding on very thin security guarantees.

Therefore, when we start to think about all the attributes of cloud computing -- elasticity, speed of provisioning, and those sorts of things -- the way in which a lot of companies that are offering cloud services get those capabilities, at least today, is by minimizing or doing away with security and protection mechanisms, as well as some of the other guarantees of service levels. That’s not to dismiss their capabilities, their up-time, or anything like that, but the guarantees are not there.

So that arguably is a big difference that I see here. The point that I generally make around the concerns is that companies should not just declare cloud, cloud services, or cloud computing secure or insecure.

It’s all about context and risk analysis. By that, I mean that you need to have a clear understanding of what you’re getting for what price and the risks associated with that and then create a vision about what you want and need from the cloud services. Then, you can put in the security implications of what it is that you’re looking at.

Gardner: Christian, it seems as if we have more organizations that are saying, "We can provide cloud services," even though those services have been things that have been done for many years by other types of companies. But we also have enterprises seeking to do more types of applications and data-driven activities via these cloud providers.

So, we’re expanding the universe, if you will, of both types of people involved with providing cloud services and types of data and applications that we would use in a cloud model. How risky is it, from your perspective, for organizations to start having more providers and more applications and data involved?

Verstraete: People need to look at the cloud with their eyes wide open. I'm sorry for the stupid wordplay, but the cloud is very foggy, in the sense that there are a lot of unknowns, when you start and when you subscribe to a cloud service. Archie talked about the very limited SLAs, the very limited pieces of information that you receive on the one hand.

On the other hand, when you go for service, there is often a whole supply chain of companies that are actually going to join forces to deliver you that service, and there's no visibility of what actually happens in there.

Considering the risk

I’m not saying that people shouldn't go to the cloud. I actually believe that the cloud is something that is very useful for companies to do things that they have not done in the past -- and I’ll give a couple of examples in a minute. But they should really assess what type of data they actually want to put in the cloud, how risky it would be if that data got public in one way, form, or shape, and assess what the implications are.

As companies are required to work more closely with the rest of their ecosystem, cloud services are an easy way to do that. It’s a concept that is reasonably well-known under the label of community cloud. It’s one of those that is actually starting to pop up.

A lot of companies are interested in doing that sort of thing and are interested in putting data in the cloud to achieve that and address some of the new needs that they have due to the fact that they become leaner in their operations, they become more global, and they're required to work much more closely with their suppliers, their distribution partners, and everybody else.

It’s really about understanding, on one hand, what you're getting into and, on the other, assessing what makes sense and what doesn’t, what’s really critical for you and what is less critical.

Gardner: Archie, it sounds as if we’re in a game of catch-up, where the enticements of the benefits of cloud computing have gotten ahead of the due diligence and managing of the complexity that goes along with it. If you subscribe to that, then perhaps you could help us in understanding how we can start to close that gap.

To me, one recent example was at the RSA Conference in San Francisco, where the Cloud Security Alliance (CSA) came out with a statement that said, "Here’s what we have to do, and here are the steps that need to be taken." I know that HP was active in that. Tell me if you think we have a gap and how the CSA thinks we can close it.

Reed: We’re definitely in a situation where a number of folks are rushing toward the cloud on the promise of cost savings and things like that. In fact, in some cases, people are generally finding that as they realize they have risk, more risk than they thought they did, they’re actually stepping back a little bit and reevaluating things.

A prime example of this was just last week, a week after the RSA Conference, the General Services Administration (GSA) here in the U.S. actually withdrew a blanket purchase order (BPO) for cloud computing services that they had put out only 11 months before.

They gave two reasons for that. The first reason was that technology had advanced so much in that 11 months that their original purchase order was not as applicable as it was at that time. But the second reason, perhaps more applicable to this conversation, was that they had not correctly addressed security concerns in that particular BPO.

Take a step back

In that case, it shows we can rush toward this stuff on promises, but once we really start to get into the cloud, we see what a mess it can be and we take a step back. As far as the CSA, HP was there at the founding. We did sponsor research that was announced at RSA around the top threats to cloud computing.

We spoke about what we called the seven deadly sins of cloud. Just fortuitously, we came up with seven at the time. I will point out that this analysis was also focused more on the technical than on specific business risk. But, one of the threats was data loss or leakage. In that, you have examples such as insufficient authentication, authorization, and all that, but also lack of encryption or inconsistent use of encryption, operational failures, and data center reliability. All these things point to how to protect the data.

One of the key things we put forward as part of the CSA was to try and draw out key areas that people need to focus on as they consider the cloud and try and deliver on the promises of what cloud brings to the market.

Gardner: Correct me if I am wrong, but one of the points that the CSA made was the notion that, by considering cloud computing environments, methodologies, and scenarios, you can actually improve your general control and management of data by moving in this direction. Do you subscribe to that?

Reed: Although cloud introduces new capabilities and new options for getting services, commonly referred to as infrastructure or platform or software, the posture of a company does not need to necessarily change significantly -- and I'll say this very carefully -- from what it should be. A lot of companies do not have a good security posture.

When we talk to folks about how to manage their approach to cloud or security in general, we have a very simple philosophy. We put out a high-level strategy called HP Secure Advantage, and it has three tenets. The first is to protect the data. We go a lot into data classification, data protection mechanisms, the privacy management, and those sorts of things.

The second tenet is to defend the resources, which is generally about infrastructure security. In some cases, you have to worry about it less when you go into the cloud per se, because you're not responsible for all the infrastructure, but you do have to understand what infrastructure is in play to feed your risk analysis.

The third tenet, validating compliance, covers the traditional governance, risk, and compliance management aspects. You need to understand what regulations, guidance, and policies you have from external resources, government, and industry, as well as your own internal approaches -- and then be able to prove that you did the right thing.

So this seems to make sense, whether you're talking to a CEO, CIO, or a developer. And it also makes sense, whether you're talking about internal resources or going to the cloud. Does that make sense?

Gardner: Sure, it does. So getting it right means that you have more options in terms of what you can do in IT?

Reed: Absolutely.

Gardner: That seems like a pretty obvious direction to go in. Now, Christian, we talked a little bit about the technology standards and methods for approaching security and data protection, but there is more to the cloud computing environment. What I'm referring to is compliance, regulation, and local laws. It strikes me that there is a gap -- maybe even a chasm -- between where cloud computing allows people to go and where the current laws and regulations are.

Perhaps you could help us better understand this gap and what organizations need to consider when they are thinking about moving data to the cloud vis-a-vis regulation.

A couple of caveats

Verstraete: Yes, it's actually a very good point. If you really look at the vision of the cloud, it's, "Don't care about where the infrastructure is. We'll handle all of that. Just get the things across and we'll take care of everything."

That sounds absolutely wonderful. Unfortunately, there are a couple of caveats, and I'll take a very simple example. When we started looking at the GS1 Product Recall service, we suddenly realized that some countries require information related to food that is produced in that country to remain within the country's boundaries.

That goes against this vision of clouds, in which location becomes irrelevant. There are a lot of examples, particularly around privacy aspects and private information, that makes it difficult to implement that complete vision of dematerialization, if I can put it that way, of the whole power that sits behind the cloud.

Why? Because the EU, for example, has very stringent rules around personal data and only allows countries that have similar rules to host their data. Frankly, there are only a couple of countries in the world, besides the 27 countries of the EU, where that's applicable today.

This means that if I take an example, where I use a global cloud with some data centers in the US and some data centers in Europe, and I want to put some private data in there, I may have some issues. How does that data proliferate across the multiple data centers that the service actually uses? What is the guarantee that all of the data centers that will host and contain my data, its replicas, and its backups are all within the geographical boundaries that are acceptable under European legislation?
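
To make those residency questions concrete, here is a minimal, hypothetical sketch of the kind of policy check an enterprise might run before allowing a dataset to be hosted or replicated in a given region. The region codes, dataset names, and the adequacy list are illustrative assumptions for the example, not legal guidance and not any provider's API.

```python
# Hypothetical policy check: may this dataset be hosted or replicated in a target region?
# Region codes and the "adequacy" list below are illustrative assumptions only.

EU_ADEQUATE_REGIONS = {"eu-west", "eu-central", "ch", "ca"}  # example list, not legal advice

DATASETS = {
    "customer_pii":   {"classification": "personal", "allowed_regions": EU_ADEQUATE_REGIONS},
    "public_catalog": {"classification": "public",   "allowed_regions": None},  # None = anywhere
}

def replication_allowed(dataset_name: str, target_region: str) -> bool:
    """Return True if the dataset may be hosted or replicated in target_region."""
    policy = DATASETS[dataset_name]
    allowed = policy["allowed_regions"]
    return allowed is None or target_region in allowed

if __name__ == "__main__":
    print(replication_allowed("customer_pii", "us-east"))    # False: outside the adequacy list
    print(replication_allowed("public_catalog", "us-east"))  # True: unrestricted data
```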

I'm just taking that as an example, because there is other legislation in the US that is state-based and has the same type of approach and the same type of issues. So, on the one hand, we still have very locally oriented legislation, and on the other, a globally oriented vision for the cloud. In one way, form, or shape we'll have to address the dichotomy between the two for the cloud to really be able to take off from a legal perspective.

Reed: Dana, if I may, the bottom line is that data can be classed as global, whereas legislation is generally local. That's the basis of the problem here. One of the ways in which I would recommend folks consider this -- when you start talking about data loss, data protection, and that sort of stuff -- is to have a data-classification approach that allows you to determine, or at least deploy, certain logic, legal rules, and thinking about how you're going to use the data and in what way.

If you go to the military, the government, public sector, education, and even energy, they all have very structured approaches to the data that they use. That includes understanding how this might be used by third parties and things like that. You also see some recent stuff.

Back in 2008, I think it was, the UK came up with a data handling review, which was in response to public sector data breaches. As a result, they released a security policy framework that contains guidance and policies on security and risk management for the government departments. One of the key things there is how to handle data, where it can go, and how it can be used.

Trying to streamline

What we find is that, despite this conflict, there are a lot of approaches that are being put into play. The goal of anyone going into this space, as well as what we are trying to promote with the CSA, is to try to streamline that stuff and, if possible, influence the right people, so as to avoid creating conflicting approaches and conflicting classification models.

Ultimately, when we get to the end of this, hopefully the CSA or a related body that is either more applicable or willing will create something that will work on a global scale or at least as widely as possible.

Gardner: So, for those companies interested in exploring cloud, it's by no means a cakewalk. They need to do their due diligence in terms of technology and procedures, governance and policies, as well as regulatory compliance and, I suppose you could call it, localization types of issues.

Is there a hierarchy that appears to either of you about where to start in terms of what are the safe types of data, the safer or easier types of applications, that allows you to move toward some of these principles that probably are things you should be doing already, but that allow you to enjoy some of the rewards, while mitigating the risks?

Reed: There are two approaches there. One of the things we didn't say at the outset was there are a number of different versions of cloud. There are private clouds and public clouds. Whether you buy into private cloud as a model, in general, the idea there is you can have more protections around that, more controls, and more understanding of where things are physically.

That's one approach to understanding, or at least achieving, some level of protection around the data. If you control the assets, you're able to control where they're located. If you go into the public cloud, then those data-classification things become important.

If you look at some of the government standards, like classified, restricted, or confidential, once you start to understand how to apply the data models and the classifications, then you can decide where things need to go and what protections need to be in place.
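
As a rough illustration of how such a classification scheme could drive placement decisions, the sketch below maps made-up labels to permitted deployment targets and required protections. The labels and rules are assumptions for the example, not a government or HP standard.

```python
# Illustrative mapping from data classification to where data may go and what
# protections are required. Labels and rules are assumptions for this example.

CLASSIFICATION_POLICY = {
    "public":       {"targets": {"public_cloud", "community_cloud", "private_cloud"},
                     "encryption_required": False},
    "confidential": {"targets": {"community_cloud", "private_cloud"},
                     "encryption_required": True},
    "restricted":   {"targets": {"private_cloud"},
                     "encryption_required": True},
}

def placement_check(classification: str, target: str, encrypted: bool) -> bool:
    """True if data with this classification may be placed on the target environment."""
    policy = CLASSIFICATION_POLICY[classification]
    if target not in policy["targets"]:
        return False
    return encrypted or not policy["encryption_required"]

print(placement_check("confidential", "public_cloud", encrypted=True))     # False
print(placement_check("confidential", "community_cloud", encrypted=True))  # True
```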

Gardner: Is there a progression, a logical progression, that appears to you about how to approach this, given that there are still disparities in the field?

Reed: Sure. You start off with the simplest classification of data. If it's unprotected, if it's publicly available, then you can put it out there with some reasonable confidence that, even if it is compromised, it's not a great issue.

Verstraete: Going to the cloud is actually a very good moment for companies to really sit down and think about what is absolutely critical for the enterprise and what the things are that, if they leak out or become known, aren't too bad. It's not great in any case, but it's not too bad. And that data classification that Archie was just talking about is a very interesting exercise that enterprises should do, if they really want to go to the cloud, and particularly to public clouds.

I've seen too many companies jumping in without that step and being burnt in one way, form, or shape. It's sitting down and thinking that through: "What are my key assets? What are the things that I never want to let go that are absolutely critical? On the other hand, what are the things that I quite frankly don't care too much about?" It's building that understanding that is actually critical.

Gardner: Perhaps there is an instance that will illustrate what we're talking about. I hear an awful lot about platform as a service (PaaS), which is loosely defined as doing application development activities in a cloud environment. I talk to developers who are delighted to use cloud-based resources for things like testing and to explore and share builds and requirements in the early stages.

At the same time, they're very reluctant to put source code in someone else's cloud. Source code strikes me as just a form of data. Where is the line between safe good cloud practices and application development, and when would it become appropriate to start putting source code in there as well?

Combination of elements

Verstraete: There are a number of answers to your question, and they're related to a combination of elements. The first thing is gaining as much of an understanding as you can, which is not easy, of what the protection mechanisms are within the cloud service.

Today, because of the term "cloud," most of the cloud providers are getting away with providing very little information, setting up SLAs that frankly don't mean a lot. It's quite interesting to read a number of the SLAs from the major either infrastructure-as-a-service (IaaS) or PaaS providers.

Fundamentally, they take no responsibility, or very little responsibility, and they don't tell you what they do to secure the environment in which they ask you to operate. The reason they give is, "Well, if I tell you, hackers can know, and that's going to make it easier for them to hack the environment and to limit our security."

There is a point there, but that makes it difficult for people who really care about source code, as in your example. That's relevant and important for them, because you have source code that’s not too sensitive and source code that's very critical. To put that source code in the cloud, when you don't know what's actually being done with it, is probably worse than being able to make a very clear risk assessment. Then, you know what level of risk you're taking. Today, you don't know in many situations.

Gardner: Alright, Archie.

Reed: There are a couple of things or points that need to be made. First off, when we think about things like source code or data like that, there is this point where data is stored and it sits at rest. Until you start to use it, it has no impact, if it's encrypted, for example.

So, if you're storing source code up there, it's encrypted, and you hold the keys -- which is one of the key tenets that we would advocate for anyone thinking about encrypting stuff in the cloud -- then maybe there is a level of satisfaction and of meeting compliance that you have with that type of model.
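
Here is a minimal sketch of that "encrypt it yourself and hold the keys" idea, using the Fernet symmetric recipe from the widely used Python cryptography package. The data and workflow are placeholders, and this illustrates the principle rather than a recommended key-management design; real deployments would add key storage, rotation, and access controls.

```python
# Sketch: encrypt data locally and keep the key yourself before handing the
# ciphertext to any cloud storage service. Requires the "cryptography" package.
from cryptography.fernet import Fernet

# The key stays under your control (for example, in your own key store or HSM),
# never with the cloud provider.
key = Fernet.generate_key()
cipher = Fernet(key)

source_code = b"print('proprietary build logic')"  # stand-in for sensitive data
ciphertext = cipher.encrypt(source_code)            # this is what goes to the cloud

# Later, after retrieving the ciphertext, only the key holder can recover the data.
assert cipher.decrypt(ciphertext) == source_code
```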

Putting the source code into the cloud, wherever that happens to be, may or may not actually be such a risk as you're alluding to, if you have the right controls around it.

The second thing is that we're also seeing a very nascent set of controls and guarantees and SLAs and those sorts of things. This is very early on, in my opinion and in a lot of people's opinion, in the development of this cloud type environment, looking at all these attributes that are given to cloud, the unlimited expansion, the elasticity, and rapid provisioning. Certainly, we can get wrapped around the axle about what is really required in cloud, but it all ultimately comes down to that risk analysis.

If you have the right security in the system, if you have the right capabilities and guarantees, then you have a much higher level of confidence about putting data, such as source code or some sets of data like that, into the cloud.

Gardner: To Christian’s point that the publicly available cloud providers are basically saying buyer beware -- or in this case, cloud practitioner beware -- the onus to do good privacy, security, compliance, and best practices falls back on the consumer, rather than the provider.

Community clouds

Reed: That's often the case. But, also consider that there are things like community clouds out there. I'll give the example of the US Department of Defense back in 2008. HP worked with the Defense Information Systems Agency (DISA) to deploy cloud computing infrastructure. And, we created RACE, which is the Rapid Access Computing Environment, to set things up really quickly.

Within that, they share those resources to a community of users in a secure manner and they store all sorts of things in that. And, not to point fingers or anything, but the comment is, "Our cloud is better than Google's."

So, there are secure clouds out there. It's just that when we think about things like the visceral reaction that the cloud is insecure, it's not necessarily correct. It's insecure for certain instances, and we've got to be specific about those instances.

In the case of DISA, they have a highly secured cloud, and that's where we expect things to go and evolve -- into a set of cloud offerings that are stratified by the level of security they provide and the level of cost, right down to SLAs and guarantees. And we’re already seeing that in these examples.

Gardner: So, for that cloud practitioner, as an organization, if they take those steps towards good cloud computing practices and technologies, it’s probably going to benefit them across the board in their IT infrastructure, applications, and data activities. But does it put them at a competitive advantage?

If you do this right, if you take the responsibility yourself to figure out the risks and rewards and implement the right approach, what does that get for you? Christian, what’s your response to that?

Verstraete: It gives you the capability to use the elements that the cloud really brings with it, which means to have an environment in which you can execute a number of tasks in a pay-per-use type environment.

But, to come back to the point that Archie was making, one of the things that we often have a tendency to forget -- and I'm as guilty as anybody else in that space -- is that cloud means a tremendous amount of different things. What's important for customers who want to move and want to put data in the cloud is to identify what all of those different types of clouds provide as security and protection capabilities.

The more you move away from the traditional public cloud -- and when I say the traditional public cloud, I’m thinking about Amazon, Google, Microsoft, that type of thing -- toward community clouds and private clouds, the more you have it under your own control to ensure that you have the appropriate security layers, security levels, and compliance levels that you feel you need for the information you’re going to use, store, and share in those different environments.

Gardner: Okay, Archie, we’re about out of time, so the last question is to you, and it’s going to be the same question. If you do this well, if you do it right, if you take the responsibility, perhaps partner with others in a community cloud, what do you get? What’s the payoff? Why would that be a competitive advantage, a cost advantage, or an energy advantage?

Beating the competition

Reed: We’ve been through a lot of those advantages. I’ve mentioned several times the elasticity, the speed of provisioning, the capacity. While we’ve alluded to, and actually discussed, specific examples of security concerns and data issues, the fact is, if you get this right, you have the opportunity to accelerate your business, because you can basically break ahead of the competition.

Now, if you’re in a community cloud, standards may help you, or approaches that everyone agrees on may help the overall industry. But, you also get faster access to all that stuff. You also get capacity that you can share with the rest of the community. If you're thinking about cloud in general, in isolation -- and by that I mean that you, as an individual organization, are going out and looking for those cloud resources -- then you’re going to get that ability to expand well beyond what your internal IT department could provide on its own.

There are lots of things we could close on, of course, but I think that the IT department of today, as far as cloud goes, has the opportunity not only to deliver and better manage what it’s doing in terms of providing services for the organization, but also has a responsibility to do this right, understand the security implications, and represent those appropriately to the company, such that it can deliver that accelerated capability.

Gardner: Very good. We’ve been discussing how to manage risks and rewards and proper placement of enterprise data in cloud-computing environments. I want to thank our two panelists today, Christian Verstraete, Chief Technology Officer for Manufacturing and Distribution Industries Worldwide at HP. Thank you, Christian.

Verstraete: You’re welcome.

Gardner: And also, Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications, including The Definitive Guide to Identity Management, and he's working on a new book, The Concise Guide to Cloud Computing. Thank you, Archie.

Reed: Hey, Dana. Thanks for taking the time to talk to us today.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for joining us, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast on how enterprises should approach and guard against data loss when placing sensitive data in cloud computing environments. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Wednesday, April 07, 2010

Well-Planned Data Center Transformation Effort Delivers IT Efficiency Paybacks, Green IT Boost for Valero Energy

Transcript of a BriefingsDirect podcast on how upgrading or building new data centers can address critical efficiency, capacity, power, and cooling requirements and concerns.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on the huge drive for improvement around enterprise data centers. Many enterprises, if not nearly all, are involved nowadays with some level of data-center transformation either in the planning stages or in outright execution. The heightened activity runs the gamut from retrofitting and designing new data centers to then building and occupying them.

We're seeing many instances where numerous data centers are being consolidated into a powerful core few, as well as completely green-field data centers -- with modern design and facilities -- coming online.

These are, by no means, trivial projects. They often involve a tremendous amount of planning and affect IT, facilities, and energy planners, as well as the business leadership and line of business managers. The payoffs are potentially huge, as we'll see, from doing data center design properly, but the risks are also quite high, if things don't come out as planned.

The latest definition of the data center is focused on being what's called fit-for-purpose: using best practices and assessments of existing assets, and correctly projecting future requirements, to get that data center just right -- productive, flexible, efficient, and well-understood and managed.

Today, we're going to examine the lifecycle of data-center design and fulfillment through migration and learn about some of the payoffs when this goes as planned. We're going to learn more about a successful project at Valero Energy Corp. The goal through these complex undertakings at these data centers is to radically improve how IT can deliver its services and be modern, efficient, and flexible.

We're here with two executives from Hewlett-Packard to look at proper planning and data center design, as well as build and migration. And we'll learn from an IT leader at Valero how they managed their project.

Please join me in welcoming our guests today. We're here with Cliff Moore, America’s PMO Lead for Critical Facilities Consulting at HP. Welcome to the show, Cliff.

Cliff Moore: Thanks, Dana.

Gardner: We're also here with John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. Hello, John.

John Bennett: Hi, Dana.

Gardner: We're also here with John Vann, Vice President of Technical Infrastructure and Operations at Valero Energy Corp. Welcome to the show, John.

John Vann: Hello, Dana. Thanks a lot.

Gardner: Let's go to you, John Bennett. Tell us why data center transformation is at an inflection point, where data centers are in terms of their history, and what the new requirements are. It seems to be somewhat of a perfect storm, in that there's a need to move, and yet the status quo really isn't acceptable?

Modern and efficient

Bennett: You're right on that front. I find it just fascinating that if you had spoken four years ago and dared to suggest that energy, power, cooling, facilities, and buildings were going to be a dominant topic with CIOs, you would have been laughed at. Yet, that's definitely the case today, and it goes back to the point you made about IT being modern and efficient.

Data-center transformation, as we've spoken about before, really is about not only significantly reducing cost to an organization, not only helping them shift their spending away from management and maintenance and into business projects and priorities, but also helping them address the rising cost of energy, the rising consumption of energy and the mandate to be green or sustainable.

The issue that organizations have in trying to address those mandates, of course, is that the legacy infrastructure and environments they have, the applications portfolio, the facilities, and so on all hinder their ability to execute on the things they would like to do.

Data-center transformation tries to take a step back, assess the data center strategy and the infrastructure strategy that's appropriate for a business, and then figure how to get from here to there. How do you go from where you are today to where you need to be?

It turns out that one of the things that gets in the way, both from a cost perspective and from a supporting the business perspective is the data centers themselves. Customers can find themselves, as HP did, having a very large number of data centers. We had 85 around the world, because we grew through acquisition, we grew organically, and we had data centers for individual lines of business.

We had data centers for individual countries and regions. When you added it up, we had 85 facilities and innumerable server rooms, all of them requiring administrative staff, data center managers, and a lot of overhead. As part of our own IT transformation effort, we've brought that down to six.

You have organizations that discover that the data centers they have aren't capable of meeting their future needs. One wag has characterized this as the "$15 million server," where you keep needing to grow and support the business. All of a sudden, you discover that you're bursting at the seams.

Or, you can be in California or the U.K. The energy supply they have today is all they’ll ever have in their data center. If they have to support business growth, they're going to have to deal with it by addressing their infrastructure strategies, but probably also by addressing their facilities. That's where facilities really come into the equation and have become a top-of-mind issue for CIOs and IT executives around the world.

Gardner: John, it also strikes me that the timing is good, given the economic cycle. The commercial market for land and facilities is a buyer's market, and that doesn’t always happen, especially if you have capacity issues. You don’t always get a chance to pick when you need to rebuild and then, of course, money is cheap nowadays too.

Bennett: If you can get to it.

Gardner: The capital markets are open for short intervals.

Signs of recovery

Bennett: We certainly see, and hope to see, signs of recovery here. Data center location is an interesting conversation, because of some of the factors you named. One of the things that is different today than even just 10 years ago is that the power and networking infrastructure available around the world is so phenomenal, there is no need to locate data centers close to corporate headquarters.

You may choose to do it, but you now have the option to locate data centers in places like Iceland, because you might be attracted to the natural heating of their environment. Of course, you might have volcano risk.

You have people who are attracted to very boring places, like the center of the United States, which don't have earthquakes, hurricanes, wildfires and things that might affect facilities themselves. Or, as I think you'll discover with John at Valero, you can choose to build the data center right near corporate headquarters, but you have a lot of flexibility in it.

The issue is not so much access to capital markets as it is that any facility’s project is going to have to go through not just the senior executives of the company, but probably the board of directors. You'll need a strong business case, because you're going to have to justify it financially. You're going to have to justify it as an opportunity cost. You're going to have to justify in terms of the returns on investment (ROIs) expected in the business, if they make choices about how to manage and source funds as well.

So, it's a good time from the viewpoint of land being cheap, and it might be a good time in terms of business capital being available. But it might not be a good time in terms of investment funds being available, as many banks appear to remain reluctant to lend.

Gardner: The variables now for how you would consider, plan, and evaluate are quite different than even just a few years ago.

Bennett: It's certainly true, and I probably would look to Cliff to say more about that.

Gardner: Cliff Moore, what's this notion of fit-for-purpose, and why do you think the variables for deciding to move forward with data center transformation or redesign activities are different nowadays? Why are we in a different field, in terms of decisions around these issues?

Moore: Obviously, there's no such thing as a one-size-fits-all data center. It's just not that way. Every data center is different. The majority of the existing data centers out there today were built 10 to 15 years ago, when power requirements and densities were a lot lower.

No growth modeling

It's also estimated that, at today's energy cost, the cost of running a server from an energy perspective is going to exceed the cost of actually buying the server. So that's a major consideration. We're also finding that many customers have done no growth modeling whatsoever regarding their space, power, and cooling requirements for the next 5, 10, or 15 years -- and that's critical as well.
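
As a back-of-the-envelope illustration of that kind of growth and energy modeling, the sketch below projects facility power draw over a planning horizon and compares a server's lifetime energy cost with its purchase price. Every input figure is an assumption chosen for the example, not data from the discussion.

```python
# Back-of-the-envelope growth and energy-cost model. All inputs are illustrative
# assumptions, not figures from the podcast.

SERVERS_TODAY = 400
ANNUAL_GROWTH = 0.15          # 15% more servers per year (assumption)
IT_WATTS_PER_SERVER = 450     # average server draw in watts (assumption)
PUE = 1.9                     # facility overhead for power and cooling (assumption)
PRICE_PER_KWH = 0.11          # USD per kWh (assumption)
SERVER_PURCHASE_PRICE = 3000  # USD (assumption)
YEARS_IN_SERVICE = 4

def projected_facility_kw(years_out: int) -> float:
    """Projected total facility power draw (IT load plus cooling overhead) in kW."""
    servers = SERVERS_TODAY * (1 + ANNUAL_GROWTH) ** years_out
    return servers * IT_WATTS_PER_SERVER * PUE / 1000.0

def lifetime_energy_cost_per_server() -> float:
    """Energy cost of powering and cooling one server over its service life."""
    kwh = IT_WATTS_PER_SERVER * PUE / 1000.0 * 24 * 365 * YEARS_IN_SERVICE
    return kwh * PRICE_PER_KWH

for horizon in (0, 5, 10, 15):
    print(f"Year {horizon:2d}: ~{projected_facility_kw(horizon):,.0f} kW facility load")

print(f"Energy over {YEARS_IN_SERVICE} years: ${lifetime_energy_cost_per_server():,.0f} "
      f"per server vs ${SERVER_PURCHASE_PRICE:,} purchase price")
```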

Gardner: We should explain the notion of fit for purpose upfront for those folks who might not be familiar with it.

Bennett: With fit for purpose, the question in mind is the strategic one of what the data center strategy is for a particular organization. If you think about the business services that are being provided by IT, it's not only what those business services are, but how they should be sourced. If they’re being provided out of entity-owned data centers, how many and where? What's the business continuity strategy for those?

It needs to take into account, as Cliff has highlighted, not only what I need today, but that buildings typically have an economic life of 15 to 25 years. Technology life cycles for particular devices are two or three years, and we have ongoing significant revolutions in technology itself, for example, as we moved from traditional IT devices to fabric infrastructures like converged infrastructure.

You have these cycles upon cycles of change taking place. The business forecasts drive the strategy and part of that forecasting will be sizing and fit for purpose. Very simply, are the assets I have today capable of meeting my needs today, and in my planning horizon? If they are, they’re fit for purpose. If they’re not, they’re unfit for purpose, and I'd better do something about it.

Gardner: We're in a bit of a time warp, Cliff. It seems that, if many data centers were built 15 years ago and we still don't have a sense of where we'll be in 5 or 10 years, we're caught between not fitting the past and not quite fitting, or knowing, the future. How do you help people smooth that out?

Moore: Obviously, we’ve got to find out first off what they need -- what their space, power, and cooling requirements are. Then, based on the criticality of their systems and applications, we quickly determine what level of availability is required as well. This determines the Uptime Institute Tier Level for the facility. Then, we go about helping the client strategize on exactly what kinds of facilities will meet those needs, while also meeting the needs of the business that come down from the board.
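
For readers unfamiliar with the tier levels Moore mentions, a simple way to reason about them is to pick the lowest tier whose commonly quoted availability meets the stated requirement. The figures below are approximations often cited for the Uptime Institute tiers and are used purely for illustration, not as official guidance.

```python
# Rough helper: pick the lowest data center tier whose commonly cited
# availability meets a stated requirement. Figures are approximations often
# quoted for the Uptime Institute tiers, used here purely for illustration.

TIER_AVAILABILITY = [
    ("Tier I",   0.99671),
    ("Tier II",  0.99741),
    ("Tier III", 0.99982),
    ("Tier IV",  0.99995),
]

def required_tier(required_availability: float) -> str:
    """Return the lowest tier meeting the requirement, or a note if none does."""
    for tier, availability in TIER_AVAILABILITY:
        if availability >= required_availability:
            return tier
    return "Requirement exceeds Tier IV; consider redundant sites"

print(required_tier(0.999))   # Tier III
print(required_tier(0.9999))  # Tier IV
```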

When a customer is looking to spend $20 million, $50 million, or sometimes well over a $100 million, on a new facility, you’ve got to make sure that it fits within the strategic plan for the business. That's exactly what boards of directors are looking for, before they will commit to spending that kind of money.

Gardner: What does HP bring to the table? How do you start a process like this and make it a lifecycle, where that end goal and the reduce risk play out to get the big payoffs that those boards of directors are interested in?

Moore: Well, our group within Critical Facilities Services actually comes to the table with the company's executives and looks not only at their space, power, and cooling requirements, but at the strategies of the business. What are the criticality levels of the various mission-critical applications that they run? What are their plans for the future? What are their merger and acquisition plans, and so on? We help them collaboratively develop that strategy for the next 10 to 15 years of their data center future.

Gardner: It was pointed out earlier that one size doesn't fit all. From your experience, Cliff, what are the number one or two reasons that you’re seeing customers go after a new design for the data center, and spend that large sum of money?

Power and cooling

Moore: Probably the biggest reason we're seeing today is power and cooling. Of course, cooling goes along with power. We see more of that than anything else. People are simply running out of power in their data centers. The facilities today that were built 5, 10, or 15 years ago, just do not support the levels of density in power and cooling that clients are asking for going to the future, specifically for blades and higher levels of virtualization.

Gardner: So higher density requires more energy to run the servers and more energy to cool them, but you get higher efficiency, utilization, and productivity as the end result, in terms of delivering on the requirements. Is there a way of designing the data center that allows you to cut cost and increase capacity, or is that asking too much of this process?

Moore: There certainly are ways to do that. We look at all of those different ways with the client. One of the things we do, as part of the strategic plan, is help the client determine the best locations for their data centers based on the efficiency in gathering free cooling, for instance, from the environment. It was mentioned that Iceland might be a good location. You'd get a lot of free cooling there.

Gardner: What are some of the design factors? What are the leading factors that people need to look at? Perhaps, you could start to get us more familiar with Valero and what went on with them in the project that they completed not too long ago.

Moore: I'll defer to John for some of that, but the leading factors we're seeing, again, are space, power, and cooling, coupled with the tier-level requirement. What is the availability requirement for the facility itself? Those are the biggest factors we're seeing.

Marching right behind that is energy efficiency. As I mentioned before, the cost of energy is exorbitant, when it comes to running a data center. Some data centers we see out there use the equivalent of half of a nuclear power plant to run. It's very expensive, as I'm sure John would tell you. One of the things that Valero is accomplishing is lower energy costs, as a result of building its own facility.

Gardner: Before we go to Valero, I have one last question on the market and some of the drivers. What about globalization? Are we seeing emerging markets, where there is going to be many more people online and more IT requirements? Is that a factor as well?

Moore: Absolutely.

Bennett: There are a number of factors. First of all, you have increasing access to the Internet and the increasing generation of complex information types. People aren't just posting text anymore, but pictures and videos. And, they’re storing those things, which is feeding what we characterize as an information explosion. The forecast for storage consumption over the next 5 to 10 years is just phenomenal.

Perfect storm

On top of that, you have more and more organizations and businesses providing more of their business services through IT-based solutions. You talked about a perfect storm earlier with regard to the timing for data centers. Most organizations are in a perfect storm today of factors driving the need for ongoing investments and growth out of IT. The facilities have got to help them grow, not limit their growth.

Gardner: John Vann, you’re up. I'm sorry to have left you off on the sidelines there for so long. Tell us about Valero Energy Corp., and what it is that drove you to bite off this big project of data-center transformation and redesign?

Vann: Thanks a lot, Dana. Just a little bit about Valero. Valero is a Fortune 500 company in San Antonio, Texas, and we're the largest independent refiner in North America. We produce fuel and other products from 15 refineries, and we have 10 ethanol plants.

We market products in 44 states with a large distribution network. We're also into alternative fuels with renewables and are one of the largest ethanol producers. We have a wind farm up in northern Texas, around Amarillo, that generates enough power to fuel our McKee refinery.

So what drove us to build? We started looking at building in 2005. Valero grew through acquisitions. Our data center, as Cliff and John have mentioned, was no different than others. We began to run into power, space, and cooling issues.

Even though we were doing a lot of virtualization, we still couldn't keep up with the growth. We looked at remodeling and also expanding, but the disruption and risk to the business was just too great. So, we decided it was best to begin to look for another location.

Our existing data center is on the headquarters campus, which is not the best place for a data center, because it's inside one of our office complexes. Therefore, we have water and other potentially disruptive issues close to the data center -- and that was just concerning, considering where the data center is located.

We began to look for alternative places. We also were really fortunate in the timing of our data center review. HP was just beginning its build of the six big facilities that it ended up building or remodeling, so we were able to get good HP internal expertise to help us as we began our decisions on designing and building our data center.

So, we really were fortunate to have experts give us some advice and counsel. We did look at collocation. We also looked at other buildings, and we even looked at building another data center on our campus.

The problem with collocation back in those days of 2006, 2007, and 2008, was that there was a premium for space. As we did our economics, it was just better for us to be able to build our own facility. We were able to find land northwest of San Antonio, where several data centers have been built. We began our own process of design and build for 20,000 square feet of raised floor and began our consolidation process.

Gardner: What, in your opinion, was more impactful -- the planning, the execution, the migration? I guess the question should be, what ended up being more challenging than you expected initially? Where do you think, in hindsight, you’d put more energy and more planning, if you had to do it all again?

Solid approach

Vann: I think our approach was solid. We had a joint team of HP and the Valero Program Management Office. It went really well the way that was managed. We had design teams. We had people from networking architecture, networking strategy and server and storage, from both HP and Valero, and that went really well. Our construction went well. Fortunately, we didn’t have any bad weather or anything to slow us down; we were right on time and on budget.

Probably the most complex was the migration, and we had special migration plans. We got help from the migration team at HP. That was successful, but it took a lot of extra work.

If we had one thing to do over again, we would probably change the way we did our IP renumbering. That was a very complex exercise, and we didn’t start that soon enough. That was very difficult.

Probably we'd put more project managers on managing the project, rather than using technical people to manage the project. Technical folks are really good at putting the technology in place, but they really struggle at putting good solid plans in place. But overall, I'd just say that migration is probably the most complex.

Gardner: Thank you for sharing that. How old was the data center that you wanted to replace?

Vann: It's about seven years old and had been remodeled once. You have to realize Valero was in a growth mode and acquiring refineries. We now have 15 refineries. We were consolidating quite a bit of equipment and applications back into San Antonio, and we just outgrew it.

We were having a hard time keeping it redundant and keeping it cool. It was built with one foot of raised floor and, with all the mechanical equipment inside the data center, we lost square footage.

Gardner: Do you agree, John, that some of the variables or factors that we discussed earlier in the podcast have changed, say, from just as few as six or seven years ago?

Vann: Absolutely. Power and cooling are just becoming an enormous problem, and most of this is because virtualization, blades, and other technologies that you put in a data center just run a little hotter and take up extra power. It's pretty complex to be able to balance your data center with cooling and power, plus UPS, generators, and things like that. It just becomes really complex. So, building a new one really put us in the forefront.

Gardner: Can you give us some sense of the metrics now that this has gone through and been completed? Are there some numbers that you can apply to this in terms of the payback and/or the efficiency and productivity?

Potential problems

Vann: Not yet. We've seen some recent things that have happened here on campus to our old data center, because of weather and just some failures within the building. We’ve had some water leaks that have actually run into the data center floor. So that's a huge problem that would have flooded our production data center.

You can see the age of the data center beginning to have failures. We've had some air-conditioner failures, some coolant leaking. I think our timing was just right. Even though we have been maintaining the old data center, things were just beginning to fail.

Gardner: So, certainly, there are some initial business continuity benefits there.

Vann: Exactly.

Gardner: Going back to Cliff Moore. Does anything you hear from John Vann light any light bulbs about what other people should be considering as a step up to the plate on these data center issues?

Moore: They certainly should consult John's crystal ball regarding the issues he's had in his old data center, and move quickly. Don’t put it off. I tell people that these things do happen, and they can be extremely costly when you look at the cost of downtime to the business.

Gardner: Getting started, we talked about migration. It turns out that we did another podcast that focused specifically on data-center migration, and we can refer folks to that easily. What is it about planning and getting started, as you say, when people recognize that time might not be on their side? What are some of the initial steps, and how might they look to HP for some guidance?

Moore: We focus entirely on discovery early on. You’ve got to know precisely what you are going to move, exactly what it's going to look like half a year or a year from now when you actually move it, and focus very heavily on the dependencies between all of the applications, especially the mission-critical applications.

Typically, a move like John’s requires multiple, what we call, move groups. John’s company had five or six, I believe. You simply cannot divide your servers up into these move groups, without knowing what you might break by dividing them up. Those dependencies are critical, and that's probably the failing point.

Vann: We had five move groups. Knowing what applications go with what is a real chore in making sure that you have the right set of servers you can move on a particular weekend. We also balanced it with downtime from the end customers, so we’re going to make sure that we were not in the middle of a refinery turnaround or a major closing. Being able to balance those weekends, so we had enough time to be able to make the migration work was quite a challenge.
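
One simple way to reason about the move groups both speakers describe is to treat application dependencies as edges in a graph and keep each connected component together in the same move group, then schedule those groups into available migration weekends. The applications and dependencies in the sketch below are made up purely for illustration.

```python
# Sketch: derive candidate move groups from application dependencies by keeping
# each connected component of the dependency graph together. Data is made up.
from collections import defaultdict

DEPENDENCIES = [                       # (application, depends_on)
    ("refinery_sched", "erp"),
    ("erp", "oracle_db"),
    ("crm", "mail_gateway"),
    ("reporting", "oracle_db"),
    ("intranet", "file_share"),
]

def move_groups(edges):
    """Group applications so that dependent apps land in the same move group."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, groups = set(), []
    for app in graph:
        if app in seen:
            continue
        stack, component = [app], set()
        while stack:                   # depth-first walk of one component
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node] - component)
        seen |= component
        groups.append(sorted(component))
    return groups

for i, group in enumerate(move_groups(DEPENDENCIES), start=1):
    print(f"Move group {i}: {group}")
```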

Gardner: John Vann, did you take the opportunity to not only redesign and upgrade your data center facilities, but at the same time, did you modernize your infrastructure or your architecture? You said you did quite a bit with virtualization already, was this a double whammy in terms of the facilities as well as the architecture?

Using opportunities

Vann: Yes. We took the opportunity to upgrade the network architecture. We also took the opportunity to go further with our consolidation. We recently finished moving servers from refineries into San Antonio. We took the opportunity to do more consolidation and more virtualization, upgrade our blade farm, and just do a lot more work around improving the overall infrastructure for applications.

Gardner: I'd like to take that back to John Bennett. I imagine you're seeing that one of the ways you can rationalize the cost is that you're not just repaving a cow path, as it were. You're actually re-architecting and therefore getting a lot greater efficiency, not only from the new facility, but from the actual reconstruction of your architecture, or the modernization and transformation of your architecture.

Bennett: There are several parts to that, and getting your hands around it can really extend the benefits you get from these kinds of projects, especially if you are making the kind of investment we are talking about in new data center facilities. Modernizing your infrastructure brings energy benefits in its own right, and it enhances the benefits of your virtualization and consolidation activities.

It can be a big step forward in terms of standardizing your IT environment, which many industry analysts now recommend in preparation for automation and to reduce management and maintenance costs. You can go further and bring in application modernization and rationalization to take a hard look at your apps portfolio. So, you can really get these combined benefits and advantages that come from doing this.

We certainly recommend that people take a look at doing these things. If you do some of these things, while you're doing the data center design and build, it can actually make your migration experience easier. You can host your new systems in the new data center and be moving software and processes, as opposed to having to stage and move servers and storage. It's a great opportunity.

John talked about dealing with the IP addresses, but the physical networking infrastructure in a lot of old data centers is a real hodgepodge that's grown organically over years. I guess you can blame some of our companies for having invented Ethernet a long time ago. But, it's a great chance to start off with a clean networking architecture, which also helps both with continuity and availability of services, as well as cost. They all come in there.

I actually have a question for John Vann as well. Because they had a pretty strong focus on governance, especially in handling change requests, I'm hoping he might talk a little bit about that process during the design and build project.

Vann: Our goal was to hold scope creep to a minimum. We had an approval process, where there had to be a pretty good reason for a change or for a server not to move. We fundamentally used the word "no" as much as we could to make sure we got the right applications in the right place. Any kind of approval had to go through me. If I disagreed, and they still wanted to escalate it, we went to my boss. Escalation was rarely used. We had a pretty strong change management process.

Gardner: I can see where that would be important right along the way, not something you want to think about later or adding onto the process, but something to set up right from the beginning.

We've had a very interesting discussion about the movement in enterprise data centers, where folks are doing a lot more transformation, moving and relocating their data centers, modernizing them, and finding ways to eke out efficiencies, but also trying to reduce the risk of moving in the future and looking at those all-important power and energy consumption issues as well.

I want to thank our guests. We've been joined today by Cliff Moore, Americas PMO Lead for Critical Facilities Consulting at HP. Thank you, Cliff.

Moore: Thanks, Dana. Thanks, everybody.

Gardner: John Bennett, Worldwide Director, Data Center Transformation Solutions at HP. Thank you, John.

Bennett: Thank you, Dana.

Gardner: And lastly, John Vann, Vice President, Technical Infrastructure and Operations at Valero Energy. John, I really appreciate your frankness in sharing your experience, and I certainly wish you well with it.

Vann: Thank you very much, Dana. I appreciate it.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: Hewlett-Packard.

Transcript of a BriefingsDirect podcast on how upgrading or building new data centers can address critical efficiency, capacity, power requirement and cooling concerns. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Monday, April 05, 2010

Case Study Shows How HP Data Protector Notebook Extension Provides Constant Backup for Expanding Mobile Workforce

Transcript of a sponsored BriefingsDirect podcast on how data protection products and services can protect against costly data loss with less hassle for users.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Gain more information on HP Data Protection Notebook Extension. Follow on Twitter.
Access a Webcast with IDC's Laura DuBois on Avoiding Risk and Improving Productivity on PCs and Laptops.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today we present a sponsored podcast discussion on protecting PC-based data in an increasingly mobile world. We'll look at a use case -- at Roswell Park Cancer Institute in Buffalo, NY -- for HP Data Protector Notebook Extension (DPNE) software and examine how backup and recovery software has evolved to become more transparent, reliable, and fundamentally user-driven.

Using that continuous backup principle, the latest notebook and PC backup software captures every saved version of a file, efficiently transfers it all in batches to a central storage location, and then makes it easily and safely accessible for recovery by the user from anywhere, inside or outside of the corporate firewall.

We'll look at how DPNE slashes IT recovery chores, allows for managed policies and governance to reduce data risks systemically, while also downsizing backups, the use of bandwidth, and storage.

The economics are compelling. The cost of data loss can be more than $400,000 annually for an average-sized business with 5,000 users. Getting a handle on recovery costs, therefore, helps reduce the total cost of operating and supporting mobile PCs, both in terms of operations and in the cost of lost or poorly recovered assets.

To help us better understand the state of the art in remote and mobile PC data protection, we're joined by an HP executive and a user of HP DPNE software. Please join me in welcoming Shari Cravens, Product Marketing Manager for HP Data Protection. Welcome to the show, Shari.

Shari Cravens: Hi, Dana. Thanks for having me.

Gardner: We're also here with John Ferguson, Network Systems Specialist at Roswell Park Cancer Institute in Buffalo, NY. Welcome to the show, John.

John Ferguson: Hi, Dana. Thank you.

Gardner: Let's start with you, Shari. Tell me about the general state of the mobile workforce. Are we getting to the point where we're almost more mobile than stationary these days?

Backup increasingly important

Cravens: It's true. We started hearing from our customers a couple of years ago that PC backup was becoming increasingly important in their lives. Part of that's because the workforce is increasingly mobile and flexibility for the workforce is at an all-time high. In fact, we found that 25 percent of staff in some industries operate remotely, and that number is growing pretty rapidly.

In fact, in 2008, shipments of laptops overtook desktops for the very first time. What that really means for the end user or for IT staff is that vast amounts of data now live outside the corporate network. We found that the average PC holds about 55,000 files. Of those 55,000, about 4,000 are unique to that user on that PC. And, those files are largely unprotected.

Gardner: Of course, we're also in a tough economic climate, and productivity is paramount. We've got more people doing more work across different locations. What is the impetus for IT to be doing this? Is there a real economic challenge here?

Cravens: The economics of PC backup are really changing. We're finding that the average data loss incident costs about $2,900, and that's for both IT staff time and lost end user productivity. Take that $2,900 figure and extrapolate that for an average company of about 5,000 PCs. Then, look at hard drive failures alone. There will be about 150 incidents of hard drive failure for that company every year.

If you look at the cost to IT staff to recover that data and the loss in employee productivity, the annual cost to that organization will be over $440,000 a year. If that data can't be recovered, then the user has to reconstruct it, and that means additional productivity loss for that employee. We also have legal compliance issues to consider now. So if that data is lost, that's an increased risk to the organization.
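
As a rough check on those numbers, here is the arithmetic Shari walks through, sketched in Python; the inputs are the figures quoted in this discussion, not independently published constants.

```python
# Back-of-the-envelope model of annual PC data-loss cost, using the figures quoted above.
cost_per_incident = 2900       # IT staff time plus lost end-user productivity, per incident
fleet_size = 5000              # PCs in an average-sized company
drive_failures_per_year = 150  # expected hard drive failures per year for a fleet this size

annual_cost = cost_per_incident * drive_failures_per_year
print(f"Hard drive failures alone: ${annual_cost:,} per year for {fleet_size:,} PCs")
# -> $435,000 per year, before counting theft, loss, and accidental deletion,
#    which pushes the total past the $440,000 figure cited.
```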

Gardner: I suppose security also plays a role here. We want to make sure that when we do back up, it's encrypted, it's compressed, and we're not wasting valuable assets like bandwidth. Are there economic issues around that as well?

Cravens: Sure. We all have very sensitive files on our laptops, whether it's competitive information or your personal annual review. One suggestion in the past was, "Well, we'll just save it to the corporate network." The challenge is that people are really concerned about saving these very sensitive files to the corporate network.

What we really need is a solution that's going to encrypt those files, both in transit and at rest, so that people can feel secure that their data is protected.

Gardner: Encryption doesn't necessarily mean big, resource-hogging files. You can do it with efficiency as well.

Cravens: Absolutely, with changed blocks only, which is what DPNE does.
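
To illustrate what transferring "changed blocks only" means, here is a minimal sketch of the general technique: split a file into fixed-size blocks, hash each block, and send only the blocks whose hashes differ from the last backup. This is a generic illustration of block-level incremental backup, not DPNE's actual implementation or on-disk format.

```python
import hashlib
from pathlib import Path

BLOCK_SIZE = 64 * 1024  # 64 KB blocks; real products tune this

def block_hashes(path: Path) -> list:
    """Return a hash for each fixed-size block of the file."""
    hashes = []
    with path.open("rb") as f:
        while chunk := f.read(BLOCK_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def changed_blocks(path: Path, previous_hashes: list) -> list:
    """Return indices of blocks that differ from the previous backup."""
    current = block_hashes(path)
    return [i for i, digest in enumerate(current)
            if i >= len(previous_hashes) or digest != previous_hashes[i]]

# Usage sketch: hash at backup time, then after edits send only the changed blocks.
# baseline = block_hashes(Path("mailbox.pst"))
# ...user keeps working...
# to_transfer = changed_blocks(Path("mailbox.pst"), baseline)
```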

Gardner: I think we understand the problem in terms of the economics, requirements for the data, sensitivity, and the mobility factors, but what about the problem from a technical perspective? What does it take in order to do something that’s simple and straightforward for the end user?

Historical evolution

Cravens: Let me back up a little bit and talk about how we got here. Historically, PC backup solutions have evolved from more traditional backup and recovery solutions, and there are a couple of assumptions there.

One, they employ things like regularly scheduled backups that happen every 24 hours, and sometimes once a week. They assume that bandwidth concerns aren't necessarily much of an issue. This creates some problems in an increasingly mobile workforce. People are generally not regularly connected to the network. They are at coffee shops, at home, or in airports. They're often anywhere but the office, and it's entirely too easy to opt out of that scheduled backup.

We've all had this experience. You're on a deadline, it's 10:00 a.m., and your backup window has popped up. You immediately hit "cancel," because you just can't afford the performance degradation on your PC. That scheduled-backup model really isn't an option anymore. HP has a unique approach to protecting information on desktops and PCs. Some data loss is going to be inevitable -- laptops get stolen or files are deleted -- but we don't think that means it has to be serious and expensive.

The concept behind HP Data Protector Notebook Extension is that we're trying to minimize the risk of that PC data loss, but we're also trying to minimize the burden to IT staff. The solution is to extend some of the robust backup policies from the enterprise to the client environment.

DPNE does three things. One, it's always protecting data, and it's transparent to the user. It's happening continuously, not on a fixed schedule, so there is no backup window that's popping up.

We’re protecting data no matter where the user is -- the home, the coffee shop, the airport. Whether they are online or offline, their data is being protected, and it's happening immediately. The instant that files are created or changed, data is being protected.

Continuous file protection is number one. Backup policies are centralized and automated by the IT staff. That means that data is always protected, and the IT staff can configure those policies to support their organization's particular data protection goals.

Number two, no matter where they are, users can easily recover their own data. This is a really important point. Getting back to the concept of minimizing the burden to IT staff, DPNE has a simple, single-click menu. Users can recover multiple versions of a file without ever involving IT. They don't ever have to pick up the phone and call the Help Desk. That helps keep IT costs low.

Then, also by optimizing performance, we're eliminating that desire to opt out of your scheduled backup. The process is transparent to the user. It doesn’t impact their day, because DPNE saves and transmits only the changed data. So, the impact to performance is really minimized.
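
The contrast with a fixed backup window can be pictured as an event-driven loop on the client: whenever a protected file changes, the agent immediately puts a versioned copy into a local repository and queues it for transfer, instead of waiting for a scheduled job. The sketch below illustrates that idea with simple polling and made-up folder names; a real agent would hook file-system change notifications and encrypt before transfer.

```python
import shutil
import time
from pathlib import Path

WATCHED = Path("Documents")        # hypothetical protected folder
LOCAL_REPO = Path(".local_repo")   # hypothetical local repository
POLL_SECONDS = 5

def protect_changed_files(last_seen: dict) -> None:
    """Copy any file modified since the last pass into a timestamped local version."""
    for path in WATCHED.rglob("*"):
        if not path.is_file():
            continue
        mtime = path.stat().st_mtime
        if last_seen.get(path) != mtime:
            last_seen[path] = mtime
            version_dir = LOCAL_REPO / path.relative_to(WATCHED).parent
            version_dir.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, version_dir / f"{path.name}.{int(mtime)}")
            # A real agent would also queue this version for encrypted
            # transfer to the network data vault when a connection exists.

# seen = {}
# while True:                      # continuous protection: no backup window to cancel
#     protect_changed_files(seen)
#     time.sleep(POLL_SECONDS)
```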

Gardner: What about those times when folks are offline and are no longer connected, perhaps at a customer site or at that coffee shop? What's the process then?

Local repository

Cravens: That's a good question. DPNE has a local repository on each client and we established that to store active files. Whether you're connected to the network or not, data is captured and backed up locally to this local repository. This is important for accidental deletions or changes or even managing multiple versions of a file. You're able to go to the menu, click, and restore a file from a previous version at any point in time, without ever having to call IT.

Each client is then assigned to a network repository or data vault inside the network. That holds the backup files that are transferred from the client, and that data vault uses essentially any Windows file share.

The third element is a policy server. We talked about this a little before. The policy server allows IT staff to administer the overall system from a single web interface, and that centralized administration allows them to set file protection, encryption, and data vault policies to their particular specifications.

It also provides centralized reporting. Data vault usage, agent status, agent deployments, and license issues can be tracked through the policy server.
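
Putting the three pieces together (a client agent with its local repository, a data vault on a Windows file share, and a policy server that pushes settings and collects status), the kind of policy a central administrator defines might look something like the following. The field names and values here are illustrative assumptions for the sketch, not DPNE's real configuration schema.

```python
# Hypothetical policy document a policy server might push to client agents.
# Names and values are illustrative only.
backup_policy = {
    "protected_paths": ["C:/Users/*/Documents", "C:/Users/*/Outlook/*.pst"],
    "local_repository": {"path": "C:/ProgramData/LocalRepo", "max_size_gb": 2},
    "data_vault": {
        "share": r"\\backupsrv01\datavault$",   # any Windows file share can serve as the vault
        "encrypt_in_transit": True,
        "encrypt_at_rest": True,
    },
    "transfer": {"changed_blocks_only": True, "queue_when_offline": True},
    "retention": {"versions_to_keep": 10, "days_to_keep": 90},
}

# Status each agent reports back, supporting centralized reporting on vault usage,
# agent deployment, and licensing.
agent_report = {
    "hostname": "LAPTOP-0423",
    "last_backup": "2010-04-05T14:32:00Z",
    "vault_usage_mb": 812,
    "license_ok": True,
}
```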

Gardner: I really like this idea of taking what's going on centrally in terms of a life-cycle approach to data management, storage, and whatnot. Now, extending that out to these edges, regardless of where they are, really cuts down on the duplication. We have seen instances in the past where so much money is wasted because of duplication of data. This allows for a much more streamlined, managed, and governed approach.

Cravens: Absolutely. It's filling a gap that has been out there for a while in addressing things like open file protection. This is one area where DPNE is very important. Email is a really critical application for most organizations now.

The lack of open file protection in a lot of PC backup solutions is a huge gap that we can't ignore. Doing that in a way that doesn't overwhelm the system or create a lot of duplication is the way to go. It's really good for email PST files. DPNE ensures that PST files are saved and snapped, so we always have a copy of them. That works for not just Exchange, but also for Sage, big financial applications, or MySQL. Companies are using those to build home-grown applications. It works for pretty much any open file.

Gardner: Okay, let's go to John Ferguson and learn a little bit about how this has been applied in the real world. Tell me first about Roswell Park Cancer Institute, so we have a sense of the type of organization that you are dealing with.

Finding the cure

Ferguson: Roswell Park Cancer Institute is the oldest cancer research center in the United States. We're focused on understanding, preventing, and eventually finding the cure for cancer. We're located in downtown Buffalo, NY. We have research, scientific, and educational facilities, and we also have a 125-bed hospital here.

Our researchers and scientists frequently publish major studies, reported globally, on various types of cancer and related research. A number of breakthroughs in cancer prevention and treatment have been developed here. For example, the PSA test, which is used for detecting prostate cancer, was invented here.

Gardner: Tell me about the challenges you have. It seems with all that research, a great deal of data, a lot of people are moving around between your hospital and research facilities. What was the challenge that you've been grappling with in terms of the data?

Ferguson: Well, the real challenge, as you mentioned, is that data is moving around. When you are dealing with researchers and scientists, they work on different schedules than the rest of us. When they are working, they are focused, and that might be here, off campus, at home, wherever.

They've got their notebook PCs, their data is with them, and they're running around doing their work and finding their answers. With that data moving around and not always being on the network, the potential loss of something that could be the cure for cancer is something that we take very seriously and consider very important to deal with.

Gardner: So, when you decided that this mobility issue was really something you couldn't ignore anymore, what was it that you looked for in a solution? What were some of the top requirements in terms of being able to solve this on the terms that you needed?

Ferguson: One of the big things was transparency to the user and being simple to use if they do need to use it. We were already in the process of making a decision to replace our existing overall backup solution with HP's Data Protector. So, it was just a natural thing to look at DPNE and it really fits the need terrifically.

There's total transparency to the user. Users don't even have to do anything. They're just going along, doing their work, and everything is going on in the background. And, if they need to use it, it's very intuitive and simple to use.

Gardner: Tell me about the implementation. How far in are you, and how far do you expect to take it -- the number of seats, etc.?

Ferguson: In terms of the overall Data Protector implementation, we're probably about 40 percent complete. The DPNE implementation will immediately follow that.

A good test run

Ferguson: We anticipate initially just getting our IT staff using the application and giving it a good test run. Then we'll focus on key individuals throughout the organization, researchers, the scientists, the CEO, CIO, the people with all the nice initials after their name, and get them taken care of. We'll get a full rollout after that.

Gardner: It might be a little bit premature, as you're about 40 percent in, but do you have any sense of where this is going to take you on a total cost basis for the PCs and mobile notebooks themselves, or perhaps even applying that to the larger overall lifecycle data cost?

Ferguson: I don't think I can come up with actual cost numbers, but I do know that the exposure we have to the possibility of losing critical data is enormous. You can't put a price tag on avoiding the possibility that someone who has a cure for cancer on their laptop says, "Oh, we lost it, sorry." It doesn't work that way.

Gardner: I suppose another intangible, but nonetheless powerful benefit, is this element of trust that people will be more trusting of these devices. Therefore, they'll become more productive in the way they use them, when you have given them this sense of a backup and insurance policy, if you will, on their work.

Ferguson: Absolutely. In the past, we've told people to follow best practices. Make sure that when you want to save your data, you save it on the network drive. That, of course, requires them to be on campus or connected remotely. A lot of thought has to go into that. When people are working on something, they don't think to "save it" until they're actually done with it. And DPNE provides us that version saving. You can get old versions of documents. You can keep track of them. That's the type of thing that's not normally done, but it's really important, and they don't want to lose it.

Gardner: John, do you have folks in the legal department, or intellectual property-minded folks, who have some understanding of the benefits of this system?

Ferguson: We have plenty of people in our legal department and auditors, and all kinds of federal regulations that we have to adhere to. When it comes down to keeping track of data, keeping versions, and that type of thing, it's definitely important.

Gardner: Shari, as you're listening to John, is there anything that jumps out at you about how this is being implemented that you think highlights some of the values here?

Nothing more compelling

Cravens: What jumps out is John's comment about losing a laptop where you have a researcher working on a cure for cancer. I can't think of anything that's more compelling in terms of how important it is to save the data that's out there on notebooks and laptops.

I don't think it matters how big your organization is -- small, medium, large -- a lot of that data is very valuable, and most of it is running around outside the network now. Even for an average-sized organization, they could be spending hundreds of thousands of dollars in a year that they shouldn't have to in IT support and lost productivity.

Gardner: Very good. Let me take a quick peek at the future. Most people seem to agree that the amount of data is going to continue to explode for some time. Certainly, regulations and requirements around these legal issues don't go away. John, is this problem something that, from your perspective, is going to be solved, or is it sort of an ongoing rising tide that you have to fight to keep up with?

Ferguson: When it comes to federal regulations, it always is a rising tide, but we've got a good solution that we are implementing and I think it puts us ahead of the curve.

Gardner: Shari, how about you? Do you see a trend in the future in terms of data quantity, quality, and security issues that will give us a sense of where this problem is headed?

Cravens: Absolutely. Information is continuing to explode and that's not going to stop. In addition to that, the workforce is only going to get more mobile. This problem definitely isn’t going to go away, and we need solutions that can address the flexibility and mobility of the workforce and be able to manage, as John mentioned, the increase in regulations.

Gardner: Of course, there's that old important issue about getting those costs under control at the same time.

Cravens: Absolutely. Going back to the possibility that there are organizations spending hundreds of thousands of dollars now that they don't need to, with HP DPNE, they can actually avoid that.

Gardner: One thing I also want to hit on, Shari, is how you get started. If folks are interested in maybe doing a trial or trying this out, what are some steps to get some hands-on experience?

Simple implementation

Cravens: HP Data Protector is very simple to implement. It snaps into your existing infrastructure. You don’t need any specialized hardware. All you need is a Windows machine for the policy server and some disk space for the data vault. You can download a 60-day trial version from hp.com. It's a full-featured version, and you can work with that.

If you have a highly complex multi-site organization, then you might want to employ the services of HP’s Backup and Recovery Fast Track Services for Data Protector. They can help get a more complex solution up and running quickly and reduce the impact on your IT staff just that much sooner.

Gardner: We've been looking at a use case for HP Data Protector Notebook Extension software and at how backup and recovery software has evolved. And, we have a better understanding of this transparency and reliability. I particularly liked that integration with the backend policies and governance across the lifecycle of data. I think that's really going to be the big cost saver over time.

I want to thank our guests who are joining us in our discussion. We are here with Shari Cravens, Product Marketing Manager for HP Data Protection. Thank you so much, Shari.

Cravens: Thank you.

Gardner: And John Ferguson. I appreciate your input. He is the Network Systems Specialist at Roswell Park Cancer Institute in Buffalo. Thank you, sir.

Ferguson: Thank you. It's been a pleasure.

Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Transcript of a sponsored BriefingsDirect podcast on how data protection products and services can protect against costly data loss with less hassle for users. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Gain more information on HP Data Protection Notebook Extension. Follow on Twitter.
Access a Webcast with IDC's Laura DuBois on Avoiding Risk and Improving Productivity on PCs and Laptops.
