Tuesday, June 15, 2010

HP Data Protector, a Case Study on Scale and Completeness for Total Enterprise Data Backup and Recovery

Transcript of a BriefingsDirect podcast from the HP Software Universe Conference in Washington, DC on backing up a growing volume of enterprise data using HP Data Protector.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the HP Software Universe 2010 Conference in Washington, DC. We're here the week of June 14, 2010 to explore some major enterprise software and solutions trends and innovations making news across HP's ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP-sponsored Software Universe Live Discussions.

Our topic for this conversation focuses on the challenges and progress in conducting massive and comprehensive backups of enterprise live data, applications, and systems. We'll take a look at how HP Data Protector is managing and safeguarding petabytes of storage per week across HP's next-generation data centers.

The case study sheds light on how enterprises can consolidate their storage and backup efforts to improve response and recovery times, while also reducing total costs.

To learn more about high-performance enterprise scale storage and reliable backup, please join me in welcoming Lowell Dale, a technical architect in HP's IT organization. Welcome to BriefingsDirect, Lowell.

Lowell Dale: Thank you, Dana.

Gardner: Lowell, tell me a little bit about the challenges that we're now facing. It seems that we have ever more storage and requirements around compliance and regulations, as well as the need to cut cost. Maybe you could just paint a picture for me of the environment that your storage and backup efforts are involved with.

Dale: One of the things that everyone is dealing with these days is pretty common and that's the growth of data. Although we have a lot of technologies out there that are evolving -- virtualization and the globalization effect with running business and commerce across the globe -- what we're dealing with on the backup and recovery side is an aggregate amount of data that's just growing year after year.

Dale: Some of the things that we're running into are the effects of consolidation. For example, we end up trying to back up databases that are getting larger and larger. Some of the applications and servers that consolidate will end up being more of a challenge for some of the services, such as backup and recovery. It's pretty common across the industry.

In our environment, we're running about 93,000-95,000 backups per week, with an aggregate data volume of about 4 petabytes of backup data and 53,000 run-time hours. That's about 17,000 servers' worth of backup across 14 petabytes of storage.

Gardner: Tell me a bit about applications. Is this a comprehensive portfolio? Do you do triage and take some apps and not others? How do you manage what to do with them and when?

Slew of applications

Dale: It's pretty much every application that HP's business is run upon. It doesn’t matter if it's enterprise warehousing or data warehousing or if it's internal things like payroll or web-facing front-ends like hp.com. It's the whole slew of applications that we have to manage.

Gardner: Tell me what the majority of these applications consist of.

Dale: Some of the larger data warehouses we have are built upon SAP and Oracle. You've got SQL databases and Microsoft Exchange. There are all kinds of web front-ends, whether it's Microsoft IIS or any type of Apache. There are things like SharePoint Portal Services, of course, that have database back-ends that we back up as well. Those are just a few that come to mind.

Gardner: What are the major storage technologies that you are focusing on that you are directing at this fairly massive and distributed problem?

Dale: The storage technologies are managed across two different teams. We have a storage-focused team that manages the storage technologies. They're currently using HP Surestore XP Disk Array and EVA as well. We have our Fibre Channel networks in front of those. In the team that I work on, we're responsible for the backup and recovery of the data on that storage infrastructure.

We're using the Virtual Library Systems that HP manufactures as well as the Enterprise System Libraries (ESL). Those are two predominant storage technologies for getting data to the data protection pool.

Gardner: One of the other trends, I suppose, nowadays is that backup and recovery cycles are happening more frequently. Do you have a policy or a certain frequency that you are focused on, and is that changing?

Dale: That's an interesting question, because often times, you'll see some induced behavior. For example, we back up archive logs for databases, and often, we'll see a large increase in those. As the volume and transactional growth goes up, you’ll see the transactional log volume and the archive log volume backups increase, because there's only so much disk space that they can house those logs in.

You can say the same thing about any transactional type of application, whether it's messaging -- Exchange, with its database and transactional logs -- SQL, or Oracle.

So, we see an increase in backup frequency around logs to not only mitigate disk space constraints but to also mitigate our RTO, or RPO I should say, and how much data they can afford to lose if something should occur like logical corruption or something akin to that.
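
To make the trade-off Dale describes concrete, here is a rough back-of-the-envelope sketch of how log-backup frequency falls out of log disk space and RPO. The figures and the helper function are illustrative assumptions, not HP's actual numbers or tooling.

```python
# Rough sketch: how often to back up transaction/archive logs so that the log
# area never fills up and the recovery point objective (RPO) is never exceeded.
# All figures are illustrative assumptions, not HP's.

def log_backup_interval_hours(log_rate_gb_per_hr, log_disk_gb,
                              rpo_hours, safety_factor=0.7):
    """Return the longest safe interval between log backups, in hours."""
    # Interval at which the log area would reach its safety threshold.
    disk_limit = (log_disk_gb * safety_factor) / log_rate_gb_per_hr
    # The interval can never exceed the RPO: logs not yet backed up are at risk.
    return min(disk_limit, rpo_hours)

if __name__ == "__main__":
    # Example: 120 GB of archive logs per hour, 500 GB of log disk, 4-hour RPO.
    interval = log_backup_interval_hours(120, 500, 4)
    print(f"Back up logs at least every {interval:.1f} hours")
    # Doubling the transaction volume halves the disk-driven interval, which is
    # the effect described above: more volume means more frequent log backups.
```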

Gardner: Let's take a step back and focus on the historical lead-up to this current situation. It's clear that HP has had a lot of mergers and acquisitions over the past 10 years or so. That must have involved a lot of different systems and a lot of distribution of redundancy. How did you start working through that to get to a more comprehensive approach that you are now using?

Dale: Well, if I understand your question, you're talking about the effect of us taking on additional IT in consolidating, or are you talking about from product standpoint as well?

Gardner: No, mostly on your internal efforts. I know there's been a lot of product activities as well, but let's focus on how you manage your own systems first.

Simplify and reduce

Dale: One of the things that we have to do, at the scope and size that we manage, is simplify and reduce the amount of infrastructure -- really, the number of choices and configurations in play in our environment. Obviously, you won't find the complete set or suite of HP products in the portfolio that we manage internally. We have to minimize how many different products we have.

One of the first things we had to do was simplify, so that we could scale to the size and scope that we have to manage. You have to standardize and simplify configuration and architecture as much as possible, so that you can continue to grow at scale.

Gardner: Lowell, what were some of the major challenges that you faced with those older backup systems? Tell me a bit more about this consolidation journey?

Dale: That's a good question as well. Some of the newer technologies we're adopting, such as virtual tape libraries, were among the things we had to figure out. What was the use case scenario for virtual tape? It's not easy to switch from old technology to something new and go 100 percent at it. So we had to take a step-wise approach to how we adopted the virtual tape library and what we used it for.

We first started with a minimal set of use cases and, little by little, we learned what it was really good for. We've evolved the use case even more, and that will carry forward into our next-generation design. That's just one example.

Gardner: And that virtual tape is to replace physical tape. Is that right?

Dale: Yes, really to supplement physical tape. We're still using physical tape for certain scenarios where we need the data mobility to move applications or enable the migration of applications and/or data between disparate geographies. We'll facilitate that in some cases.

Gardner: You mentioned a little earlier on the whole issue of virtualization. You're servicing quite a bit more of that across the board, not just with applications, but storage and networks even.

Tell me a bit more about the issues of virtualization and how that provided a challenge to you, as you moved to these more consolidated and comprehensive storage and backup approaches?

Dale: One of the things with virtualization is that we're seeing the same effect we saw with utility storage. We made storage much cheaper than before and easy to bring up, and it had the "If you build it, they will come" effect. So, one of the things that we may end up seeing is an increase in the number of operating systems (OSs) or virtual machines (VMs) out there. That's the opposite of the consolidation effect, where you have, say, 10 one-terabyte databases consolidated into one to reduce the overhead.

Scheduling overhead

With VMs increasing and the use cases for virtualization increasing, one of the challenges is scheduling overhead tasks -- anything from backup to indexing to virus scanning -- and finding out where the limitations and bottlenecks are across the entire ecosystem, so we know when to run certain overhead without impacting production.

That’s one of the things that’s evolving. We are not there yet, but obviously we have to figure out how to get the data to the data protection pool. With virtualization, it just makes it a little bit more interesting.
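
The scheduling problem Dale describes can be sketched in a few lines: fit overhead jobs into an off-peak window so they don't stack up on top of production or on each other. The job names, durations, and the simple greedy strategy below are illustrative assumptions, not HP's actual scheduler.

```python
# Minimal sketch of scheduling overhead jobs (backup, indexing, virus scans)
# into an off-peak window on a shared virtualization host, one at a time.
# Job names and durations are illustrative assumptions.

def schedule_overhead(jobs, window_start_hr, window_end_hr):
    """Greedy, sequential placement of (name, duration_hr) jobs in the window."""
    placed, skipped = [], []
    cursor = window_start_hr
    # Longest jobs first, so the biggest consumers get a slot while room remains.
    for name, duration in sorted(jobs, key=lambda j: j[1], reverse=True):
        if cursor + duration <= window_end_hr:
            placed.append((name, cursor, cursor + duration))
            cursor += duration
        else:
            skipped.append(name)   # would spill into production hours
    return placed, skipped

if __name__ == "__main__":
    jobs = [("vm-backup", 3.0), ("content-index", 1.5), ("virus-scan", 2.0)]
    # Off-peak window: 10 p.m. to 6 a.m. (expressed as 22 to 30 hours).
    placed, skipped = schedule_overhead(jobs, window_start_hr=22, window_end_hr=30)
    for name, start, end in placed:
        print(f"{name}: {start % 24:04.1f}h -> {end % 24:04.1f}h")
    print("deferred:", skipped or "none")
```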

Gardner: Lowell, given that your target is moving -- as you say, you're a fast-growing company and the data is exploding -- how do you roll out something that is comprehensive and consolidating, when at the same time your target is a moving object in terms of scale and growth?

Dale: I talked previously about how we have to standardize and simplify the architecture and the configuration, so that when it comes time to build that out, we can do it in mass.

For example, quite a few years ago, it used to take us quite a while to bring up a backup infrastructure to meet a service need. Nowadays, we can bring up a fairly large-scope environment, like an entire data center, within a matter of months, if not weeks. The process from there moves toward how we facilitate setting up backup policies and schedules, and even that's evolving.

Right now, we're looking at ideas and ways to automate that, so that when a server plugs in, it will basically configure itself. We're not there yet, but we are looking at that. Some of the things we've improved upon are how we build out quickly and then turn around and set up the configurations, as business demand is converted into backup demand, storage demand, and network demand. We've improved quite a bit on that front.

Gardner: And what version of Data Protector are you using now, and what are some of the more interesting or impactful features that are part of this latest release?

Dale: Data Protector 6.11 is the current release that we are running and deploying in our next generation. Some of the features with that release that are very helpful to us have to do with checkpoint recoveries.

For example, if a backup or restore should fail, we have the ability with automation to go out and have it pick up where it left off. This has helped us in multiple ways. If you have a bunch of data that you need to get backed up, you don't want to start over, because that's going to impact the next minute or the next hour of demand.

Not only that, but it's also helped us keep our backup success rates up and our tickets down. Instead of raising a ticket for somebody to go look at right away, it will attempt a checkpoint recovery a few times. Only after so many attempts do we bring light to the issue, so that someone has to look at it.
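
A minimal sketch of the retry policy Dale describes -- resume from the checkpoint a few times before anyone sees a ticket. The helper names (run_backup_from_checkpoint, open_ticket) are hypothetical placeholders, not actual Data Protector calls.

```python
# Sketch of the automation described above: retry a failed backup from its
# checkpoint a few times, and only open a ticket once the retries are exhausted.
# run_backup_from_checkpoint() and open_ticket() are hypothetical placeholders.
import random

def protect(session_id, run_backup_from_checkpoint, open_ticket, max_attempts=3):
    """Return True if the backup session eventually succeeds."""
    for _ in range(max_attempts):
        if run_backup_from_checkpoint(session_id):
            return True   # picked up where it left off and finished
    # Only now does the failure surface as an incident for a human.
    open_ticket(f"Backup {session_id} still failing after {max_attempts} checkpoint restarts")
    return False

if __name__ == "__main__":
    # Stub backup that fails roughly half the time, just to exercise the policy.
    ok = protect("DB-042",
                 run_backup_from_checkpoint=lambda s: random.random() > 0.5,
                 open_ticket=print)
    print("succeeded" if ok else "escalated to a ticket")
```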

Gardner: With this emphasis on automation over the manual, tell us about the impact that’s had on your labor issues, and if you’ve been able to take people off of these manual processes and move them into some, perhaps more productive efforts.

Raising service level

Dale: What it's enabled us to do is really bring our service level up. Not only that, but we're able to focus on things that we weren't able to focus on before. One of those things is the backup success rate.

Being able to bring that backup success rate up is key. Some of the things that we've done with architecture and the product -- just different ways of running the process -- have helped with that backup success rate.

The other thing that it's helped us do is that we’ve got a team now, which we didn’t have before, that’s just focused on analytics, looking at events before they become incidents.

I'll use an analogy of a car that's about to break down, and the check-engine light comes on. We're able to go and look at that before the car breaks down. So, we're getting a little bit further ahead. We're going further upstream to detect issues before they actually impact our backup success rate or SLAs. Those are just a couple of examples.

Gardner: How many people does it take to run these petabytes of recovery and backup through your next-generation data center? Just give us a sense of the manpower.

Dale: On backup and recovery and the media-management side, we've got about 25 people total, spread between engineering and operational activities. Basically, their focus is on backup and recovery and media management.

Gardner: Let’s look at some examples. Can you describe a time when you’ve needed to do very quick or even precise recovery, and how did this overall architectural approach and consolidation efforts help you on that?

Dale: We've had several cases where we had to recover data by going back to the data protection pool. That happens monthly, in fact. We do a certain number of restores per month. Some of those are to mitigate data loss from logical corruption or accidental deletion.

But, we also find the service being used to do database refreshes. So, we’ll have these large databases that they need to make a copy of from production. They end up getting copied over to development or test.

The current technology we're using -- the current configuration, with the virtual tape libraries and the archive logs -- has really enabled us to get the data backed up quickly and restored quickly. That's been exemplified several times with either database copies or database recoveries, when those types of events do occur.

Gardner: I should think these are some very big deals, when you can deliver the recovered data back to your constituents, to your users. That probably makes their day.

Dale: Oh yes, it does save the bacon at the end of the day.

Gardner: Perhaps you could outline, in your thinking, the top handful of important challenges that Data Protector addresses for you at HP IT. What are the really important paybacks that you're getting?

Object copy

Dale: I've mentioned checkpoint recovery. There are also things we've been able to do with object copy that have allowed us to balance capacity between our virtual tape libraries and our physical tape libraries. In our first-generation design, we had only enough capacity on the virtual libraries to hold a subset of the total data.

Data Protector has a very powerful feature called object copy. That allowed us to maintain our retention of data across two different products or technologies. So, object copy was another one that was very powerful.

There are also a couple of things around the ability to do integration backups. In the past, we were using split-mirror backups, a technology that was very expensive in terms of the disk space it used on our XPs. Now, we're using the online integrations for Oracle and SQL, and we're also getting ready to add SharePoint and Microsoft Exchange.

Now, we're able to do online backups of these databases. Some of them are upwards of 23 terabytes. We're able to do that without any additional disk space and we're able to back that up without taking down the environment or having any downtime. That’s another thing that’s been very helpful with Data Protector.

Gardner: Lowell, before we wrap up, let's take a look into the future. Where do you see the trends pushing this now? I think we could safely say that there's going to still be more data coming down the pike. Are there any trends around cloud computing, mobile business intelligence, warehousing efforts, or real-time analysis that will have an impact on some of these products and processes?

Dale: With some of the evolving technologies and some of the things around cloud computing, at the end of the day, we'll still need to mitigate downtime, data loss, logical corruption, or anything that would jeopardize that business asset.

With cloud computing, if we're using the current technology today with peak base backup, we still have to get the data copied over to a data protection pool. It would still be the same approach of trying to get at that data. To keep up with these emerging technologies, maybe we approach data protection a little bit differently and spread the load out, so that it's somewhat transparent.

Some of the things we need to see, and may start seeing in the industry, are load management and how loads from different types of technologies talk to each other. I mentioned virtualization earlier. Some of the tools for content-awareness and indexing have overhead associated with them.

I think you're going to start seeing these portfolio products talking to each other. They can schedule when to run their overhead function, so that they stay out of the way of production. It’s just a couple of challenges for us.

We're looking at new configurations and designs that consolidate our environment. We're looking at reducing our environment by 50-75 percent just by redesigning our architecture and making available more resources that were tied up before. That's one goal that we're working on right now. We're deploying that design today.

And then, there's configuration and capacity management. This stuff is still evolving, so that we can manage the service level that we have today, keep that service level up, bring the capital down, and keep the people required to manage it down as well.

Gardner: Great. I'm afraid we're out of time. We've been focusing on the challenges and progress of conducting massive and comprehensive backups of enterprise-wide data and applications and systems. We've been joined by Lowell Dale, a technical architect in HP's IT organization. Thanks so much, Lowell.

Dale: Thank you, Dana.

Gardner: And, thanks to our audience for joining us for this special BriefingsDirect podcast coming to you from the HP Software Universe 2010 Conference in Washington DC. Look for other podcasts from this HP event on the hp.com website under HP Software Universe Live podcast, as well as through the BriefingsDirect Network.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of HP-sponsored Software Universe live discussions. Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast from the HP Software Universe Conference in Washington, DC on backing up a growing volume of enterprise data using HP Data Protector. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Delta Air Lines Improves Customer Self-Service Apps Quickly Using Quality Assurance Tools

Transcript of a BriefingsDirect podcast with Delta Air Lines development leaders on gaining visibility into application testing to improve customer self-service experience.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington, D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions and I'll be your host throughout this series of HP sponsored Software Universe Live discussions.

Our customer case study today focuses on Delta Air Lines and the use of HP quality assurance products for requirements management as well as mapping the test cases and moving into full production. We are here with David Moses, Manager of Quality Assurance for Delta.com and its self service efforts. Thanks for joining us, David.

David Moses: Thank you, very much. Glad to be here.

Gardner: We're also here with John Bell, a Senior Test Engineer at Delta. Welcome John.

John Bell: Thank you.

Gardner: Tell me about the market drivers. What is the problem set when it comes to managing the development process around requirements and then quality and test out through your production? What are the problems that you're generally facing these days?

Moses: Generally, the airline industry, along with a lot of other industries I'm sure, is highly competitive. We have a very, very quick, fast-to-market type of environment, where we've got to get products out to our customers. We have a lot of innovation being worked on in the industry and a lot of competing channels outside the airline industry that would also like to get at the same customer set. So, it's very important to be able to deliver the best products you can as quickly as possible. "Speed Wins" is our motto.

Gardner: What is it about the use of some of the quality assurance products that helps you pull off that dual trick of speed, but also reliability and high quality?

Moses: The one thing I really like about the HP Quality Center suite especially is that your entire software development cycle can live within that tool. Whenever you're using different tools to do different things, it becomes a little bit more difficult to get the data from one point to another. It becomes a little bit more difficult to pull reports and figure out where you can improve.

Data in one place

What you really want to do is get all your data in one place, and Quality Center allows you to do that. We put our requirements in at the beginning. By having those in the system, we can then map our test cases to them, after we build those in the testing phase.

Not only do we have the QA engineers working on it in Quality Center, we also have the business analysts working on it, whenever they're doing the requirements. That also helps the two groups work together a bit more closely.

Gardner: Do you have anything to add to that, John?

Bell: The one thing that's been very helpful is the way that the Quality Center tabs are set up. It allows us to follow a specific process, looking at the release level all the way down to the actual cycles, and that allows us to manage it.

It's very nice that Quality Center has it all tied into one unit. So, as we go through our processes, we're able to go from tab to tab and we know that all of that information is interconnected. We can ultimately trace a defect back to a specific cycle or a specific test case, all the way back to our requirement. So, the tool is very helpful in keeping all of the information in one area, while still maintaining the consistent process.
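
What Bell describes is essentially a traceability chain. The sketch below is not the Quality Center API -- just a hedged, plain-Python illustration, with hypothetical IDs, of how a defect found in a cycle can be walked back through its test case to the requirement it affects.

```python
# Not the Quality Center API -- a plain-Python illustration of the traceability
# described above: requirements linked to test cases, defects linked to cycles,
# and a defect traceable back to the originating requirement.
from dataclasses import dataclass

@dataclass
class Requirement:
    req_id: str
    text: str

@dataclass
class TestCase:
    case_id: str
    covers: list            # Requirement objects this test case verifies

@dataclass
class Defect:
    defect_id: str
    found_by: TestCase
    cycle: str              # e.g. "Release 4.2 / Cycle 1" (hypothetical label)

def trace_defect(defect):
    """Walk a defect back to the requirements it ultimately affects."""
    return [(defect.cycle, defect.found_by.case_id, r.req_id)
            for r in defect.found_by.covers]

if __name__ == "__main__":
    req = Requirement("REQ-101", "Kiosk check-in completes in under 60 seconds")
    case = TestCase("TC-458", covers=[req])
    bug = Defect("DEF-77", found_by=case, cycle="Release 4.2 / Cycle 1")
    print(trace_defect(bug))   # [('Release 4.2 / Cycle 1', 'TC-458', 'REQ-101')]
```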

Gardner: Can you give us a sense of how much activity you process or how many applications there are -- the size of the workload you’ve got these days?

Bell: There is a lot. I look back to metrics we pulled for 2008. We were doing fewer than 70 projects. By 2009, after we had fully integrated Quality Center, we did over 129 projects. That also included a lot of extra work, which you may have heard about us doing related to a merger.

Gardner: With that increase in the number of applications that you're managing and dealing with, did you have any metrics in terms of the quality that you were able to manage, even though that volume increased so dramatically?

Moses: We were able to do that. That's one of the nice things. You can use your dashboard in Quality Center to pull those metrics up and see those reports. You can point out the projects that were your most troublesome children and look at the projects where you did really well.

Best-case scenario

You can go back and do a best-case scenario, and see what you did great and what you could improve. Having that view into it really helps. It’s also beneficial, whenever you have another project similar to one that was such an issue. You can have a heads up to say, "Okay, we need to treat this one differently this time."

Gardner: It’s the visibility to have repeatability when things go well, and, I suppose, visibility to avoid repeatability when things didn't go well.

Moses: Exactly.

Gardner: Let’s take a look at some of the innovation you've done. Tell me a bit about what you've worked with in terms of Quality Center in some of your own integration or tweaking?

Bell: One thing that we've been able to do with Quality Center is connect it with Quick Test Pro, and we do have Quality Center 10, as well as Quick Test Pro 10. We've been able to build our automation and store those in the Test Plan tab of Quality Center.

This has really been beneficial for us when we go into our test labs and build our test sets. We're able to take all of these automated pieces and combine them into one test set. What this has allowed us to do is run all of our automation as one test set. We've been able to run those on a remote box. It's taken our regression test time from one person for five days down to zero people and approximately an hour and 45 minutes.

Also, with the Test Lab tab, we're able to schedule these test sets to run during off hours. A lot of times, our automation for things such as regression or sanity tests can run during off hours. We schedule those to run at perhaps 6 o'clock in the morning. Then, when we come in at 8 o'clock in the morning, all of those tests have already run.

That frees up our testers to be doing more of the manual functional testing and that allows us to know that we have complete coverage with the automation, as well as our sanity pieces. So, that's a unique way that we've used Quality Center to help manage that and to reduce our testing times by over 50 percent.

Gardner: Thank you, John. David, there have been some ways in which your larger goals as a business have been either improved upon or perhaps better aligned with the whole development process. I guess I'm looking for whether there is some payback here in terms of your larger business goals?

Moses: It definitely is. It goes back to speed to market with new functionality and making the customer's experience better. In all of our self-service products, it's very important that we test from the customers’ point of view.

We deliver those products that make it easier for them to use our services. That's one of the things that always sticks in my mind, when I'm at an airport, and I'm watching people use the kiosk. That's one of the things we do. We bring our people out to the airports and we watch our customers use our products, so we get that inside view of what's going on with them.

A lot on the line

I'll see people hesitantly reaching out to hit a button. Their hand may be shaking. It could be an elderly person. It could be a person with a lot on the line. Say it’s somebody taking their family on vacation. It's the only vacation they can afford to go on, and they’ve got a lot of investment into that flight to get there and also to get back home. Really there's a lot on the line for them.

A lot of people don’t know a lot about the airline industry and they don’t realize that it's okay if they hit the wrong button. It's really easy to start over. But, sometimes they would be literally shaking, when they reach out to hit the button. We want to make sure that they have a good comfort level. We want to make sure they have the best experience they could possibly have. And, the faster we can deliver products to them, that make that experience real for them, the better.

Gardner: I should think the whole notion of self-service is hugely important. It's important for the customer to be able to move through and do things their way, and I suppose there are some great cost savings and efficiencies on your end as well.

Dave, could you just highlight a little bit how the whole notion of self-service gets embedded into applications, and how some of the quality assurance tools and processes have helped there?

Moses: I go back to any time you have to give up when you're having an issue with a product while you're online. You're on a website, and you have to call customer service. I think most people just sort of feel defeated at that point. People like to handle things themselves. You need a channel there for the customer to go to, if they need additional help.

So many clients and customers these days are so tech savvy. They know the industry they are in, and they know the tools they're working with, especially frequent flyers. I'd venture to say that most frequent flyers can hit the airport, check-in, get through security, and get to their plane really quickly. They just know their airports and they know everything they need to know about their flight, because this is where they live part of their lives.

You don't want to make them wait in line. You don't want to make them wait on a phone tree, when they make a phone call. You want them to be able to walk into the airport, hit a couple of buttons, get through security, and get to their gate.

By offering these types of products to the customers, you give them the best of both worlds. You give them a fast pass to check in. You give them a fast pass to book. But, you can also give the less-experienced customer an easy-to-understand path to do what they need as well.

Gardner: And, to get those business benefits, those customer loyalty benefits, is really a function of good software development overall, isn't it?

Moses: Exactly. You have to give the customer the right tools that they want to get the job done for them.

Gardner: For other enterprises that perhaps are going to be working toward a higher degree of quality in their software, but are probably also interested in reducing the time to develop and time to value, do you have any suggestions, now that you've gone through this, that you might offer to them?

Incremental approach

Bell: In using Quality Center, we've taken an incremental approach. Initially, we just used the Defects tab of Quality Center. Then, we slowly began to add the Requirements piece, then Test Cases, and ultimately the Releases and Cycles.

One thing that we've found to be very beneficial with Quality Center is that it shows the development organization that this just isn't a QA tool that a QA team uses. What we've been able to do by bringing the requirements piece into it and by bringing the defects and other parts of it together, is bring the whole team on board to using a common tool.

In the past, a lot of people have always thought of Quality Center as just a little tool that the QA people use in the corner and nobody else needs to be aware of. Now, we have our business analysts, project managers, and developers, as well as the QA team and even managers, because each person can get a different view of different information.

From the Dashboard, your managers can look at trends and what's coming through the overall development lifecycle. Your project managers can be very involved in pulling the number of defects and seeing which ones are still outstanding and how critical they are. The developers can be involved by entering information on defects when those issues have been resolved.

We've found that Quality Center is actually a tool that has drawn together all of the teams. They're all using a common interface, and they all start to recognize the importance of tying all of this together, so that everyone can get a view as to what's going on throughout the whole lifecycle.

Moses: John hits on a really good point there. You have to realize the importance of it, and we did a long time ago. We've realized the importance of automating and we've realized the importance of having multiple groups using the same tool.

In all honesty, we were just miserable in our own history of trying to get those to work. You really take certain shots at it. For the past eight years, if we can go back that far, we've been using Quality Center tools, going back to TestDirector, just trying to get things automated using the tools we had at the time.

The one thing that we never actually did was dedicate the resources. It's not just a tool. There are people there too. There are processes. There are concepts you're going to have to get in your head to get this to work, but you have to be willing to buy in by having the people resources dedicated to building the test scripts. Then, you're not done. You've got to maintain them. That's where most people fall short, and that's where we fell short for quite some time.

Once we were able to finally dedicate the people to the maintenance of these scripts to keep them active and running, that's where we got a win. If you look at a web site these days, it's following one of two models. You either have a release schedule, that’s a more static site, or you have a highly dynamic site that's always changing and always throwing out improvements.

We fit into that "Speed Wins" model, getting the product out for the customers and improving the experience as often as possible. So, we're a highly dynamic site. We'll break up to 20 percent of all of our automated test scripts every week. That's a lot of maintenance, even though we're using a lot of reusable code. You have to have those resources dedicated to keep that going.

Gardner: Well, I appreciate your time. We've been talking about the quality assurance process and the use of some HP tools. We've been learning about experiences from Delta Air Lines development executives. I want to thank our guests today, David Moses, Manager of Quality Assurance for Delta.com in the self-service function there. Thank you, David.

Moses: Thank you, very much.

Gardner: We've also been joined by John Bell, Senior Test Engineer there at Delta Air Lines. Thanks to you too, John.

Bell: It's been a pleasure.

Gardner: And, thanks to our audience for joining us for this special BriefingsDirect podcast coming to you from the HP Software Universe 2010 conference in Washington, DC.

Look for other podcasts from this HP event on the hp.com website, as well as via the BriefingsDirect Network.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of Software Universe Live Discussions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast with Delta Air Lines development leaders on gaining visibility into application testing to improve customer self-service experience. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

McKesson Shows Bringing Testing Tools on the Road Improves Speed to Market and Customer Satisfaction

Transcript of a BriefingsDirect podcast from the HP Software Universe 2010 Conference in Washington, DC on field-testing software installations using HP Performance Center products.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington, D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP sponsored Software Universe Live discussions.

Our customer case-study today focuses on McKesson Corp., a provider of certified healthcare information technology, including electronic health records, medical billing, and claims management software. McKesson is a user of HP’s project-based performance testing products used to make sure that applications perform in the field as intended throughout their lifecycle.

To learn more about McKesson’s innovative use of quality assurance software, please join me in welcoming Todd Eaton, Director of Application Lifecycle Management Tools in the CTO’s office at McKesson. Welcome to the show, Todd.

Todd Eaton: Thank you.

Gardner: Todd, tell me a little bit about what's going on in the market that is making the performance-based testing, particularly onsite, such an important issue for you.

Eaton: Well, looking at McKesson’s businesses, one of the things that we do is provide software for sale for various healthcare providers. With the current federal government regulations that are coming out and some of these newer initiatives that are planned by the federal government, these providers are looking for tools to help them do better healthcare throughout their enterprises.

With that in mind, they're looking to add functionality, they're looking to add systems, and they look to McKesson, as the leader in healthcare, to provide those solutions for them. With that in mind, our group works with the various R&D organizations within McKesson, to help them develop software for the needs of those customers.

Gardner: And what is it about performance-based testing that is so important now? We've certainly had lots of opportunity to trial things in labs and create testbeds. What is it about the real-world delivery that's important?

Eaton: It's one thing to test within McKesson. It's another thing when you test out at the customer site, and that's the main driver of this new innovation that we're partnering with HP on.

When we build an application and sell that to our customers, they can take that application, bring it into their own ecosystem, into their own data center and install it onto their own hardware.

Controlled testing

The testing that we do in our labs is a little more controlled. We have access to HP and other vendors with their state-of-the-art equipment. We come up with our own set of standards, but when our applications go out and get put into those hospitals, we want to ensure that they run at the same speed and with the same performance at the customer's site that we experience in our controlled environment. So, being able to test on their equipment is very important for us.

Gardner: And it's I suppose difficult for you to anticipate exactly what you're going to encounter, until you're actually in that data center?

Eaton: Exactly. Just knowing how many different healthcare providers there are out there, you could imagine all the different hardware platforms, different infrastructures, and the needs or infrastructure items that they may have in their data centers.

Gardner: This isn’t just a function of getting set up, but there's a whole life-cycle of updates, patches, improvements, and increased functionality across the application set. Is this something that you can do over a period of time?

Eaton: Yes, and another very important thing is using their data. The hospitals themselves will have copies of their production data sets that they keep control of. There are strict regulations. That kind of data cannot leave their premises. Being able to test using the large amount of data or the large volume of data that they will have onsite is very crucial to testing our applications.

Gardner: Todd, tell me the story behind gaining this capability of that performance-based testing onsite -- how did you approach it, how long has it been in the making, and maybe a little bit about what you’re encountering?

Eaton: When we started out, we had some discussion with some of the R&D groups internally about our performance testing. My group actually provides a performance-testing service. We go out to the various groups, and we’re doing the testing.

We always look to find out what we can do better. We're always doing lessons-learned exercises and things like that and talking with these various groups. We found that, even though we did a very good job of performance testing internally, we were still finding defects and performance issues out at the site, when we brought that software out and installed it in the customer's data center.

After further investigation, it became apparent to us that we weren’t able to replicate all those different environments in our data center. It’s just too big of a task.

The next logical thing to do was to take the testing capabilities that we had and bring it all out on the road. We have these different services teams that go out to install software. We could go along with them and bring the powerful tools that we use with HP into those data centers and do the exact same testing that we did, and make sure that our applications were running as expected on their environments.

Gardner: Getting it right the first time is always one of the most important things for any business activity. Any kind of failure along the way is always going to cost more and perhaps even jeopardize the relationship with the customer.

Speed to market

Eaton: Yeah, it jeopardizes the relationship with the customer, but one of the things that we also drive is speed to market. We want to make sure that our solutions get out there as fast as possible, so that we can help those providers and those healthcare entities in giving the best patient care that they can.

Gardner: What was the biggest hurdle in being able to, as you say, bring the testing capability out to the field. What were some of the hang-ups in order to accomplish that?

Eaton: Well, the tool that we use primarily within McKesson is Performance Center, and Performance Center is an enterprise-based application. It’s usually kept where we have multiple controllers, and we have multiple groups using those, but it resides within our network.

So, the biggest hurdle was how to take that powerful tool and bring it out to these sites? So, we went back to our HP rep, and said, "Here’s our challenge. This is what we’ve got. We don’t really see anything where you have an offering in that space. What can you do for us?"

Gardner: How far and wide have you been able to accomplish this? Are you doing it in terms of numbers of facilities, in what kind of organizations?

Eaton: Right now we have it across the board in multiple applications. McKesson develops numerous applications in the healthcare space, and we’ve used those across the board. Currently, we have two engagements going on simultaneously with two different hospitals, testing two different groups of applications, and even the application themselves.

I've got one site that's using it for 26 different applications and another that's using it for five. We've got two teams going out there, one from my group and one from one of the internal R&D groups, assisting the customer and testing the applications on their equipment.

Gardner: From these experiences so far, are there metrics of success, paybacks, not only for you and McKesson, but also for the providers that you service?

Eaton: The first couple of times we did this, we found that we were able to reduce performance defects dramatically -- something like 40-50 percent right off the bat. Some of the timings we had seen internally seemed to be fine, well within SLAs, but as soon as we got out to a site and onto different hardware configurations, it took some application tuning to get them back down. We were finding 90 percent improvements with the help of continual testing and performance tweaks.

Items like that are just so powerful when you bring them out to the various customers and can say, "If you engage us and we can do this testing for you, we can make sure that those applications will run the way that you want them to."

Gardner: How about for your development efficiency? Are you learning some lessons on the road that you wouldn't have had before, that you can now bring into the next rev? Is there a feedback loop of sorts?

Powerful feedback

Eaton: Yes. It’s a pretty powerful one back to our R&D groups, because getting back to that data scenario, the volume and types of data that the customers have can be unexpected. The way customers use systems, while it works perfectly fine, is not one of the use cases that is normally found in some applications, and you get different results.

So, finding them out in the field and then being able to bring those back to our R&D groups and say, "This is what we’re seeing out in the field and this is how people are using it," gives them a better insight and makes them able to modify their code to fit those use cases better.

Gardner: Todd, is there any advice that you would give to those considering doing this, that is to say, taking their performance testing out on the road, closer to the actual site where these applications are going to reside?

Eaton: The main one is to work with your HP rep on what they have available for this. We took a product that everybody is familiar with, LoadRunner, and tweaked it so it became portable. The HP reps know a lot more about how they packaged that up and what’s best for different customers based on their needs. Working with a rep would be a big help in trying to roll this out to various groups.

Gardner: Okay, great. We’ve been learning about how McKesson is bringing performance-based testing products out to their customers’ locations and gaining a feedback capability as well as reducing time to market and making the quality of those applications near 100 percent right from the start.

I want to thank our guest. We’ve been joined by Todd Eaton, Director of Application Lifecycle Management Tools in the CTO’s office at McKesson. Thank you so much Todd.

Eaton: You’re welcome. Nice talking to you.

Gardner: And, thanks to our audience for joining us for this special BriefingsDirect podcast, coming to you from the HP Software Universe 2010 Conference in Washington, DC.

Look for other podcasts from this HP event on the hp.com website under HP Software Universe Live podcast, as well as through the BriefingsDirect Network.

I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this series of HP-sponsored Software Universe Live Discussions. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.


Transcript of a BriefingsDirect podcast from the HP Software Universe 2010 Conference in Washington, DC on field-testing software installations using HP Performance Center products. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Monday, June 14, 2010

Top Reasons and Paybacks for Adopting Cloud Computing Sooner Rather Than Later

Transcript of a BriefingsDirect podcast on how adopting cloud computing models can lead enterprises to gain business and technology benefits.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Welcome to a sponsored podcast discussion on identifying the top reasons and paybacks for adopting cloud computing.

Like any other big change affecting business and IT, if cloud, in its many forms, gains traction, then adopters will require a lot of rationales, incentives, and measurable returns to keep progressing successfully. But, just as the definition of cloud computing itself can elicit myriad responses, the same is true for why an organization should encourage cloud computing.

The major paybacks are not clearly agreed upon, for sure. Are the paybacks purely in economic terms? Is cloud a route to IT efficiency primarily? Are the business agility benefits paramount? Or, does cloud transform business and markets in ways not yet fully understood?

We'll seek a list of the top reasons why exploiting cloud computing models makes sense, and why at least experimenting with cloud should be done sooner rather than later. We have assembled a panel of cloud experts to put some serious wood behind the arrow leading to the cloud.

Please join me now in welcoming Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications including The Definitive Guide to Identity Management and a new book, The Concise Guide to Cloud Computing. Welcome back to the show, Archie.

Archie Reed: Thanks very much, Dana.

Gardner: We're also here with Jim Reavis, executive director of the Cloud Security Alliance (CSA) and president of Reavis Consulting Group. Welcome back to you too, Jim.

Jim Reavis: Pleasure to be here, Dana.

Gardner: And we are also here with Dave Linthicum, Chief Technology Officer of Bick Group and also a prolific cloud blogger and author. Welcome back to you as well, Dave.

Dave Linthicum: Thanks for having me, Dana.

Gardner: Let me go first to you, Jim, and then to Archie. At the RSA Conference, the CSA and HP announced some findings around "Seven Deadly Sins" for cloud adoption. Tell us a little bit about those Seven Deadly Sins, some of the negative issues, before we delve into some of the positive, some of the perhaps heavenly delights of cloud, if you will.

Foregone conclusion

Reavis: Thanks, Dana. The reason we produce these threat listings and do everything we are doing at CSA is that we believe that adopting cloud is a foregone conclusion. We're going to be spending a lot of time on this webcast talking about the benefits. So, it’s to help people do it in the most appropriate and secure way.

You can find the full listing of the Seven Deadly Sins on the cloudsecurityalliance.org website, under "Top Threats." I'm not going to list them all in detail. We found that, when you think about going to the cloud, it's not just security issues that enterprises are concerned about, but also compliance. A lot of the transparency issues around what your provider is doing become something that we want to think about and be really concerned about.

Data is something that we identified as a key threat issue. You want to know where your data is. You want to know how it’s being controlled. You want to prevent it from being leaked or lost completely. Obviously, that goes with any type of computing, but it's certainly something, as we move to this new model, that you want to understand and be concerned about.

Then, there's just a variety of things where we want to understand how bad guys will start using the cloud, what new specific issues there are, and when we have the shared infrastructure, how bad people might be able to get in in some way or another and use some form of privilege escalation through virtualization or other sorts of techniques to be able to move into areas where they aren’t allowed.

It’s definitely food for thought. It’s part of your whole risk-management process, when you think about how to take a certain business initiative and use a certain cloud system to accomplish that goal. That’s the whole point of it, and we've gotten pretty good feedback. We certainly don’t think what we produced was alarmist, but rather to help people adopt cloud.

Gardner: Archie Reed, a lot of companies that I've talked to are trying to do this cost-benefit analysis about cloud and what they should be doing. In order to understand that, you have to look at what you need to do to prevent the risks from getting out of hand, but you also need to know about what you get in return for doing it well.

Let’s look at this cost-benefit analysis. We have a good sense of some of the negatives, what you need to do, and some of the investments. What are some of the high-level potentials? What are the paybacks that would balance out some of those risks and investments?

Reed: Thanks, Dana. Just to reiterate what Jim said previously around the Seven Deadly Sins, in order to understand what the cost benefits are, what the impact to an organization is going to be, you have to be aware of the risk analysis you are going to undertake that feeds into a cost-benefit analysis.

I just want to make a couple of points about the top threats, as we lead into these things. First off, it was all about awareness or enlightenment. Given the tone of our discussion today, the key was, as Jim said, not to be alarmist, but to create awareness.

If you don’t understand what’s going on inside the cloud environment that you're using, be it public or private, or some hybrid of those things, then you can't really get the benefits that you're looking for, because you haven’t taken into account the overall risks that are associated with that.

The same risks

Interestingly, when we look at this list, if we received any criticism for it at all, it was that it presents the same risks that any large, outsourced business service might encounter. Fundamentally, you need to follow good security practices.

So, when we go into all of this discussion around what is the benefit, we need to do our standard risk analysis. There’s nothing too much that's new here, but what we do see is that when you get to the cloud and you're doing that assessment, it comes down to agility.

Agility, in this sense, has the dimensions of speed at scale. For businesses, that can be quite compelling in terms of economic return and business agility, which is another variation on the theme. But, we gain this through the attributes we ascribe to cloud -- things like instant on/off, huge scale, per-use billing, all the things we tried to achieve previously but finally seem to be able to get with a cloud-computing architectural model.

If we're going to do the cost-benefit analysis, it does come down to the fact that, through that per-use billing, we're able to do this in a much more fine-grain manner and then compare to the risks that we are going to encounter as a result of using this type of environment. Again, that's regardless of whether it’s public or private. The risks may go down, if it’s a private environment.

Factoring all those things in together, there's not too much of a new model in how we try to achieve this justification and gain those benefits.

Gardner: Dave Linthicum, we've talked about this a bit in the past, and one of the things that was memorable in talking with you is that you seem to think that we shouldn't look at cloud computing through a cost-savings lens. It may not even be cheaper or more cost efficient, but you had other, more pressing reasons for moving into the cloud.

First, if I'm correct, explain your rationale on the cost issue and then also what you think are some of the top motivators?

Linthicum: The mistake that a lot of people make is that they go directly for the OPEX versus CAPEX cost. In other words, they're sick of buying waves and waves of servers for their data centers and sick of paying co-los and all those sorts of things. They really want to get into a "pay per drink" cost model in how they consume compute cycles, storage, and all the other things that are kind of innate to the data center.

One of the issues is that public cloud computing providers typically -- and sometimes private cloud computing infrastructure that you set up -- are going to be more expensive than a lot of existing infrastructures. That’s misunderstood out there, unless you are like me and for the last two years have done the analysis over and over again.

However, the notion of business agility, which I heard mentioned, is really where the money is made. It's the ability to scale up and scale down, the ability to allocate compute resources around business opportunities, and the ability to align the business to new markets quickly and efficiently, without doing waves and waves of software acquisitions, setups, installs, and all the risks around doing that. That's really where the core benefit is.

If you look at that and you look at the strategic value of agility within your enterprise, it’s always different. In other words, your value of agility is going to vary greatly between a high tech company, a finance company, and a manufacturing company. You can come up with the business benefit and the reason for moving into cloud computing, and people have a tendency not to think that way.
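
To make that OPEX-versus-CAPEX trade-off concrete, here is a minimal Python sketch with entirely made-up figures (server prices, amortization period, and an assumed per-hour cloud rate). It simply shows how per-use billing can undercut owned capacity at low or bursty utilization and overtake it at steady high utilization.

```python
# Rough cost-comparison sketch with hypothetical numbers -- not actual
# vendor pricing. It contrasts owned (CAPEX-heavy) capacity with
# per-use ("pay per drink") cloud billing at different utilization levels.

HOURS_PER_YEAR = 24 * 365

def on_premises_annual_cost(servers, capex_per_server=6000.0,
                            amortization_years=3, opex_per_server=2500.0):
    """Annualized cost of owned servers: amortized purchase price plus
    yearly power, space, and administration (all figures assumed)."""
    return servers * (capex_per_server / amortization_years + opex_per_server)

def cloud_annual_cost(servers, utilization, price_per_hour=0.60):
    """Per-use billing: you pay only for the hours you actually run."""
    return servers * utilization * HOURS_PER_YEAR * price_per_hour

if __name__ == "__main__":
    servers = 50
    for utilization in (0.10, 0.30, 0.60, 0.90):
        on_prem = on_premises_annual_cost(servers)
        cloud = cloud_annual_cost(servers, utilization)
        cheaper = "cloud" if cloud < on_prem else "on-premises"
        print(f"utilization {utilization:.0%}: on-prem ${on_prem:,.0f} "
              f"vs cloud ${cloud:,.0f} -> {cheaper} is cheaper")
```

With these assumed numbers, the break-even point falls somewhere between 60 and 90 percent utilization, which echoes Linthicum's point that agility and variable demand, not raw price, are usually the stronger argument.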

Innate risks

The point I already made -- and I agree with the guests -- is that you have to weigh that benefit in line with the innate risks in moving to these platforms. Whether you're moving from on-premises to off-premises, from on-premises to cloud, or from traditional on-premises to private cloud computing, there's always risk involved in terms of how you do security, governance, latency, and those things.

Once you factor those things in and you understand what the value drivers are in both OPEX and CAPEX cost and the trade-offs there, as well as business agility, and weigh in the risk, then you have your equation, and it comes down to a business decision. Nine times out of ten, the cloud computing provider is going to provide a more strategic IT value than traditional computing platforms.

Gardner: Going back to you, Jim, when we think about the benefits of cloud in general, it seems that most people gravitate to this as a way in which we can recast IT processes and functions. But, in a lot of ways, I think there’s just as much interest around using the cloud as a way of reaching audiences, providing services, linking up partners in an ecosystem or process marketplace in ways that hadn’t been possible before.

Do you think it’s a good idea for us to not just think about cloud as a benefit to efficiency and transformation at the IT level, but that in gaining cloud expertise, there's the opportunity to do things vis-à-vis supplying your customers, finding your customers, and even in joining with suppliers in a new way?

Reavis: I'd agree with that, and it echoes a little bit of what Dave has said. When you think about economics, what’s the core of economics? It's supply and demand. Cloud gives you that ability to more efficiently serve your customers. It becomes a customer-service issue, where you can provide a supply of whatever your service is that really fits with their demand.

Ten years ago, back in the Internet dot-com days, I started a site called Securityportal.com that was a modest success. You all remember something called the "Slashdot effect," where a story would get posted on Slashdot and it would basically take your business out. You would have an outage, because so much traffic would come your way.

On the one hand, we would love those sorts of things; on the other, we lived in fear of when they would happen, when we would get recognition, because we didn't have cloud-based models for serving our customers. So, when good things would happen, it would sometimes be a bad thing for us.

I had a chance to spend a lot of time with an online gaming company, and the way they've been able to scale up would only be possible in the cloud. Their business would not have been able to exist in the earlier era of the Internet. It’s just not possible.

So, yeah, it provides us this whole new platform. I've maintained all along that we're not just going to migrate IT into the cloud, but we're going to reinvent new businesses, new business processes, and new ways of having an intermediary relationship with other suppliers and our customers as well. So it’s going to be very, very transformational.

Gardner: Similar question to you, Archie. When HP looks at the potential for cloud in its own right as a company, I should think that there is a lot of interest and efficiency for delivering services and providing a cloud capability for that. You've already got a lot of software-as-a-service (SaaS)-based services for application lifecycle management, and test and dev, and so forth. How do you see the difference between cloud as it affects IT and then cloud as it affects business?

Outcomes are core

Reed: At HP, when we talk to customers, and even when we evaluate internally, we talk about this thing called business outcomes being core to how IT and business align. Whether they're small companies or large companies, it's about providing services that support the business outcomes you ultimately want to deliver.

In business terms, that might be processing more loan requests or more financial transactions. If that's the measure people are using for what the business outcomes need to be, then IT can align with it and become the service provider for that capability.

We've talked to a lot of customers, particularly in the financial industry, for example, where IT wasn’t measured in how they cut costs or how much staff they had. They were measured in incremental improvements on how many advances could be made in delivering more business capability.

In that example, one particular business metric was, "We can process more loans in a day, when necessary." The way they achieved that was by re-architecting things in a more cloud or service-centric way, wherein they could essentially ramp up, on what they called a private cloud, the ability to process things much more quickly.

Now, many in IT realize -- perhaps not enough, but we're seeing the change -- that they need to make this move toward the service-oriented architecture (SOA) approach and delivery model, such that they become experts in brokering the right solution to deliver the most significant business outcomes.

The source of those services is less about how much hardware and software you need to buy and integrate and all that sort of thing, and more about the most economical and secure way that they can deliver the majority of desired outcomes. You don’t just want to build one service to provide a capability. You want to build an environment and an architecture that achieves the bulk of the desired outcomes. Does that make sense?

Gardner: Sure. Dave Linthicum, we talked about agility, let’s see if we can unpack that a little bit and get a little bit more detail. That’s kind of a general umbrella topic or a moniker.

When we think about business process -- if you're focused at the business process level, and I think that's what Archie was alluding to, rather than on the supporting infrastructure or the applications -- and if we start composing business processes from services rather than from discrete applications, it seems to me we gain an opportunity to be responsive. That is to say, a business process can be examined, some data analysis can be applied, and then we can ask how to do it better.

Does cloud computing allow us to then adjust a business process or even come up with innovations built upon existing processes in ways that traditional IT simply can’t or just can’t within the necessary time frame?

Linthicum: Yes. The latency that people run into in traditional IT isn't really about aligning the business processes, because usually they already have the ability to do that in one form or another, either in composites or in a true business process layer. It's the ability to stand up the services they need -- storage, compute, things like risk analytics in the financial market -- and tie all those things together. That becomes the latency that drives the lateness of the business process changes that need to occur within the enterprise.

Additional capabilities

Cloud computing will provide us with some additional capabilities. It's not necessarily nirvana, but you can get at compute and you can get at even some pretty big services. For example, the Prediction API that Google just announced at Google I/O is an amazing piece of data-mining capability that you can get for free, for now.

The ability to tie that into your existing processes and make predictions about things like inventory control means you could potentially save a million dollars a month supporting just-in-time inventory processes within your enterprise. Those sorts of things really need to come into the mix in order to provide the additional value.
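
As a purely illustrative sketch of that kind of hookup, the following Python snippet sends recent sales history to a hosted prediction service and sizes a just-in-time reorder from the forecast. The endpoint URL, request and response fields, and threshold values are all assumptions made up for this example; they are not the actual Google Prediction API interface.

```python
# Hypothetical sketch: query a cloud-hosted prediction service for next
# week's expected demand and use it to size a just-in-time reorder.
# The URL, payload shape, and response fields are assumptions, not a
# real provider's API.
import requests

PREDICTION_ENDPOINT = "https://prediction.example.com/v1/predict"  # assumed

def predicted_weekly_demand(sku, recent_weekly_sales, api_key):
    """Send recent sales history to the hosted model and return its forecast."""
    response = requests.post(
        PREDICTION_ENDPOINT,
        json={"sku": sku, "history": recent_weekly_sales},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["predicted_units"]  # assumed response field

def reorder_quantity(on_hand, forecast, safety_stock=0.15):
    """Order just enough to cover the forecast plus a small safety margin."""
    target = forecast * (1 + safety_stock)
    return max(0, round(target - on_hand))

if __name__ == "__main__":
    forecast = predicted_weekly_demand("SKU-1234", [120, 135, 150, 160], "demo-key")
    print("Reorder:", reorder_quantity(on_hand=90, forecast=forecast))
```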

Sometimes we can drive processes out of the cloud, but I think processes are really going to be driven on-premises and they are going to include cloud resources. The ability to on-board those cloud resources is needed to support the changes in the processes and is really going to be the value of cloud computing.

That's the area that's probably the most exciting. I just came back from Gluecon in Denver. That is, in a sense, a cloud developers' conference, and they're all talking about application programming interfaces (APIs) and building the next infrastructure.

When those things come online and become available, and we don't have to build them in-house, we can leverage them on a "pay per drink" basis through some kind of provider and bring them into our processes. We'll perhaps have thousands of APIs that exist all over the place, and perhaps not even local data behind these APIs.

That’s where the value of cloud computing is going to appear, and we haven’t seen anything yet. There are huge amounts of value being built right now.



These APIs just produce behavior, and we bring them together to form these core business processes. More importantly, we bring them together to recreate those core business processes around new needs of the business.

Reed: It's the same for me. I was also at Gluecon this week, and there were several threads going on. Certainly the API thread was fascinating in terms of the sheer number of APIs that were being created and the various approaches being used in those things.

At the same time, one of the other tracks was on a whole set of concerns around the legal and security risks associated with piecing all this together. As it was the developers’ conference, the legal thread was less attended than the API thread. But, there is obvious concern about how all these things piece together, how we put the controls in place, and where we get those services from.

I definitely agree with Dave that some of the core processes, especially for larger and more security-sensitive organizations that consider their core IT to be their business processes, are going to be maintained internal to the organization. Some may be willing to put them out, but in the majority of cases, we find people want to retain that IT internally.

But being able to reach out through those APIs in a safe, secure, and controlled way to get data, analysis, and capabilities from within the cloud is definitely where we're headed. That Google analytics stuff is one example.

Internal or external

We've already seen it with analysis tools -- the GIS stuff, geographical information -- where people put maps up and overlay their own data. The data may be internal to them, but the capability of drawing a map and getting the geographical data comes from outside, and that's created the incredible kinds of mashups we expected and have now seen in some cases.
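
A hedged sketch of that overlay pattern, using the open-source folium library in Python: the base map and tiles come from an external service, while the markers carry the organization's own data. The store locations and sales figures here are invented for illustration.

```python
# Minimal mashup sketch: the base map comes from an external tile service,
# while the markers are the organization's own (made-up) internal data.
import folium

# Hypothetical internal data: store locations and last quarter's sales.
internal_stores = [
    {"name": "Store A", "lat": 40.7128, "lon": -74.0060, "sales": 1.2e6},
    {"name": "Store B", "lat": 41.8781, "lon": -87.6298, "sales": 0.9e6},
]

# The map itself (tiles, projection, geography) is fetched from outside.
sales_map = folium.Map(location=[40.0, -80.0], zoom_start=5)

for store in internal_stores:
    folium.Marker(
        location=[store["lat"], store["lon"]],
        popup=f'{store["name"]}: ${store["sales"]:,.0f}',
    ).add_to(sales_map)

sales_map.save("sales_overlay.html")  # a self-contained mashup page
```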

Businesses are now doing their own mashups, and they only get there by understanding how all these APIs, these security tenets, and these legal requirements come together. In some cases, they're ignoring those for expediency today, but ultimately the management of those things is going to be key here.

Linthicum: Just a short comment on that. One message that was not well received at Gluecon -- being a bunch of developers -- was that you need to do your work in the context of a good security strategy and a good governance strategy. How are you going to leverage these systems, and what policies and usage controls will you put around them? That really becomes the core problem to solve before you go off and make this happen.

I don't know if you saw the keynote presentation I did on the first day of the conference, but I went into a lot of those things. When I talked to some of the attendees, I noticed that it really wasn't well understood or even well received.

That's a tad scary, because they're driving out into the market, creating and leveraging these APIs. In many instances, they're ungoverned. They're insecure. We don't know exactly what they're doing, and they can actually create vulnerabilities that open up risks costing far more than any benefits we're getting from cloud computing.

Gardner: Jim Reavis, let’s look into governance a bit. When companies start exploring more business process and agility efficiencies around cloud, they get exposed in ways that they wouldn’t if they were locked down inside their four walls.

But, becoming exposed, sharing data, exploring and using APIs from other parties, doesn’t this, in a sense, force these companies to adopt better methods and policies and start thinking about things that they probably should have been doing anyway? The question is, does cloud, by its nature, force organizations to become better at things like governance, policies, and best practices?

Reavis: I think it requires them to translate their governance concepts and their controls into a new environment. It's going to take some real thinking to do that.

I was one of the three, I guess, who didn’t go to Gluecon. So, thanks Dave and Archie for not inviting me. I guess it's because they're authors and I just read cartoons all the time, but I think the points there are very well made.

We're going to see the market provide the SOA governance and brokering tools that allow you to control a lot of these things and give the customer the ability to put in XACML, for example, and create policies that they can embed, with some brokering involved, so that when developers are out trying to create these mashups with a variety of different APIs, they can insert some sort of policy governance and have that look like just another SOA-type service.
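
As a rough illustration of that brokering idea, here is a minimal Python sketch of a policy decision point consulted before a mashup calls an external API. The rule format, attributes, and API names are invented for this example and are only a simplified stand-in for a real XACML-style policy engine.

```python
# Minimal sketch of a policy decision point (PDP) in the XACML spirit:
# the broker checks declarative rules before letting a mashup call out
# to an external API. Rules and attributes here are illustrative only.

POLICIES = [
    # Each rule: which roles may call which external API, and the highest
    # data classification allowed to leave the organization through it.
    {"api": "maps.example.com",    "roles": {"analyst", "developer"},
     "max_classification": "internal"},
    {"api": "predict.example.com", "roles": {"analyst"},
     "max_classification": "public"},
]

CLASSIFICATION_RANK = {"public": 0, "internal": 1, "confidential": 2}

def authorize(role, api, data_classification):
    """Return True only if some policy permits this role, API, and data level."""
    for rule in POLICIES:
        if (rule["api"] == api
                and role in rule["roles"]
                and CLASSIFICATION_RANK[data_classification]
                    <= CLASSIFICATION_RANK[rule["max_classification"]]):
            return True
    return False  # default deny, as a broker normally would

if __name__ == "__main__":
    print(authorize("developer", "maps.example.com", "internal"))     # True
    print(authorize("developer", "predict.example.com", "internal"))  # False
```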

Frameworks and tools

We're not trying to dictate to the developers completely how they develop these new applications, but we are giving them some frameworks and tools that they can embed in the way they understand things, in the way they like to do business.

I want to quickly mention, though, that we've got a huge history behind us that tells us internal networks are not locked down and secured. Having data on 100,000 machines, laptops, and every place else, with no controls over it, is a pretty perilous place to be.

Now, we understand that we're moving to a new platform. Let's do our best to control that, but let's also try to deflate a little bit the notion that traditional IT is more secure than cloud. I'm really not ready to say that.

Reed: There are a couple of points I want to make, so that we're sure we're not just hand-waving here. I think the incentives, the risks, and all those things change depending on the type of business we're looking at.

Certainly, when we talk to smaller and mid-sized organizations, they're looking for the edge they can gain in terms of cost, support, and, in most cases, more security. In those cases, they look for broader back-office solutions than perhaps some of the larger organizations do -- things such as email, account management, HR, and so forth -- as well as front-end offerings, from basic web hosting to more advanced versions of that.

We've implemented things like Microsoft Business Productivity Online Suite (BPOS) for many customers, especially in the mid range. They do find better support, better up time, better cost controls, and to Jim’s point, more security than they are able to provide for themselves.

When we talk to larger organizations, some are looking for this too. Even in the financial industry, which you might consider one of the most security-paranoid environments outside of the three-letter agencies, we know they find that kind of thing appealing as well. Some of those have actually gone to Salesforce.com for some of their services.

But, they're generally more concerned with the security aspects, and they often find specific capabilities more appealing in a service model -- data processing, data analysis, data retrieval, functional analysis, and things like that. The mashup type of model, or the service-oriented nature, is definitely the more popular model with the larger organizations we talk to.

Gardner: What do you think, Dave Linthicum? Is there an under-appreciated value to cloud in that, in moving to cloud models, you have to adopt the right processes around security, governance, and other risk-mitigating activities, and that makes you a stronger, better company overall? That is to say, is cloud like New York -- if you can make it there, you can make it anywhere?

Linthicum: Ultimately, it does require that you shore up a lot of your security and governance processes within organizations that probably don’t do security and governance processes as well as they think they do.

Huge exposures

In some of the audits that I do, I often find huge exposures in how they run their on-premises systems. As they're moving into cloud, they push back on the security aspects of it all the time, yet people are walking off on a daily basis with laptops full of customer data and critical data, and their IT just doesn't understand it, because they don't have the audits, the best practices, and the security mechanisms around that.

Moving into cloud is going to make people think in a very healthy, paranoid way. In other words, they're going to think twice about what information goes out there, how that information is secured and modeled, what APIs they're leveraging, and what the service level agreements (SLAs) are. They're going to consider encryption and identity management systems in ways that they haven't in the past.

In most of the instances I'm seeing of deployed cloud computing systems, they're as secure as, if not more secure than, the existing on-premises systems. I would trust those cloud computing systems more than I would the existing on-premises systems.

That comes with some work, some discipline, some governance, some security, and a lot of things that we just haven’t thought about a lot, or haven’t thought about enough with the traditional on-premise systems. So, that’s going to be a side benefit. In two years, we're going to have better security and better understanding of security because of cloud.

Gardner: So, as we're now looking for even more benefits, paybacks, and improvements to your overall business by being a cloud adopter, how about at the competitive level? It seems to me that there are benefits to first movers.

It's been established by some of the best management consultants and business schools in the world that being first to a market gives you very powerful benefits. Does cloud offer those who are willing to do the work and be aggressive and innovative an opportunity to enter markets in new ways?

One example is Apple Computer. Apple has been aggressive. They don't talk about cloud, but when you look at MobileMe, iTunes downloads, and the App Store, these to me are cloud-based services that have allowed Apple to grow mightily in the past few years, not just based on their devices, but based on their use of cloud.

So, there's a first-mover advantage. Do you all agree -- and we'll go around the panel -- that there's a competitive benefit, at least for the foreseeable future, in your own markets, when enterprises exploit cloud as a competitive cudgel? How about that, Archie?

Reed: In terms of first-mover, late-to-market, or fast-follower, there’s always a potential risk and benefit to any of those things. I agree that perhaps Apple has benefited, but I wouldn’t call them first movers in this space. I would say that they have been fast followers.

By that, I mean that even if you look at iTunes or the iPod itself, those things came after existing services already were in place. What they were able to do, if we take that as an example, was tie those together into an ecosystem that basically created their momentum to move forward.

Scaling really fast

The reality is not that the advantage is being able to be the first mover in cloud computing, but the fact that cloud allows you to scale and go big really fast. It allows you to sit in the fast-follower position and gain just as much as any first mover, because the gap between seeing a business opportunity and being able to deliver on that requirement or business opportunity is so much less than what it was previously.

You don’t have to ramp up huge amounts of services that take months. You can scale up in a matter of hours or days. As long as the wave isn’t so huge, and it rarely ever is, you can always get into that market space using this type of model.

Gardner: I'd like to pick up on one of the points you made about being able to establish an ecosystem. If you're exploiting cloud effectively, does that give you an advantage in how you can carve out an ecosystem, become a hub, and therefore be in a very profitable position within that ecosystem?

Reed: I'll take a quick stab at that. I think there's going to be a window for a number of years where that is the case. There will be businesses that are willing and able and can manage cloud-type environments to their benefit. But, eventually, the gaps become so small and the availability of these services online becomes so ubiquitous that I'm not sure how long this window goes for.

I don't want to say that, in a few years, everybody will be able to deliver the same thing just as quickly. But for the moment, I think there are a few forward-thinking organizations that will be able to achieve that to great success.

Gardner: Jim Reavis, same to you. What about the competitive benefits that businesses should consider when evaluating cloud in terms of that cost benefit analysis?

Reavis: Businesses are so dependent on technology now and into the future, and we always try to stay innovative and competitive. If you just look at this from a developer standpoint, you don’t see a lot of new applications for the Commodore 64 anymore.

The organizations that are developing what they think is state-of-the-art, but isn't cloud, are going to struggle, because all of the neat, interesting new developments are going to happen there. It's hard to even get your head around all of the implications of compute-as-a-utility and all the innovation we're going to see, but we know it's going to be on that platform.

If you think of this as the new development platform, then yeah, it’s going to be a real competitive issue. There are going to be a lot of new capabilities that will only be accessible in this platform, and they're going to come a lot quicker.

Five years from now

So, in terms of the first movers and the environment now, it's going to look very different. Anybody who has carved out some space and some lead in the cloud market right now shouldn't feel too comfortable about their position, because there are companies we don't even know about at this point that are going to be fairly pervasive and have a lot to say about IT five years from now.

Reed: I just want to make a point there, Jim. You can actually get a Commodore 64 emulator for the iPhone. So, there may be some new stuff coming up. I'm not sure, but it is possible.

Gardner: Yeah, there is the long tail in reverse. It’s backward-compatibility from the cloud.

Dave Linthicum, same question to you, the competitive benefits of being aggressive in cloud computing at some of the highest business issue levels.

Linthicum: We already talked about the business agility aspect of it, but ultimately, even as these younger companies who are leveraging more cloud than a lot of the older companies out there start to grow up, they are going to find that their IT CAPEX costs are, in many instances, nonexistent.

They're going to have some on-premises systems, but they're used to putting things in the cloud. They were early Salesforce.com adopters. They're using Amazon now. They've figured out security and governance, and ultimately they're going to have very agile business systems that are able to run rings around their competition.

I don't think we're going to see this anytime soon, but I definitely think that by 2015 or 2016, you're going to see some businesses suffering from IT bloat. They'll have very static, monolithic systems that are very difficult to change and very fragile. Some of the things we always talk about around enterprise architecture are going to kill the company, because they can't do the acquisitions and they can't move into new market spaces.

By the way, their new competitors that came out of nowhere get cloud computing because they've used it from the get-go. They're going to be able to leverage that as the strategic value that’s going to allow them to dominate the market. We're seeing some of this today in some of the smaller spaces, but it’s not very pronounced.

But, it's going to be very pronounced, to the point that business journals are going to talk about it, and a lot of companies are going to go out of business, because some of the folks are able to leverage technology for strategic IT advantage and beat them into the ground. Look at Wal-Mart. They leveraged IT for a huge strategic advantage to beat their competitors into the ground and lower their prices. We're going to see that a hundred times over in five years.

Reed: I'd agree. I can give you an example, Dana. I spoke to a very small group of individuals, fewer than 50. They're designers and architects, and they've come together to form this company. Their claim was that they didn’t need any IT anywhere, because they were using cloud services for everything.

Even the provisioning system, the controls about who had access to what, was all done in the cloud. All they needed was their big old Macs, the 27-inch Macs, and their huge HP screens. As long as they could get online, they were in business.

This small company's claim, when I was talking to them, was that they had just beaten out the largest established architectural firm in Ireland for a bid in Dublin. They had done that by being able to work round the clock, online, at all times, and deliver it to the customer in a much shorter time than anyone else was able to. They did it all through cloud services.

So, it’s quite compelling to see small businesses compete with the larger businesses, and unless big businesses understand what’s going on, we're going to see a few start to lose business in this sense.

Gardner: Well, I'm afraid we'll have to leave it there. Suffice it to say that we've clearly identified in the market, over the past several years, some significant hurdles and risks to cloud computing. But, some of these benefits also sound extremely compelling and almost not an option, when you consider the competitive issues. That cost-benefit analysis can easily come down on the side of a must-do, even if the risks are substantial.

We've been talking about identifying some of the top reasons and paybacks for adopting cloud computing and why you should perhaps do those sooner rather than later.

I want to thank our panel. We've been joined by Archie Reed, HP’s Chief Technologist for Cloud Security and the author of several publications including "The Definitive Guide to Identity Management" and "The Concise Guide to Cloud Computing." Thank you so much, Archie.

Reed: Thank you.

Gardner: We've also been joined by Jim Reavis, executive director, Cloud Security Alliance and president of Reavis Consulting Group. Thank you Jim.

Reavis: Thanks, Dana.

Gardner: Lastly, I also want to thank Dave Linthicum, CTO of Bick Group and a prolific cloud blogger and podcaster. And you said that you did your 100th cloud podcast recently, Dave?

Linthicum: Just filed my 100th podcast, after two years.

Gardner: Congratulations. And also the author of several notable books. Thanks to you.

This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how adopting cloud computing models can lead enterprises to gain business and technology benefits. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.
