Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.
For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.
We'll hear from a panel of HP experts on some of their most effective methods for fostering consolidation and standardization across critical IT tasks and management. This is the second in a series of podcasts on data center transformation (DCT) best practices and is presented in conjunction with a complementary video series. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
Here today we will specifically explore building quick data center project wins, leveraging project tracking and scorecards, as well as developing a common roadmap for both facilities and IT infrastructure. You don’t need to go very far in IT to find people who are diligently working to do more with less, even as they're working to transform and modernize their environments.
One way to keep the interest high and those operating and investment budgets in place is to show fast results and then use that to prime the pump for even more improvement and even more funding with perhaps even growing budgets.
With us now to explain how these solutions can drive successful data center transformation is our panel: Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and small to medium-sized businesses (SMBs); Randy Lawton, Practice Principal for Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP; and Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. Welcome to you all.
Let's go first to Duncan Campbell on communicating an ongoing stream of positive results, why that’s important and necessary to set the stage for an ongoing virtuous adoption cycle for data center transformation and converged infrastructure projects.
Duncan Campbell: You bet, Dana. We've seen that when a customer is successful in breaking down a large project into a set of quick wins, there are some very positive outcomes from that.
Breeds confidence
Number one, it breeds confidence, and this is a confidence that is actually felt within the organization, within the IT team, and into the business as well. So it builds confidence both inside and outside the organization.
The other key benefit is that when you can manifest these quick wins in terms of a specific return on investment (ROI) business outcome, that translates very nicely and gets a lot of key attention, which I think has downstream benefits that help out the team in multiple ways.
Gardner: I suppose it's not only getting these quick wins, but effectively communicating them well. People really need to know about them.
Campbell: Right. So this is one of the things that some of the real leaders in IT realize. It's not just about attracting the best talent and executing well, but it's about marketing the team’s results as well.
One of the benefits is that you can actually break down these projects in terms of specific types of wins. That might be around standardization, and you can see a lot of wins there. You can quickly consolidate to blades. You can look at virtualization quick wins, as well as some automation quick wins.
We would advocate that customers think about this in terms of almost a step-by-step approach, knocking that down, getting those quick wins, and then marketing this in some very tangible ways that resonate very strongly.
Gardner: When you start to develop a cycle of recognition, incentives, and buy-in, I suppose we could also start to see some sort of a virtuous adoption cycle, whereby that sets you up for more interest, an easier time evangelizing, and so on.
Campbell: That’s exactly right. A virtuous cycle is well put. That allows really the team to get the additional green light to go to the next step in terms of their blueprint that they are trying to execute on. It gets a green light also in terms of additional dollars and, in some cases, additional headcount to add to their team as well.
What this does, and I like this term the virtuous cycle, is not only allow you to attract key talent, but really allow you to retain folks. That means you're getting the best team possible to duplicate that, to get those additional wins, and it really does indeed become a virtuous cycle.
Gardner: I suppose one last positive benefit here might be that, as enterprises adopt more of what we call social networking and social media, the rank and file, those users involved with these products and services, can start to be your best word-of-mouth marketing internally.
TCO savings
Campbell: That's right. A good example is one of our customers, McKesson, where we saw significant total cost of ownership (TCO) savings after it took one of these consolidated approaches with all of its development tools. They saw considerable savings, both in dollars, over $12.9 million, and as a percentage of TCO, upwards of 50 percent.
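As a back-of-the-envelope illustration of how a TCO comparison like that is typically framed, here is a minimal sketch. The cost categories and figures are invented placeholders, chosen only to echo the magnitudes quoted above; they are not McKesson's or HP's actual cost model:

```python
# Hypothetical TCO comparison for a tool-consolidation project.
# Categories and figures are invented; only the structure matters:
# sum annual cost categories over a planning horizon, then compare.

YEARS = 3  # planning horizon

baseline = {"licenses": 4.0, "support_staff": 3.1, "infrastructure": 1.5}      # $M/year
consolidated = {"licenses": 1.9, "support_staff": 1.6, "infrastructure": 0.8}  # $M/year

tco_before = sum(baseline.values()) * YEARS
tco_after = sum(consolidated.values()) * YEARS
savings = tco_before - tco_after

print(f"TCO before: ${tco_before:.1f}M, after: ${tco_after:.1f}M")
print(f"Savings: ${savings:.1f}M ({savings / tco_before:.0%} of baseline TCO)")
```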
When you see tangible exciting numbers like that, that does grab people’s attention and, you bet, it becomes part of the whole social-media fabric and people want to go to a winner. Success breeds success here.
Gardner: Thank you. Next, we're going to go to Randy Lawton and hear some more about why tracking scorecards and managing expectations through proven data and metrics also contributes to a successful ongoing DCT activity.
Randy, why is it so important to know your baseline tracks and then measure them each and every step along the way?
Randy Lawton: Thank you, Dana. Many of the transformation programs we engage in with our customers are substantially complex and span many facets of the IT organization. They often involve other vendors and service providers in the customer organization.
So there’s a tremendous amount of detail to pull together and organize in these complex engagements and initiatives. We find that there’s really no way to do that, unless you have a good way of capturing the data that’s necessary for a baseline.
It’s important to note that we manage these programs through a series of phases in our methodology. The first phase is strategy and analysis. During that phase, we typically run a discovery on all IT assets that would include the data center, servers, storage, the network environment, and the applications that run on those environments.
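The discovery output that feeds such a baseline can be pictured as a normalized asset inventory. Here is a minimal sketch; the field names and roll-up are illustrative assumptions, not HP's actual discovery schema:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    asset_type: str      # "server", "storage", "network", "application"
    data_center: str
    owner: str
    attributes: dict = field(default_factory=dict)  # e.g. CPU, RAM, capacity

# The baseline is simply the full inventory captured at program start.
baseline = [
    Asset("srv-0001", "server", "DC-East", "payments", {"cpus": 8, "ram_gb": 64}),
    Asset("app-0042", "application", "DC-East", "payments", {"tier": "critical"}),
]

# A first baseline roll-up: asset counts per data center and type.
counts = Counter((a.data_center, a.asset_type) for a in baseline)
for (dc, kind), n in sorted(counts.items()):
    print(f"{dc}: {n} {kind}(s)")
```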
From that, we bridge into the second phase, which is architect and validate, where we begin to develop the solutions and strategies for a future-state design that includes the standardization and consolidation approaches, and from that we begin to assemble the business case. In detailed design, we build out those specifications and begin to create the data that determines what the future-state transformation is.
Then, through the implementation phase, we track detailed scorecards that show the progress of the application teams and infrastructure teams contributing to the program, in order to guarantee success and provide visibility to all the stakeholders, before we turn everything over to operations.
During the course of the last few years, our services unit has made investments in a number of tools that help with the capture and management of the data, the scorecarding, and the analytics through each of the phases of these programs. We believe that helps offer a competitive advantage for us and helps enable more rapid achievement of the programs from our customer perspective.
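The scorecarding idea itself is simple: each phase tracks planned versus completed work items, so progress is computed from the data rather than asserted. A minimal sketch, with phase names taken from the methodology above and everything else hypothetical:

```python
# Minimal phase scorecard: progress is derived from tracked items.
# Phase names follow the methodology above; counts are hypothetical.

scorecard = {
    "strategy and analysis":  {"planned": 120, "complete": 120},
    "architect and validate": {"planned": 85,  "complete": 60},
    "implementation":         {"planned": 300, "complete": 45},
}

for phase, s in scorecard.items():
    pct = s["complete"] / s["planned"]
    print(f"{phase:24s} {s['complete']:>4}/{s['planned']:<4} ({pct:.0%})")
```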
Gardner: As we heard from Duncan about why it’s important to demonstrate wins, I sense that organizations are really data driven now more than ever. It seems important to have actual metrics in place and be able to prove your work each step of the way.
Complex engagements
Lawton: That's very true. In these complex engagements, it's normally some time before there are notable quick-win achievements.
For example, in the HP IT transformation program we undertook over several years, through 2008, we were building six new data centers so that we could consolidate 185 worldwide. So it was some period of time from the beginning of the program until the point where we moved the first application into production.
All along the way we were scorecarding the progress on the build-out of the data centers. Then, it was the build-out of the compute infrastructure within the data centers. And then it was a matter of being able to show the scorecarding against the applications, as we could get them into the next generation data centers.
If we didn't have the ability to show and demonstrate the progress along the way, I think our stakeholders would have lost patience or would not have felt that the momentum of the program was on the kind of track that was required. With some of these tools and approaches and the scorecarding, we were able to demonstrate the progress and keep the movement and momentum of the program very visible to management.
Gardner: Randy, I know that many organizations are diligent about the scorecarding across all sorts of different business activities and metrics. Have you noticed in some of these engagements that these readouts and feedback in the IT and data center transformation activities are somehow joined with other business metrics? Is there an executive scorecard level that these feed into to give more of a holistic overview? Is this something that works in tandem with other scorecarding activities in a typical corporation?
Lawton: It absolutely is, Dana. Often in these kinds of programs there are business activities and projects going on within the business units. There are application projects that work into the program, and then there are the infrastructure components that all have to be fit together at some level.
What we typically see is that the business will be reporting its set of metrics, each of the application areas will be reporting their metrics, and it’s typically from the infrastructure perspective where we pull together all of the application and infrastructure activities and sometimes the business metrics as well.
We've seen multiple examples with our customers where they are either all consolidated into executive scorecards that come out of the reporting from the infrastructure portion of the program that rolls it all together, or that the business may be running separate metrics and then application teams and infrastructure are running the IT level metrics that all get rolled together into some consolidated reporting on some level.
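In data terms, that consolidation is a weighted roll-up of lower-level scorecards into one executive view. A hedged sketch; the streams and weights are assumptions for illustration, not a description of any particular customer's tooling:

```python
# Roll business, application, and infrastructure scorecards into one
# executive completion figure. Streams and weights are illustrative.

streams = {
    "business":       {"weight": 0.2, "percent_complete": 0.90},
    "applications":   {"weight": 0.4, "percent_complete": 0.55},
    "infrastructure": {"weight": 0.4, "percent_complete": 0.70},
}

# Weights should cover the whole program exactly once.
assert abs(sum(s["weight"] for s in streams.values()) - 1.0) < 1e-9

overall = sum(s["weight"] * s["percent_complete"] for s in streams.values())
print(f"Executive scorecard: program {overall:.0%} complete")
for name, s in streams.items():
    print(f"  {name:15s} {s['percent_complete']:.0%}")
```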
Gardner: And that, of course, ensures that IT isn’t the odd man out, when it comes to being on time and in alignment with these other priorities. That sounds like a very nice addition to the way things may have been done five or 10 years ago.
Lawton: Absolutely.
Gardner: Any examples, Randy, either with organizations you could name, or use cases where you could describe, where the use of this ongoing baselining, tracking, measuring, and delivering metrics facilitates some benefits? Any stories that you can share?
Cloning applications
Lawton: A very notable example is a telecom customer we worked with during the last year; we finished the program earlier this year. The company was purchasing the assets of another organization and needed to be able to clone the applications and infrastructure that supported business processes from the acquired company.
Within the mix of delivery stakeholders in the program, there were nine different companies represented: outsourced vendors on the application support side from the acquired company, outsourcers on the application side for the acquiring company, and outsourcers that operated data center infrastructure and operations for the target data centers we were moving into.
What was really critical in pulling all this together was to be able to map out, at a very detailed level, the tasks that needed to be executed, and in what time frame, across all of these teams.
The final cutover migration required over 2,500 tasks across these nine companies, all of which needed to be executed in less than 96 hours to meet the downtime window required by the acquiring company's executive management.
It was the detailed scorecarding, and operating war rooms to keep those scorecards up to date in real time, that allowed us to accomplish that. There's just no possible way we would have been able to do it otherwise.
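At bottom, a cutover plan like that is a scheduling problem: given task durations and dependencies, the longest dependency chain has to fit inside the downtime window. A toy version of that check follows; the tasks and durations are invented (the real plan tracked over 2,500 tasks across nine companies):

```python
# Critical-path check for a cutover window: the longest dependency
# chain must fit within the allowed downtime. Tasks are invented.

from functools import lru_cache

tasks = {  # name: (duration_hours, [dependencies])
    "freeze_source":   (2,  []),
    "final_data_sync": (20, ["freeze_source"]),
    "stand_up_infra":  (30, []),
    "deploy_apps":     (16, ["stand_up_infra"]),
    "validate":        (12, ["final_data_sync", "deploy_apps"]),
    "cut_dns":         (1,  ["validate"]),
}

@lru_cache(maxsize=None)
def finish_time(name: str) -> int:
    """Earliest finish: own duration plus the latest-finishing dependency."""
    duration, deps = tasks[name]
    return duration + max((finish_time(d) for d in deps), default=0)

critical_path = max(finish_time(t) for t in tasks)
print(f"Critical path: {critical_path}h; "
      f"{'fits within' if critical_path <= 96 else 'exceeds'} the 96h window")
```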
I think that HP was very helpful in working with the customer and bringing that perspective into the program very early on, because there had been a failed attempt to operate this program prior to that. With our assistance, and with these tools and capabilities, we were able to successfully achieve the objectives of that program.

Gardner: One thing that jumped out at me there was your use of the words real time. How important is it to capture this data and adjust and update it in real time, where there's not a lot of latency? How has that become so important?
Lawton: In this particular program, because there were so many activities taking place in parallel by representatives from all over the world across these nine different companies, the real-time capture and update of all of the data and information that went into the scorecarding was absolutely essential.
In some of the other programs we've operated, there was not such a compressed time frame that required real-time metrics, but we, at minimum, often required daily updates to the metrics. So each program, the strategies that drive that program, and some of the time constraints will drive what the need is for the real-time update.
We often can provide the capabilities for the real-time updates to come from all stakeholders in the program, so that the tools can capture the data, as long as the stakeholders are providing the updates on a real-time basis.
Gardner: So as is often the case, good information in, good results back.
Lawton: Absolutely.
Organizing infrastructure
Gardner: Let's move now to our third panelist today. We're going to hear about why organizing facilities planning and IT infrastructure planning in relationship to one another is so important.
Now to Larry Hinman. Larry, let’s go historical for a second. Has there usually been a completely separate direction for facilities planning in IT infrastructure? Why was that the case, and why is it so important to end that practice?
Larry Hinman: Hi, Dana. If you look over the last several years, everybody has data centers and everybody has IT. What we've seen over the last 10 or 15 years are things like the Internet, the growing criticality of IT, high density, and all this stuff that people are talking about these days. If you look at the way companies organized themselves several years ago, IT was a separate organization and facilities was a separate organization, and that actually still exists today.
One of the things that we're still seeing today is that, even though there is this push to get IT groups and facilities organizations to talk and work with each other, there is still a gap in truly how to glue all of this together.
If you look at the way people do this traditionally -- and when I say people, I'm talking about IT organizations and facilities organizations -- they typically model IT and data centers separately. Even when they attempt to glue the two together, they only try to look at power requirements.
One of the things that we spotted a few years ago was that when companies do this, the risk of over provisioning or under provisioning is very high. We tried to figure out a way to back this up a few notches.
How can we remedy this problem, bring some structure to it, and bring some, what I would call, sanity to the whole equation, so that we have something predictable over time? What we figured out was that you have to stop and back up a few notches to really start to get all of this glued together.
So we took this whole complex framework and data center program and broke it into four key areas. It looks simplistic in the way we've done this, and we have done this over many, many years of analysis and trying to figure out exactly what direction we should take. We've actually spun this off in many directions a few times, trying to continually make it better, but we always keep coming back to these four key profiles.
Business and risk is the first profile. IT architecture, which is really the application suite, is the second profile. IT infrastructure is the third. Data center facilities is the fourth.
One of the things that you will start to hear from us, if you haven’t heard it already via the data center transformation story that you guys were just recently talking about, is this nomenclature of IT plus facilities equals the data center.
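One way to picture that equation is as four linked profiles, with each layer's sizing derived from the layer above it. The sketch below is a schematic assumption, not HP's framework schema, and the linkage between layers is deliberately simplified:

```python
# The four profiles as linked layers: business and risk drives IT
# architecture, which drives IT infrastructure, which sizes the
# facility. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class BusinessRiskProfile:
    availability_target: float       # e.g. 0.9999 uptime

@dataclass
class ITArchitectureProfile:
    critical_apps: int
    standard_apps: int

@dataclass
class ITInfrastructureProfile:
    servers: int
    avg_kw_per_server: float

    @property
    def it_load_kw(self) -> float:
        return self.servers * self.avg_kw_per_server

@dataclass
class FacilityProfile:
    it_load_kw: float
    headroom: float                  # growth margin, e.g. 1.3

    @property
    def design_capacity_kw(self) -> float:
        return self.it_load_kw * self.headroom

risk = BusinessRiskProfile(availability_target=0.9999)
apps = ITArchitectureProfile(critical_apps=40, standard_apps=260)
infra = ITInfrastructureProfile(servers=2000, avg_kw_per_server=0.4)
facility = FacilityProfile(it_load_kw=infra.it_load_kw, headroom=1.3)
print(f"Facility design capacity: {facility.design_capacity_kw:.0f} kW")
```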
Getting synchronized
Look at these four profiles and take what we call a top-down approach, where you start by getting everybody synchronized on what the risk profiles and tolerances for risk are from an IT perspective and how the business is run, gluing that together with an IT infrastructure strategy, and then gluing all of that into a data center facility strategy.
What we found over time is that we were able to take this complex program of trying to have something predictable, scalable, and all of the groovy stuff that people talk about these days, and turn it into something that I could really manage. If you're called into the boss's office, as I and others have been many times over the years, and asked what the data center is going to look like over the next five years, at least you would have some hope of trying to answer that question.
That is kind of the secret sauce here, and the way we have developed our framework was breaking this complex program into these four key areas. I'm certainly not trying to say this is an easy thing to do. In a lot of companies, it’s culture changes. It’s a threat to the way the very organization is organized from an IT and a facilities perspective. The risk and recovery teams and the management teams all have to start working together collaboratively and collectively to be able to start to glue this together.
Gardner: You mentioned earlier the issues around energy and the ongoing importance of its cost structure. I suppose it's not just fitting these together, but making them fit for purpose, that is to say, keeping IT and facilities aligned on an ongoing basis.
It’s not really something that you do and sit still, as would have been the case several years ago, or in the past generation of computing. This is something that's dynamic. So how do you allow a fit-for-purpose goal with data-center facilities to be something that you can maintain over time, even as your requirements change?
Hinman: You just hit a very important point. One of the big lessons learned for us over the years is that it's not enough to provide this kind of modeling and predictability for a client once. We had to get out of the mode of doing the study one time, putting it on a shelf, deploying a future-state data center framework, and leaving the client pointed in the right direction.

As you said, the data gets archived, and they pick it up every few years and do it again and again, and a lot of times there's an "aha" moment in the gaps between those exercises.

So one thing we've learned is to not only have this deliberate framework, broken into these four simplistic areas where we can manage all of this, but to redevelop and re-hone our tools and our focus a little bit, so that we can use this as a dynamic, ongoing process: get the client pointed in the right direction, and build a data center framework that truly is right-sized, integrated, aligned, and all that stuff, but then have something very dynamic that they can manage over time.
That's what we've done. We've taken all of our modeling tools and integrated them into common databases, where now we can start to glue together even the operational piece -- data center infrastructure management (DCIM), architecture and infrastructure management, facilities management, and so on -- so the client can have this real-time, long-term, what we call a 10-year view of the overall operation.
So now, you do this. You get it pointing the right direction, collect the data, complete the modeling, put it in the toolset, and now you have something very dynamic that you can manage over time. That's what we've done, and that's where we have been heading with all of our tools and processes over the last two to three years.
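The "manage over time" part lends itself to a simple re-runnable projection: feed the current baseline and a growth assumption into the model, and refresh it whenever the inputs change instead of redoing the study from scratch. A toy 10-year view, with all numbers invented:

```python
# Toy 10-year capacity projection. Starting load, growth rate, and
# facility capacity are invented; the point is that the model is
# re-run as inputs change, rather than archived and redone years later.

it_load_kw = 800.0            # current measured IT load
annual_growth = 0.12          # assumed compound growth in IT demand
facility_capacity_kw = 1600.0

for year in range(1, 11):
    it_load_kw *= 1 + annual_growth
    status = "OK" if it_load_kw <= facility_capacity_kw else "OVER CAPACITY"
    print(f"Year {year:2d}: projected load {it_load_kw:7.1f} kW  [{status}]")
```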
EcoPOD concept
Gardner: I also remember with great interest the news from HP Discover in Las Vegas last summer about your EcoPOD and the whole POD concept toward facilities and infrastructure. Does that also play a part in this and perhaps make it easier when your modularity is ratcheted up to almost a mini data center level, rather than at the server or rack level?
Hinman: With the various facility sourcing options, as we call them -- and PODs are certainly one of those these days -- we've also been very careful to make sure that our framework is completely unbiased when it comes to any specific sourcing option.
What that means is that, over the last 10-plus years, most people were really targeted at building new green-field data centers. It was all about space, then it became all about power, then about cooling, but we were still in this brick-and-mortar age, even as modularity and scalability were starting to drive everything.
With PODs coming on the scene, along with some of the other design technologies, like the multi-tiered or flexible data center, what we've been able to do is make our framework almost generic, so that we can complete all the growth modeling and analysis regardless of what the client is going to do from a facilities perspective.
It lays the groundwork for the customer to get their arms around all of this and tie together IT and facilities with risk and business, and then start to map out an appropriate facility sourcing option.
We find these days that the POD is actually a very nice fit for a lot of our clients, because it provides high-density server farms, it provides something they can implement very quickly, and it gets the power usage effectiveness (PUE) and power and operational costs down. We're starting to see that take hold with a lot of customers.
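For reference, PUE is just a ratio: total facility power divided by the power delivered to the IT equipment, so lower is better and 1.0 would mean zero facility overhead. A quick worked example with illustrative wattages (not measurements of any HP product):

```python
# PUE = total facility power / IT equipment power.
# Figures are illustrative, not measured values.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

legacy = pue(total_facility_kw=1600, it_equipment_kw=800)   # 2.00
modular = pue(total_facility_kw=960, it_equipment_kw=800)   # 1.20

print(f"Legacy room PUE: {legacy:.2f}; modular design PUE: {modular:.2f}")
```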
Gardner: As we begin to wrap up, I should think that these trends are going to be even more important, and these methods even more productive, when we start to factor in the movement toward private cloud, the need to support more mobile-tier devices, and the fact that we're looking, of course, for even more savings on long-term energy and operating costs.
Back to you, Randy Lawton. Any thoughts about how scorecards and tracking will be even more important in the future, as we move, as we expect we will, to a more cloud-, mobile-, and eco-friendly world?
Lawton: Yes, Dana. In a lot of ways, there is added complexity these days with more customers operating in a hybrid delivery model, where there may be multiple suppliers in addition to their internal IT organizations.
Greater complexity
Just like the example I gave earlier, where you spread some of these activities not only across multiple teams and stakeholders, but also into separate companies and suppliers working under various contract mechanisms, the complexity is even greater. If that complexity is not pulled into a simplified model that is data driven and supported by plans and contracts, then there are big gaps in the programs.
The scorecarding and data gathering methods and approaches that we take on our programs are going to be even more critical as we go forward in these more complex environments.
Operating cloud environments simplifies things from a customer perspective, but it does add some additional complexity to the infrastructure and operations of the organization as well. All of those complexities mean that even more attention needs to be paid to the details of the program and to where responsibilities lie among stakeholders.
Gardner: Larry Hinman, we're seeing this drive toward cloud. We're also seeing consolidation and standardization around data center infrastructure -- so perhaps fewer, larger data centers supporting more types of applications for even more endpoints, users, and geographic locations or business units. Getting that facilities and IT equation just right becomes even more important as we have fewer, yet more massive and critical, data centers involved.
Hinman: Dana, that's exactly correct. If you look at this, you have to look at the data center facilities piece, not only from a framework or model or topology perspective, but all the way down to the specific environment.
You have to look at the data center facilities piece, not only from a framework or model or topology perspective, but all the way down to the specific environment.
It could be that, based on a specific client's business requirements and IT strategy, it will require a couple of large-scale core data centers and multiple remote sites, or it could just be a bunch of smaller types of facilities.
It really depends on how the business is being run and supported by IT and the application suite, what the tolerances for risk are -- whether it's high availability, synchronous operation, all the groovy stuff -- and then coming up with a framework that matches and integrates all those requirements.
We tell clients constantly that you have to have your act together with respect to your profile, and start to align all of this, before you can even think about cloud and all the wonderful technologies that are coming down the pike. You have to be able to have something that you can at least manage to control cost and control this whole framework and manage to a future-state business requirement, before you can even start to really deploy some of these other things.
So it all glues together. It's extremely important that customers understand that this really is a process they have to do.
Gardner: Very good. You've been listening to a sponsored BriefingsDirect podcast discussion on how quick and proven ways to attain productivity can significantly improve IT operations and efficiency.
This is the second in an ongoing series of podcasts on data center transformation best practices and is presented in conjunction with a complementary video series.
I'd like to thank our guests: Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and SMB; Randy Lawton, Practice Principal for Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP; and Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. So thanks to you all.
This is Dana Gardner, Principal Analyst at Interarbor Solutions. Also, thanks to our audience for listening, and come back next time.
Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: HP.
For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.
Transcript of a sponsored podcast discussion in conjunction with an HP video series on the best practices for developing a common roadmap for DCT. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

You may also be interested in:
- Continuous Improvement and Flexibility Are Keys to Successful Data Center Transformation, Say HP Experts
- HP's Liz Roche on Why Enterprise Technology Strategy Must Move Beyond the 'Professional' and 'Consumer' Split
- Well-Planned Data Center Transformation Effort Delivers IT Efficiency Paybacks, Green IT Boost for Valero Energy
- Hastening Trends Around Cloud, Mobile Push Application Transformation as Priority, Says Research
- Data Center Transformation Includes More Than New Systems, There's Also Secure Data Removal, Recycling, Server Disposal