Friday, June 04, 2010

Analysts Probe Future of Client Architectures as HTML 5 and Client Virtualization Loom

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 52 on client-side architectures and the prospect of heightened disruption in the PC and device software arenas.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 52. I'm your host and moderator Dana Gardner, principal analyst at Interarbor Solutions.

This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS Business Process Management System.

Our topic this week on BriefingsDirect Analyst Insights Edition focuses on client-side architectures and the prospect of heightened disruption in the PC and device software arenas.

Such trends as cloud computing, service-oriented architecture (SOA), social media, software as a service (SaaS), and virtualization are combining and overlapping to upset the client landscape. If more of what users are doing with their clients involves services, then shouldn't the client itself be more services-ready? Should we expect one client to do it all very well, or do we need to think more about specialized clients that might be configured on the fly?

Today's clients are more tied to the past, where one size fits all, than to the future. Most clients consist of a handful of entrenched PC platforms, a handful of established web browsers, and a handful of PC-like smartphones. But, if what has become popular on the server -- virtualization -- is taken to its full potential on these edge devices, new types of dynamic, task-specific clients might emerge. We'll take a look at what they might look like.

Also, just as Microsoft's Windows 7 is quickly entering the global PC market, cloud providers are in an increasingly strong position to favor certain client types or certain data and configuration synchronization approaches. Will the client lead the cloud or vice versa? We'll talk about that too.

Either way, the new emphasis seems to be on full-media, webby activities, where standards and technologies are vying anew for some sort of a de-facto dominance across both rich applications as well as media presentation capabilities.

We're going to look at the future of the client with a panel of analysts and guests. Let me introduce them. I am going to welcome Chad Jones. He is the Vice President for Product Management at Neocleus. Welcome, Chad.

Chad Jones: Thank you, Dana. I'm glad to be here.

Gardner: We're also here with Michael Rowley, CTO of Active Endpoints. Welcome, Michael.

Michael Rowley: Thank you.

Gardner: We're also here again with Jim Kobielus, Senior Analyst at Forrester Research. Hi, Jim.

Jim Kobielus: Hi, Dana. Hi, everybody.

Gardner: And Michael Dortch, Director of Research at Focus. Hello, Michael.

Michael Dortch: Greetings, everyone. Thanks, Dana.

Gardner: JP Morgenthal, Chief Architect, Merlin International. Hi, JP.

JP Morgenthal: Hi, Dana. Hi, everyone.

Gardner: And Dave Linthicum, CTO, Bick Group. Welcome back, Dave.

Dave Linthicum: Hey guys.

Gardner: Let me go first to Chad Jones. Tell us where you see virtualization impacting the edge device, the client. Are we to expect disruption there similar to what we have seen on servers?

Time for disruption

Jones: Dana, in the client market, it's time for disruption. Looking at general PC architectures, since pretty much the inception of the computer you have had one operating system (OS) bound to one machine, and that machine, according to a number of analysts, is less than 10 percent utilized.

Normally, that's because you can't share that resource and really take advantage of everything that modern hardware can offer you. Dual cores and all the gigabytes of RAM that are available on the client are all great things, but if you can't have an architecture that can take advantage of that in a big way, then you get more of the same.

On the client side, virtualization is moving into all forms of computing. We've seen that with applications, storage, networks, and certainly the revolution that happened with VMware and the hypervisors on the server side. But the benefit of server virtualization was not only the ability to run multiple OSs side by side and consolidate servers, which is great but not as relevant to the client. It's really the ability to manage the machine at the machine level and to take OSs, and their workloads, and move them as individual blocks of functionality.

The same thing becomes possible for the client when you start virtualizing that endpoint, stop treating management of the OS as management of the PC, and manage the PC at the root level.

Virtualization is a key enabler of that, and it's going to open up PC architectures to a whole brave new world of management and security. And, at a platform level, there will be things that we're not even seeing yet, things that developers can dream up, because they now have the option to run applications and agents without being bound to Windows itself. I think it's going to be very interesting.

Gardner: Chad, we're also seeing, of course, this welling of interest in cloud and SaaS, where services are coming off the Internet for applications and increasingly for entertainment, and to consumers as movies and video clips and full media. Is there something going on here between the two trends, where virtualization has some potential, but cloud computing is also ramping up? Is there some way that the cloud will be delivering virtualized instances of runtimes for the client? Is that in the offing?

Jones: Well, number one, anything is possible out there. But, I definitely see that there's a huge trend out there in hosted desktops through virtual desktop infrastructure (VDI), not only from a private cloud standpoint with an internal set of hosted desktops. Some companies are creating and working with some of the largest telcos to provide hosted VDI externally, so that all that infrastructure doesn’t have to be managed by the enterprise itself. It can actually be as a hosted service.

That would be an external semi-public, private cloud, and all the way down to full public clouds, where desktops would be hosted in that cloud.

Now, if you look at the trending information, it seems that VDI, in general, will niche out at about 15 percent of overall desktops, especially in the enterprise space, leaving 85-90 percent of desktops still requiring that rich client experience.

But, with virtualization, you have a whole new area where cloud providers can tie in at the PC level. They'll be able to bundle desktop services and deliver them in a number of unique ways -- streaming or synchronization of VHD and things like that -- but still have them be compartmentalized into their own runtime environments.

Personal OS

Imagine that you have your own personal Windows OS, and maybe you have signed up for Microsoft's new Intune service to manage it from the cloud. Then, you have another Google OS that comes down with applications specific to that Google service, and that desktop runs in parallel with Windows, because it's fully controlled by a cloud provider like Google. Something like Chrome OS is truly a cloud-based OS, where everything is supposed to be stored up in the cloud.

Those kinds of services, in turn, can converge into the PC, and virtualization can take that to the next level on the endpoint, so that those two things don’t overlap with each other, and a level of service, which is important for the cloud, certainly for service level agreements (SLAs), can truly be attained. There will be a lot of flexibility there.

Gardner: Dave Linthicum, we're thinking now about cloud providers, not just delivering data services and applications, but perhaps delivering their own version of the runtime environment on the client. Is that in the purview of cloud providers or are we talking about something that’s perhaps dangerous?

Linthicum: I don’t think it’s dangerous. Cloud providers will eventually get into desktop virtualization. It just seems to be the logical conclusion of where we're heading right now.

In other words, we're providing all these very heavy-duty IT services, such as database, OSs, and application servers on demand. It just makes sense that eventually we're going to provide complete desktop virtualization offerings that pop out of the cloud.

The beauty of that is that a small business, instead of having to maintain an IT staff, will just have to maintain a few clients. They log into a cloud account and the virtualized desktops come down.

It provides disaster recovery based on the architecture. It provides great scalability, because basically you're paying for each desktop instance and you're not paying for more or less than you need. So, you're not buying a data center or an inventory of computers and having to administer the users.

That said, a lot more cooking has to occur before we actually get the public clouds on that bandwagon. Over the next few years, it's primarily going to be an enterprise concept and it's going to be growing, but eventually it's going to reach the cloud.

Gardner: This is something that might emerge in a private cloud environment first and then perhaps migrate out toward more consumer or public cloud environments.

Linthicum: Absolutely. Public cloud is going to be the destination for this. There are going to be larger companies. Google and Microsoft are going to jump on this. Microsoft is a prime candidate for making this thing work, as long as they can provide something as a service, which is going to have the price point that the small-to-medium-sized businesses (SMBs) are going to accept, because they are the early adopters.

Gardner: Michael Rowley at Active Endpoints, you're in the business of providing enterprise applications, business management, and process management, and you have decided on a certain approach to the client that isn't necessarily a cloud or SaaS delivery model but nonetheless takes advantage of some of these technologies. Tell us what Active Endpoints did to solve the client issues for its business process management (BPM) offering.

Browser-based client

Rowley: When we talk about the client, we're mostly thinking about the web-browser based client as opposed to the client as an entire virtualized OS. When you're using a business process management system (BPMS) and you involve people, at some point somebody is going to need to pull work off of a work list and work on it and then eventually complete it and go and get the next piece of work.

That's done in a web-based environment, which isn't particularly unusual. It's a fairly rich environment, which is the direction a lot of applications are heading: web-based applications are moving to a rich Internet application (RIA) style.

We have tried to take it a step further and have taken advantage of the fact that, by moving to these RIA infrastructures, you can put not just some but all of the presentation tier of an application on the web browser client. Instead of the client's communication with the server being traditional HTML, the entire presentation lives in the browser and the communication uses more of a web-service approach, going directly into the services tier on the server. That server can be in a private cloud or, potentially, a public cloud.

What's interesting is that not having to install anything on the client is an advantage, as with any of the approaches we're talking about, but so is not having to maintain, on the server, a separate presentation tier distinct from your services tier.

You go directly from your browser client into the services tier on the server, and it just decreases the overall complexity of the entire system. That's possible, because we base it on Ajax, with JavaScript that uses a library that's becoming a de-facto standard called jQuery. jQuery has the power to communicate with the server and then do all of the presentation logic locally.
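
As a rough illustration of that pattern, here is a minimal sketch, in jQuery, of a browser-only presentation tier: the page fetches JSON from a services-tier endpoint and builds the work list locally. The URL, element ID, and field names are hypothetical, not Active Endpoints' actual API.

```javascript
// A minimal sketch, not Active Endpoints' actual API: the browser owns the
// entire presentation tier and calls a hypothetical JSON task service directly.
$(document).ready(function () {
  // Assumes the page contains an empty <ul id="worklist"> element.
  $.getJSON("/services/tasks", function (tasks) {   // hypothetical services-tier endpoint
    var list = $("#worklist").empty();
    $.each(tasks, function (i, task) {
      // All presentation logic runs locally in the browser.
      $("<li/>").text(task.name + " (due " + task.dueDate + ")").appendTo(list);
    });
  });
});
```

Because the server's role shrinks to answering service calls, the same endpoint could sit behind a private or a public cloud without the client changing.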

Gardner: One of the things that's interesting to me about that, Michael, is that because we're talking about HTML5 and some new standards, one possible route to the future would be this almost exclusively browser-based approach. We've seen a lot of that over the past decade or more, enough so that it even threatened Microsoft and its very identity as a client OS company.

But, we've run into some friction and some fragmentation around standards -- things like Adobe Flash versus Apple versus Microsoft Silverlight, and the varying RIA approaches. Do you think that HTML5 has the potential to solidify and standardize the market, so that the browser approach you have been describing could become even more dominant than it is now?

Push toward standards

Rowley: I think it will. I really do. Everybody probably has an opinion on this. I believe that Apple, growing dominant in the client space with both the iPhone and now the iPad, and its lack of support for either Silverlight or Flash, will be a push toward the standards-based approach, HTML5 with JavaScript, as the way of doing client-based rich Internet apps. There will be more of a coalescing around these technologies, so that potentially all of your apps can come through the one browser-based client.

Gardner: Of course, Google seems to be behind that model as well.

Rowley: Absolutely.

Gardner: So, here we have potentially two different approaches -- an HTML5 oriented world, more web-based, more services-based -- but also we have a virtualization capability, where we could bring down specialized runtime environments to support any number of different legacy or specialized applications.

Let's go to our panel. Michael Dortch, isn't this the best of both worlds, if we could have standardization and comprehensive browser capabilities and, at the same time, a virtualized environment, where we could support just about anything we needed to, but on the fly?

Dortch: Dana, my sainted, and very wise, mother used to say, where you stand depends on where you sit. So, whether or not this is a good thing depends entirely on where you sit, whether or not this is the best of both worlds or the best of all possible worlds. From a developer standpoint, I want one set of tools, right?

Gardner: Well, that's unlikely.

Dortch: Right, it's highly unlikely, but my mom also used to say, I was naively optimistic, so I am just going to plow forward here. Let me be more realistic. I want as few tools to manage and to learn as possible to reach the largest number of paying customers for this software that I'm trying to create. "Write once -- sell many times" is the mantra.

To get there, we're going to need a set of open standards, a set of really compelling services, and a set of really easy-to-use tools. If the model of the cloud has taught us anything yet, it's that, at the end of the day, I shouldn't have to care what those individual components are or even where they come from, but we know it's going to be a long, convoluted journey to get to that ideal space.

So the question becomes, if I am a developer with limited resources, what path do I go down now? I really don't think we know enough to answer that question. The Flash debate about Apple and its iPhone and its iPad hasn't seemed to shut down the Apple iTunes App Store yet, and I don't see that happening anytime soon.

Gardner: Adobe isn't going out of business either, nor is Microsoft.

Dortch: Exactly, exactly. Every time a Starbucks opens near me, none of the local coffee shops close. I don't get it, but it's the truth. So, at the end of the day, all that really matters in all of this discussion is a very short list of criteria -- what works, what's commercially feasible, and what's not going to require a rip and replace either by users or by developers. There's too much money on the table for any of the major players to make any of these things onerous to any of those communities.

Proprietary approaches

So, yes, there are going to continue to be proprietary approaches to solving these problems. As the Buddhists like to say, many paths, one mountain. That's always going to be true. But, we've got to keep our eyes on the ultimate goal here, and that is, how do you deliver the most compelling services to the largest number of users with the most efficient use of your development resources?

Until the debate shifts more in that direction and stops being so, I want to call it, religious about bits and bytes and speeds and feeds, progress is going to be hampered. But, there's good news in HTML5, Android, Chrome, and those things. At the end of the day, there's going to be a lot of choices to be made.

The real choices to be made right now are centered on what path developers should take, so that, as the technologies evolve, they have to do as little ripping and replacing as possible. This is especially a challenge for larger companies running critical proprietary applications.

Gardner: So, we've taken the developer into consideration. JP Morgenthal is a chief architect for a systems integrator (SI). What do you like in terms of a view of the future? Do you like the notion of a web-based primary vehicle for the new apps, and perhaps a way of supporting the older apps via virtualization services? What's your take architecturally?

Morgenthal: I like to watch patterns. That's what I do. Look at where more applications have been created in the past three years, on what platform, and in what delivery mechanism than in any other way. Have they been web apps or have they been iPhone/Android apps?

You've got to admit that the web is a great vehicle for pure dynamic content. But, at the end of the day, when there is a static portion of at least the framework and the way that the information is presented, nothing beats that client that’s already there going out and getting a small subset of information, bringing it back, and displaying it.

I see us moving back to that model. The web is great for a fully connected high-bandwidth environment.

I've been following a lot about economics, especially U.S. economics, how the economy is going, and how it impacts everything. I had a great conversation with somebody who is in finance and investing, and we joked about how people are claiming they're getting evicted from their homes. Their houses are being foreclosed on. They can barely afford to eat. But, everybody in the family has an iPhone with a data plan.

Look at what has become a necessity, at least in the U.S., and I know it's probably similar in Korea, Japan, and parts of Europe. Your medium for delivery of content and information is that device in the palm of your hand with about a 300x200 display.

The status thing

Kobielus: That was very funny. When people lose their fortunes, the last thing that the wives pawn is their jewelry. It’s the status items they stick with. So, the notion that the poor, broke family all have iPhones and everything is consistent with that status thing.

Morgenthal: Somebody sent me a joke the other day talking about how 53 percent of women find men with iPhones more attractive than those with Palm Pres and BlackBerry.

Gardner: So, JP, if I understand you, what you're saying is that the iPhone model, where you have got a client-server approach, but that client can come down freely and be updated as a cloud service to you, is the future.

Morgenthal: Yeah. And, on the desktop, you have Adobe doing the same thing with AIR and its cross-platform, and it's a lot more interactive than some of the web stuff. JavaScript is great, but at some point, you do get degradation in functionality. At some point, you have to deliver too much data to make that really effective. That all goes away, when you have a consistent user interface (UI) that is downloadable and updatable automatically.

I have got a Droid now. Every day I see that little icon in the corner: I have got updates for you. I have updated my Seismic three times, and my USA Today. It tells me when to update. It automatically updates my client. It's a very neutral type of platform, and it works very, very well as the main channel for delivering content to me.

Now, sometimes, is that medium too small to get something more? Yeah. So where do I go? I go to my secondary source, which is my laptop. I use my phone as my usual connectivity medium to get my Internet.

So, while we have tremendous broadband capability growing around the world, we're living in a wireless world and wireless is becoming the common denominator for a delivery vehicle. It's limiting and controlling what we can get down to the end user in the client format.

Gardner: Let's go back to Chad Jones at Neocleus. Tell us how the smartphone impact here plays out. It almost seems as if the smartphone is locking us down in the same way the PC did 15 or 20 years ago, with some caveats about these downloadable and updatable apps or data. How does that fit into virtualization? Is it possible to virtualize the smartphone as well and get the best of something there?

Jones: First of all, I'm very happy to hear that women find guys with the iPhone more attractive, because I am talking on my iPhone with you guys right now. So, this is a good thing. I feel like I need to walk outside.

Virtualization is on many fronts, but I think what we are seeing on the phone explosion is a very good point. I get most of my information through my phone. Through the course of my day, when I'm not sitting in front of my PC, it almost becomes my first source of a notification of information. I get to get into my information. I get to see what the basics of whatever that piece of information is.

Normally, if I want to go start researching deeper into it or read more into it, then the limiting factor of that screen and those types of things that we were talking about drives me to my PC.

More coming through

I think that you're definitely going to see more and more apps and those types of things coming through to the phones, but just by the sheer form factor of the phone, it's going to limit you from what you're able to do.

Now, what is that going to end up being? Is it going to be, yes, I am going to continue to have my laptop in my bag? I think that's going to be true for quite a while now. But, I certainly can see that, in the future, there could be just a sleeve that you throw your phone in and it just jacks up the screen resolution. Now, you have a form factor that you can work through.

But, to take it back to your whole question of virtualization on a phone, we haven't yet seen, to any great extent, the same types of platform-related application issues, when it comes to conflicts or to requiring a different phone or OS version.

Nor do we see the app-version-to-app-version compatibility problems that you see on the PC. From an app virtualization standpoint, I don't think there is a big need there yet, not until those apps get more complex. Then, maybe they will run into those issues, but I just don't see that that's necessarily going to happen.

From the multi-OS standpoint that virtualization would bring in, even from a management standpoint, I don't think the phone platforms have the same issues that you're going to see inside the PC platform. For me, the jury is still out on where, and whether, virtualization would truly play in the phone model.

Gardner: Let me flip it around then Chad. If more people like JP are getting more information and relying more on their phone, but they need that form factor and they need to support those legacy apps inside of an enterprise environment, why not virtualize the smartphone on the PC?

Jones: That would be interesting. Something from a reverse standpoint, absolutely. If it comes to a point where applications are primarily built for, let's say, the iPhone, you want to be able to have that emulator or something like that. That could definitely be a wave of the future. That way, you are crossing the bridges between both platforms. That could be an interesting approach to virtualization, but it's going to be on the PC side.

Dortch: I can't let this part of the conversation go by without raising a few user-centric concerns. Anyone who has done a webinar has clicked the button that says "Next Slide," and then died quietly inside waiting for the slide to load, because there has been latency on the net, some technological problem, or something like that -- whether you're an attendee or a presenter at one of these webinar conferences.

So, I'm thinking, if I am trying to do business-critical work under deadline, if it's the end of the quarter and I am trying to close a deal or something like that, and I click the button that's supposed to download the next virtualized client service that I am supposed to be using and it doesn’t load, I am going to start putting together a list of hostages I plan to take in the next few minutes.

Gardner: That's a point that's always there, Michael. We all need ubiquitous broadband. There is no question about it.

Moving complexity

Dortch: Yeah, but I worry about what I've seen. When you talk about watching patterns, over the past 30, 35 years, one of the things I've seen is that complexity rarely goes away but it moves around a lot.

Is one of the things that may be holding back client virtualization the simple fact that, when you look at the limitations of most client devices, especially hand-held devices and even smartphones, and at the limitations not only of the service providers' networks but of their ability to monitor and bill accurately for such granular services, these things are slowing the growth of technologies that offer a lot of really great promise but just don't seem to have taken off yet?

Gardner: Sure there are going to be limiting factors, but we're trying to look at this also through an enterprise lens. We're thinking about how to support the old and the new, but do it in such a way that we're not tied to a client-side platform limitation, but we're really limited only by what we tend to do in terms of business process and applications and data.

Dave Linthicum, let's go back to you. The discussion about whether it's a PC or a smartphone, whether it's HTML5, web services, or a virtualized runtime environment -- do those concerns get moved aside pretty quickly when you think about where the application logic is going, that it's primarily becoming a business process across ecosystems of services and perhaps hybrids of suppliers?

Linthicum: Yeah, it's going to completely move. There are some prototypes today, such as the stuff Google provides, and they do it on mobile devices, as well as web, and they also provide their own OS, which is web-based. That, in essence, is going to be kind of a virtualized client, such as what we are talking about during this discussion. But, going forward, it's really not going to make a difference.

If you think about it, we're going to have these virtualized desktops, which come out of the cloud we talked about earlier and which communicate with our computers, but also with our cellphones or any other way in which we want those applications externalized to us, so they become part of the process. That's where we are heading.

The power of the cloud, the power of cloud computing, the power of virtualized desktops such as this have the ability to do that. It's the ability to put everything that I own and that I work with, and all my files and all my information, up into a provider, a private cloud, and then have them come down and use them on whatever desktop, whatever device, that I want to use, whether it's pad computing, or whether it's on my TV at home at night. We're heading in that direction. We're getting used to that now.

As JP pointed out, we use our cellphones more than our computers every day. I guarantee you, half the guys on the call today have iPads. Admit it guys, you do. And, we're using those devices as well. We're starting to carry these things around, and ultimately, we're learning how to become virtualized ourselves.

I spent this weekend making sure I put everything that I have up into Google, so that I can get to it from the different devices out there. That's where things are going to head.

Gardner: So, the synchronization of the config files and the data files in the sky, that's the real lock-in. That's where your relationship with the vendor counts, and increasingly, abstraction away from the client means less and less of a true tie-in there.

Let's go to Jim Kobielus. Do you like the idea of a cloud-based world where the process and data in the sky is your primary relationship, and it's a secondary relationship, as JP said, towards whatever the client is?

Getting deconstructed

Kobielus: Yeah. In fact, it's the whole notion of a PC being the paradigm here that's getting deconstructed. It has been deconstructed up the yin yang. If you look at what a PC is, and we often think about a desktop, it's actually simply a decomposition of services: rendering services, interaction services, connection and access, notifications, app execution, data processing, identity and authentication. These are all services that can and should be virtualized and abstracted to the cloud, private or public, because the clients themselves, the edges, are a losing battle, guys.

Try to pick winners here. This year, iPads are hot. Next year, it's something else. The year beyond, it's something else. What's going to happen -- and we already know it's happening -- is that everything is getting hybridized like crazy.

All these different client or edge approaches are just going to continue to blur into each other. The important thing is that the PC becomes your personal cloud. It's all of these services that are available to you. The common denominator here for you as a user is that somehow your identity is abstracted across all the disparate services that you have access to.

All of these services are aware that you are Dave Linthicum coming in through your iPad, or you are Dave Linthicum coming in through a standard laptop web browser, and so forth. Your identity and your content are all there and all secure, which, in a sense, brings process into it.

You don't normally think of a process as being a service that's specific to a client, but your hook into a process, any process, is your ability to log in, have your credentials accepted, and have all of your privileges, permissions, and entitlements automatically provisioned to you.

Identity, in many ways, is the hook into this vast, personal cloud PC. That’s what’s happening.

Gardner: So, if I understand this correctly, we're saying that the edge device isn't that important. And, as we have said in past shows, where the cloud exists isn't that important either: private, public, an intranet, a grid utility.

What is important? Are we talking about capturing the right data and the right configuration and metadata that creates the process? And if that's the case, Michael Rowley, that might be good news for you guys, because you're in BPM. Can we deconstruct what's important on the server and on the edge, and what's left?

Rowley: That's a great question, because a lot of applications will really mix up the presentation of the work to be done by the people who are using the application, with the underlying business process that they are enabling.

If you can somehow tease those apart and get it so that the business process itself is represented, using something like a business process model, then have the work done by the person or people divided into a specific task that they are intended to do, you can have the task, at different times, be hosted by different kinds of clients.

Different rendering

Or, depending on the person, whether they're using a smartphone or a full PC, they might get a different rendering of the task, without changing the application from the perspective of the business person who is trying to understand what's going on. Where are we in this process? What has happened? What has to happen yet? Etc.
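
As a rough sketch of that idea, the snippet below picks a rendering for the same task data based on screen width; the task fields, element ID, and markup are hypothetical, purely to illustrate one task model feeding two presentations.

```javascript
// Sketch only: one task object from the process engine, two renderings
// chosen by form factor. Field names and markup are hypothetical.
function renderTask(task) {
  var isSmallScreen = window.screen.width < 480;   // crude form-factor check
  if (isSmallScreen) {
    // Phone rendering: just enough information to act on the task.
    $("#task").html("<h2>" + task.name + "</h2><button>Complete</button>");
  } else {
    // Desktop rendering: the full form for the same underlying task.
    $("#task").html(
      "<h1>" + task.name + "</h1>" +
      "<p>" + task.description + "</p>" +
      "<button>Complete</button>"
    );
  }
}
```

Either way, the business process model on the server is unchanged; only the presentation of the task differs.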

Then, for the rendering itself, it's really useful to have that be as dynamic as possible and not have it be based on downloading an application, whether it's an iPhone app or a PC app that needs to be updated, and you get a little sign that says you need to update this app or the other.

When you're using something like HTML5, you can get a lot of the functionality of the apps that you currently have to download, including, as somebody brought up before, handling what happens when you aren't connected or are only partially connected.

Up until now, web-based apps very much needed to be connected in order to do anything. HTML5 is going to include capabilities that make much more functionality available even when you're disconnected. That will take the technology of a web-based client into even more of the circumstances where you would currently need to download an app.
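
As a hedged sketch of what that disconnected support can look like with HTML5-era browser APIs, the snippet below queues task completions in localStorage while offline and replays them when the connection returns. The endpoint and storage key are made up for illustration.

```javascript
// Sketch of offline support: queue task completions in localStorage while
// disconnected and replay them when the browser comes back online.
// The endpoint and storage key are hypothetical.
function completeTask(taskId) {
  if (navigator.onLine) {
    $.post("/services/tasks/complete", { id: taskId });
  } else {
    var queue = JSON.parse(localStorage.getItem("pendingCompletions") || "[]");
    queue.push(taskId);
    localStorage.setItem("pendingCompletions", JSON.stringify(queue));
  }
}

window.addEventListener("online", function () {
  // Flush anything that was queued while disconnected.
  var queue = JSON.parse(localStorage.getItem("pendingCompletions") || "[]");
  $.each(queue, function (i, taskId) {
    $.post("/services/tasks/complete", { id: taskId });
  });
  localStorage.removeItem("pendingCompletions");
}, false);
```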

Gardner: We're already seeing that with some SaaS apps, including some of the Google stuff, so that doesn't seem to be a big inhibitor. What I hear you saying, Michael, is that the process information, the data, the configuration data, is what's important and valuable.

If we can burst out more capacity on the server and burst down whatever operating environment we need for the client, those things become less of a hurdle to the value, the value being getting work done, getting that business process efficiency, getting the right data to the right people. Or am I overstating it?

Rowley: No, that's exactly right. It's a little bit of a change in thinking for some people to separate out those two concepts, the process from the UI for the individual task. But, once you do, you get a lot of value for it.

Gardner: Chad Jones, do you also subscribe to this vision, where the data process configuration information is paramount, but that bursting out capacity for more cycles on the servers is going to become less of an issue, almost automatic? Then, the issuance of the right runtime environment for whatever particular client is involved at any particular time is also automatic? Do you think that’s where we are headed?

Jones: I can see that as part of it as well. When you're able to start taking abstraction of management and security from outside of those platforms and be able to treat that platform as a service, those things become much greater possibilities.

Percolate and cook

I believe one of the gentlemen earlier commented that a lot of it needs some time to percolate and cook, and that's absolutely the case. But, I see that within the next 10 years, the platform itself becomes a service, in which you can possibly choose which one you want. It's delivered down from the cloud to you at a basic level.

That’s what you operate on, and then all of those other services come layered in on top of that as well, whether that’s partially through a concoction of virtualization and different OS platforms, coupled with cloud-based profiles, data access, applications and those things. That’s really the future that we're going to see here in the next 15 years or so.

Gardner: Dave Linthicum, what’s going to prevent us from reaching that sort of a vision? What’s in the way?

Linthicum: I think security is in the way. Governance, security, the whole control issue, and those sorts of fears weigh on the existing enterprises and the people who are going to leverage this kind of technology.

The people who are doing computing right now in a non-virtualized world are going to push back a bit on it, because it's a loss of control. In other words, instead of just having something completely on a system that I'm maintaining, it's going to be in a virtualized environment, with things resourced to me, allocated to me, through some kind of a centralized player. And, if they go down -- say Google goes down today, and people are dependent on Google Docs or Gmail or other sorts of things -- I'm dead in the water. That's really going to hinder adoption.

We're going to have to prove that we can do things in a secure, private way. We're going to have to make sure we get systems that are going to comply with the laws that are out there and we need to be very aware of those.

More often than not, we've got to trust the players that are going to drive this stuff -- trust that the architecture itself is going to be viable, and that the players themselves are going to provide a service that's reliable.

Dortch: I agree with everything David said and, from an enterprise standpoint, I hasten to add, there is the problem of the legacy systems. A lot of people are still running IE 6, and so HTML5 doesn’t really have much to offer them yet. From an IT management standpoint in the enterprise, it’s going to require some pretty fancy dancing in concert with the vendors and the developers who are pushing all this stuff forward to make sure that no critical user base is left behind, as you're moving forward in this way.

Gardner: Well, that’s why we are talking about this as a 15-20 year horizon. It’s not going to happen overnight.

JP Morgenthal, the trust issue. It seems that we've seen vendors trying to capitalize on the client, thinking that if you own the client, you can then control the process. We've seen other vendors say, if we can control the cloud, we can control the process. But what if you can't control the server environment and you can't control the client environment?

Why not just go after that all-important set of services? I'm thinking about an ecosystem or marketplace of business processes, perhaps something like what Salesforce is carving out. Any thoughts about who to trust and where the pinch points are in all this?

Interesting dilemma

Morgenthal: Trust is an interesting dilemma in a cyber environment. We're in an environment where the ability to defend is constantly about 10 paces behind those that are attacking. It’s the Wild West and the criminals outnumber the sheriffs 10:1. There is more money to be made robbing the people than there is protecting them.

The other thing that we have to deal with, with regard to trust, is that constant factor of anonymity. Anonymity is very problematic in this environment. Basically, it creates two classes of users. It creates a trust environment user and it creates an anonymous, public Internet user.

In the public Internet, you have your services, and they're potentially advertising-based or driven by some other revenue medium. But, you have to realize you are not going to know who your user is. You're not going to be able to be intimate about your user. Trust is minimal there. You do your best to minimize the potential for loss of data, for inappropriate use, for access to the services. Services are no different than an application at the end of the day.

I had a great meeting with the CSO from the Department of Homeland Security (DHS) and he said it best, "If I could do away with usernames and passwords, my life would be a billion times easier." Unfortunately, that's the number one medium for identity and credentials on the anonymous Internet. Until the day we have personal identity verification (PIV) cards, and they plug into machines, and we have guaranteed identity authentication through a credible medium, we're going to be dealing with that.

The alternative is that I'm going to create my secure net, my private net, where only I know the people and the users that are on that medium. That provides me a lot more flexibility and a lot more power. I can control what's happening on that, because I know who my users are.

So, we end up with these two classes of users. I don't see them going away anytime soon. Even on a 20-year horizon, the ability to outthink the smartest hacker is unlikely. I think we have to assume that we now live in a world where we are going to be attacked. The question is how can we identify that attack quickly? How can we minimize the potential downside from those attacks? It's a lot like living in a world with terrorists.

Gardner: Jim Kobielus, JP had some interesting thoughts that you need to authenticate through the client, or through the service provider or cloud, in order to make this work. But, is there a possibility that authentication could evolve into a cloud service? You authenticate through a process of some kind.

I'm going out on a limb here, clearly, but you're the guy who tracks BPM and data. Where does the enterprise environment fall in this? Is there a way to decompose the client and the server but still have enterprise caliber computing going on?

Kobielus: Oh sure, there is. I've sketched out seven layers of client services that can be put into a private cloud. Clearly, one of the critical pieces of infrastructure that the cloud needs to have, as I said, is identity management. It's also very much about public key infrastructure (PKI) to enable strong authentication, multi-factor authentication, webs of trust, and so forth.

You need to begin to think through the whole client computing equation, if you're an enterprise, around federated identity, and look at the common standards -- Security Assertion Markup Language (SAML) and so forth -- to enable that, or look at things like OpenID.

Unable to trust

So that's quite important, Dana, because fundamentally it's moving away from a world where the PC is a personal computer that I trust: it's my resource, I don't have to depend on anybody else, all my data, my apps, everything is here. I'm moving to a world where PC means personal cloud. It's your cloud that I'm just renting a piece of, or I have a piece of it, where I can't really trust you at all in some fundamental sense.

My mnemonic here for the cloud and why we can't trust it is, bear with me, SLA-HA-NA. SLA -- service level agreements; HA -- high availability; NA -- not applicable, not available. If you don’t have common identity, common security, and common federation standards within an enterprise cloud, then that's not ready for full client virtualization.

Look at the public cloud. Dana, your article on 'Dealing With the Dearth of SLAs in the Cloud' gets to the point where the public cloud is definitely not ready for enterprise-grade client virtualization, until we get identity nailed down, if nothing else.

Quite frankly, I'm a bit jaundiced on that, because in the middle of the last decade, I was with a large B2B trading exchange that was working on federated identity, trust standards, and relationships among thousands upon thousands of companies.

Getting those trust relationships worked out, getting the policies written, getting all the lawyers to agree and getting the common standards just to make one industry specific trading exchange work was fearsomely difficult. Those trust issues are just going to be an ongoing deterrent to the full virtualization of clients into public cloud environments.

Gardner: Well, we've started at reality. We've gone out to a 15-year horizon, and now we are coming back in to the current day. Chad Jones, where does client virtualization fit in well? What does it solve? What’s its value to the typical enterprise, rather than thinking about this in terms of abstractions in the future?

Jones: The first thing is that the term client virtualization ends up getting applied to a lot of different things. Just as a point of clarification, there are virtualized desktops, which are hosted on the server side, like the VDI infrastructures and the server-based computing of days past, now niche status. But true client virtualization is the ability to abstract away the hardware resources on the endpoint client and then run virtual objects on top of that, hosted locally.

For the near term, as the client space begins to shake out over the next couple of years, the immediate benefits are first around deployment of at least the Windows platform. The current state is either an image that's done at Dell or, more often the case, the OS gets deployed whenever I do a hardware refresh, every three to four years. We take it to a point where you can simply get a PC and put it onto the network.

You take out all the complexity of the deployment questions and the installation issues that can cause so many different problems, combined with things like normalizing device-driver models, so that I can get that image and that computer to the corporate standard very, very quickly, even if it's out in the middle of Timbuktu. That's one of the immediate benefits.

Plus, start looking at help desk and the whole concept of desktop visits. If Windows dies today, all of your agents and recovery and those types of things die with it. That means I've got to send back the PC or go through some lengthy process to try to talk the user through complicated procedures, and that's just an expensive proposition.

Still connect

You're able to take remote-control capabilities outside of Windows, into something that's hardened at the PC level, and say, okay, if Windows goes down, I can still connect to the PC remotely, as if I were local, and control it. It's like what IP-based KVMs did for the data center. You don't even have to walk into the data center now. Imagine that on a grand scale for client computing.

Couple a VPN in with that. Someone is at a Starbucks, 20 minutes before a presentation, with a simple driver update that went awry, and they can't fix it. With one call to the help desk, they're able to remote to that PC through the firewalls and take care of that issue to get them up and working.

Those are the areas that are the lowest-hanging fruit, combined with amping up security in a completely new paradigm. Imagine an antivirus that works by looking inside of Windows but doesn't operate in the same resource or collision domain -- the execution environment where the virus is actually working, or trying to execute.

There is a whole level of security upgrade you can do, where you catch the virus in the space between the network and a compatible execution environment in Windows, and quarantine it before it even gets to an OS instance. All those areas have huge potential.

Gardner: It seems as if what you are doing is ameliorating some of the rigidity of the traditional client model but still keeping it in enough of a sense that it's going to satisfy a lot of what enterprises need to do. Is that a fair encapsulation?

Jones: Yeah, absolutely. You have got to keep that rich user experience of the PC, yet change the architecture so that it becomes highly manageable, but also flexible as well.

Imagine a world, just to cut quickly to the utility sense, where I've got my call center of 5,000 seats and I'm doing an interactive process, but I have a second core dedicated to a headless virtual machine that's running mutual fund arbitrage apps or something like that in a grid, and feeding that back. You now have 5,000 PCs doing that for you at a very low cost, as opposed to building out whole data-center capacity to take care of it. Those are the kinds of futures where this type of technology can take you as well.

Gardner: So, virtualization is bringing flexibility by keeping the same essential model, it’s just a better architectural approach to it.

Michael Rowley, what you guys have been doing at Active Endpoints with your client is, for newer applications, perhaps a stepping stone to the future, but it's also a way of protecting yourself. Because, if you're running in the browser, you don't really care so much about what the client is, and you can also extend out from PCs to smartphones pretty quickly.

Rowley: Yes. You end up being able to support clients and support them even as they change what device they are on. They are not maintaining local data, so that they can move from device to device and even take a single task that they're working on, work on it on one kind of form factor at one point and another kind of at another point in time. This is the great promise of cloud-based computing taken all the way into the application and used throughout the application. I really believe a lot more applications are going to be based that way.

Gardner: I've got a sneaking suspicion that organizations that embrace both of these models have, in a sense, put some insurance policies in place, a backwards compatibility, forwards compatibility, services orientation, but also maintaining that all important enterprise levels of security, reliability, control, and management.

Rowley: One of the things that is really new, and that I think will catch on, is this idea that web-based apps might communicate with the server through what the application considers the services tier, the business tier, rather than through a presentation tier on the server, because the client has gotten powerful enough to do the full presentation on its own.

Gardner: I want to again thank you all for joining. We have been here talking about the future of clients and services with cloud and virtualization impacts, as well as how to keep this in the real world sphere of what enterprises need to do their jobs.

We have been talking with Chad Jones, Vice President for Product Management at Neocleus. Thank you, Chad.

Jones: Thank you, Dana.

Gardner: We have also been here with Michael Rowley, CTO of Active Endpoints. Thanks, Michael.

Rowley: Thanks, Dana.

Gardner: Jim Kobielus, Senior Analyst at Forrester Research. Appreciate your input, Jim.

Kobielus: Always a pleasure.

Gardner: Michael Dortch, Director of Research at Focus. Appreciate it, Michael.

Dortch: Thanks for the opportunity, Dana. Thanks, everyone.

Gardner: JP Morgenthal, Chief Architect, Merlin International. Thank you, JP.

Morgenthal: Thank you, Dana. Fun as usual.

Gardner: Dave Linthicum, CTO, Bick Group. We appreciate your input as well, Dave.

Linthicum: Thanks Dana.

Gardner: I also need to thank our charter sponsor for the BriefingsDirect Analyst Insights Edition podcast, and that is Active Endpoints. This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks for listening and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Charter Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 52 from April 26, 2010 on client-side architectures and the prospect of heightened disruption in the PC and device software arenas. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

Wednesday, May 19, 2010

HP Shows Benefits From Successful Application Consolidation With Own Massive Global Supply Chain Project

Transcript of a BriefingsDirect podcast on how HP tackled an internal multi-year effort to streamline supply chain efficiencies and effectiveness through applications consolidation.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Access more information on Application Consolidation.
Read the full-length case study on HP's Application Consolidation.
Learn more about the Application Transformation Experience Workshop.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on best practices and execution accelerators for large, often global, application-consolidation projects. We'll take a look at a telling case study: a massive multi-year application-consolidation project at HP that involved hundreds of applications and thousands of people around the world.

We'll look first at why the Global Part Supply Chain project at HP was undertaken, but just as importantly, why the project needed to be invigorated after it bogged down under sheer scale and complexity. The project then became hugely successful, however, and we'll learn how and why.

These are by no means trivial projects; they often involve every aspect of IT and require the backing of business leadership and users to be done well. The goal through these complex undertakings is to radically improve how applications are developed, managed, and governed across their lifecycle to better support dynamic business environments. The stakes, therefore, are potentially huge for both IT and the business.

We're here with an executive from HP to look at proper planning and execution for massive application-consolidation projects by specifically looking at an HP project itself.

Please join me now in welcoming Paul Evans, Worldwide Marketing Lead on Applications Transformation at HP. Welcome to the show, Paul.

Paul Evans: Hi, Dana.

Gardner: Tell me why applications are so numerous in many of these extended business processes, about the whole notion of part supply chains, and why that's such a big deal for HP.

Evans: As you can imagine, HP is an extremely large organization. It makes products, as well as sells services, etc. In terms of product, just imagine your average PC or your average server and think of the number of components that make up that device. It runs into the hundreds of thousands, whether it's memory chips, disk drives, screens, keyboards, or whatever.

For a company like HP, in the event that someone needs a spare part for whatever reason, they don't expect to wait a significant period of time for it to turn up. They want it delivered 24 hours later by whatever means that suits them.

So, it's essential for us to have that global supply chain of spare parts tailored toward the ones that we believe we need more -- rather than less -- and that we can supply those parts quickly and easily and, at the same time, cost effectively. That's important for any organization that is dealing in physical components or in the provision of a service. You want to maintain customer satisfaction or increased customer satisfaction.

Customer centric

For us, it was essential that a massive global supply chain organization was extremely customer-centric, but at the same time, very cost-effective. We were doing our utmost to reduce costs, increase the agility of the applications to service the customers, and fuel growth, as our organization and our business grows. The organization has got to respond to that.

So the primary reasoning here was that this is a large organization, dealing with multiple components with pressures on it both from the business and the IT sides.

Gardner: With HP, of course, there have been mergers and acquisitions over the years. Suppliers come and go. Product lines may start small, but then ramp up rapidly. So, we're talking about many different organizations and many different applications that had to be brought into this now hopefully lean process.

Evans: One of the primary reasons we had to do this is that HP has been an amalgam of companies -- the original Hewlett-Packard, Compaq, Tandem, and DEC. All of these organizations had their own bills of materials and their own skills, and basically this thing has just grown like Topsy.

What we were trying to do here was to say that we just couldn't continue to treat these systems as un-integrated. We had a lot of legacy environments that were expensive to run, a lot of redundancy, and a lot of overlap.

The goal here clearly was to produce one integrated solution that treated the HP customer as an individual, and in the back-end consolidated the applications -- the ones we really needed to move forward. And also, a goal was to retire those applications that were no longer necessary to support the business processes.

The whole notion of this coming about through mergers and acquisitions is very common in the marketplace. It's not unique just to HP. The question of whether you just live with everybody’s apps or you begin to consolidate and rationalize is a major question that customers are asking themselves.

Gardner: If you look at this problem from the perspective of IT, of course, you have multiple platforms, legacy applications, and a mixture of different architectures and hardware. And they have their own set of requirements.

But, if you look at it through the lens of the user, many users are wed to that application, even if it's an older interface. They don't really care about what's underlying in terms of infrastructure. They just want to be able to get their work done without being disrupted.

What we have are perhaps multiple agendas that need to be aligned, where politics and persuasion come into play. Tell me what went awry for a period of time with this project, and how some of these issues around multiple agendas can be managed.

Siloed thinking

Evans: Well, this is a challenge in any situation, and it has been true not only for this particular supply chain project, but for all of us. In the rationalization that has taken place inside HP around its IT organization and technology, we found that, because we are human beings, most people think in a very siloed way.

They see their suite of applications supporting their business. They like them. They love them. They’ve grown up with them, and they want to continue using them. Their view is, "Mine is perfect to suit my business requirement. Why would I need anything else?"

That's okay when you're very close to the coalface. You can always make decisions and always deem that the applications you use are strategic -- an interesting word that a lot of people use. But, as you zoom out from that environment and begin to get a more holistic view of the silos, you can begin to see that the duplication and replication are grossly inefficient and grossly expensive.

We've seen that in HP. We saw it in this particular supply chain situation. We were looking at three totally different solutions in three different companies: Compaq, HP, and DEC. We were looking at over 300 applications. Clearly, that was not the way forward, because it wasn't only a cost-reduction exercise.

If you're looking into the future and saying you need a much faster, speedier, agile situation to be working with, you can't do that in the whole legacy environment. It's just something that's tying you down. That problem is not unique to HP. I definitely understand that.

Gardner: When you decided to look into your parts supply chain activities, I understand there were hundreds of applications involved, multiple sites, geographies, and countries. Was this something that was driven by the business? Was it driven by IT? Both? How did the impetus for this begin?

Evans: Well, from the IT side, there was clearly a view from the top down that said living with 300 applications in the supply-chain world was unacceptable. But from the business side, the real push was that we had to improve certain metrics. We have a metric called spend-to-revenue ratio, which is, in fact, what we are spending on parts compared with what we are getting in terms of revenue. We were clearly below par in that space.

We had some business imperatives that were driving this project that said we needed to save money, we needed to be able to deliver faster, and we needed to be able to do it more reliably. If we tell a customer they're going to get the part within 24 hours, we deliver in 24 hours -- not 36 or 48, because we weren't quite sure where it was. We had to maintain the business acumen.
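For readers skimming the transcript, here is a minimal arithmetic sketch of the spend-to-revenue metric Evans describes. The input figures are hypothetical, not HP's actual numbers; the 19 percent improvement applied at the end is the figure Evans cites later in the discussion.

def spend_to_revenue(parts_spend, revenue):
    # Ratio of spending on spare parts to the revenue those parts support.
    return parts_spend / revenue

# Hypothetical figures for illustration only.
before = spend_to_revenue(120.0, 1000.0)      # 0.12
after = before * (1 - 0.19)                   # apply the 19 percent reduction cited later
print(round(before, 3), round(after, 3))      # 0.12 0.097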

Complexity kicks in

At the same time, when viewed from a technological angle, we were running old, expensive applications. As always, when you're running far too many applications, the complexity kicks in. How does that all work?

This volume of applications -- or applications bloat, as some people call it -- is a real impediment to agility. You just can't move forward quickly with 300 apps in an environment where you know you're probably looking at a tenth of that. It’s a bit like saying, "How could I run fast if I have 300 feet?" You can't. You can do it with two, but you couldn't do it with 300.

So, our whole goal here was to align business and IT in terms of a technological response to a business driver.

Gardner: From the business side, I suppose they're primarily concerned about business process and the applications. They're probably not concerned with some of the more strategic, long-term IT implications -- such as how to better manage applications across their lifecycle.

At that point, it gives IT the opportunity to come and say, "Let's look at this methodologically. How do we now put in the governance, put in the processes where we can create the applications, manage them, but also sunset them appropriately?"

So how does that factor in, that notion of making this a mature process, and no longer a cherry-picking, complex stew of different styles?

Evans: The area you just addressed is probably one of the primary ones. When we started the project, we were basically driving it by committee. Individual business units were saying, "I need applications x, y, z." Another group says, "Actually, we need a, b, c." There was virtually no ability to get to any consensus. The goal here was to go from 300 apps to 30 apps. We were never going to do it if everyone could self-justify the applications they needed.

What we did was discard the committee approach. We took the approach, basically led by one person from the business side, who had supply chain experience, and one from the IT side who had supply chain experience, but both had their specialist areas. These two people were the drivers. The buck stopped with these people. They had to make the big decisions.

To support them, they had a sponsorship committee of senior executives, to which they could always escalate, if there was a problem making a final decision about what was necessary.

Randy Mott, the HP CIO, has the direct support of Mark Hurd, the HP chairman and CEO. In my experience, that's absolutely essential in any project a customer undertakes. They have to have executive sponsorship from the top.

If you don't, any time you get to an impasse, there's no way out. It just distills into argument and bickering. You need somebody who's going to make the decision and say, "We're going this way and we're not going that way."

Getting on track

So for us, setting up this governance team of two people to make the hard decisions, supported by a project-management team to go off and enact the decisions that were made, was how we really began to move this project forward, get it on track, get it on time, and get it on budget.

Gardner: I see. So the impasse -- the bogging down of this process, where it went off the rails in terms of the expected timeline -- came from committee versus committee. It was when you broke through that, almost at an organizational and managerial level, that you were able to accelerate. Is that right?

Evans: Absolutely. In my interactions with customers, I see this time and time again. We've always said that the experience HP has gained internally we would share with our customers. We even have a regular customer event, where we share all our best practices, and we are not afraid to share the things that go wrong. In this instance, when we started by saying, "Let's have a big committee to make the decisions," it was the wrong approach. We were going nowhere. We had to rationalize and say no.


Two respected individuals, one from the IT side and one from the business side, were totally aligned on what they were doing and shared the same vision of what they were trying to achieve. By virtue of that, we could enforce decisions throughout, sometimes unpopular ones.

People sometimes do not understand why a particular application is going to get turned off in place of another. But those were the hard realities we had to accept to get the cost down and get the efficient and effective result.

Gardner: So, we're talking now about decision-making. We're talking about governance. We're talking about the intersection of IT governance with political governance. This is something you can't buy. You don't necessarily purchase a box that does this. This is a combination of technology, professional services, methodology, standards, experience, and even, I would imagine, a change in management among the leadership.

Tell me how HP internally focused across these multiple disciplines -- not just product, not just technology -- and then how that related to what you do with your supply chain customers now?

Evans: A lot of people would say it's just a technological problem. You've got 300 apps running on old platforms using old technology, and you want to use the latest and greatest, the fastest, the smartest ... whatever. But, as we've discussed, at least 50-60 percent of the solution has nothing to do with technology. It all had to do with making decisions -- making the right decisions that would lead us to the right outcome for the business.

We knew what we wanted to achieve. We knew that we had to be more agile. We had to get our costs down. We had to optimize this whole spend-to-revenue ratio. As always with the supply chain business, inventory had to go down. Going up is not a good plan, because you're paying for parts that are sitting on the shelf.

Agile and sleek

One of the goals was to get the solution so agile and sleek that we didn't have to use air transportation to get parts from A to B. We could use surface transportation. If we could put the parts in the right place -- where they needed to be to get to the right customer -- rather than having to use air, which of course is very fast but very expensive compared to surface, we could also get a dramatic reduction in the CO2 emissions we were putting out by virtue of that transportation.

There were lots of things here that had nothing to do with technology. They all had to do with business benefits and outcomes that we wanted to achieve, both internally to HP, like saving money, but also to the customers in terms of delivery of a better service.

Some will call them peripheral, and some may call them fundamental, but things like using different transportation techniques to cut CO2 we felt were pretty important.

What we've done, as always with these experiences, is translate them into how we can be smarter, better, and more helpful to our customers. In the whole HP IT story, we have poured out so many best practices and good ideas -- and bad ideas -- which we're quite happy to share with people.

Similarly, we'd like to think that those organizations that are out there with a supply chain challenge could now look at this and say, "Maybe we could do the same thing." Definitely, the alignment between business and IT is probably the most important facet. Which platform, which network, which disk drive, or which operating system -- you can have a lot of fun with that. But, in this instance, a lot of the success was driven by setting up the right governance and decision-making structure with the right sponsorship.

Gardner: Now, I'd like to look at some of the paybacks. As I understand it, you turned the corner on this project back in 2006. At the time, you didn't realize that these were "the good old days." The agility and lean aspects of processes in a supply chain are great during growth, but they're also extremely important, when there is a contraction.

So, is there an opportunity to look back and say, "Wow. We didn't know it at the time, but by conducting this application consolidation with the proper governance, we were able to dial down on our delivery of products and services when that was required, and now dial back up." That probably is something of a lesson at the economic level, but can we apply some metrics of success from your project? Any thoughts about what the paybacks were, especially in a topsy-turvy general economy?

Evans: Taking the more holistic view and talking to a lot of customers, they would say, "Maybe 18 months ago or two years ago, we knew that we had a legacy app problem. We knew we were spending too much on the underpinning infrastructure, but we could sort of afford it. Was it perfect? No. Was it a bit of a mess? Yes. Should we have really been focusing on the legacy apps issue, thinking maybe the economy was never going to sustain what it was?" But, of course, hindsight is a wonderful thing.

Smarter and better

Now, when I'm talking to clients, their comments are, "We need to be smarter. We need to be better. We need to retain our customers, deliver better quality of service to our customers, and we have to do it at a lower cost." We've seen a massive change in the approach to the legacy environment, whether that's applications or infrastructure.

Over the last 12 months, people have realized that it is now time to act, for those organizations that want to remain competitive and innovative. Unfortunately, I still see a lot of companies that believe that doing nothing is the thing to do and will just wait for the economy to rebound. I don't believe it's going to rebound to the same place. It may come back and it may be stronger, but it may end up in a different place.

The organizations that are not waiting for that, but are trying to be innovative, competitive, move away from the competition, and give themselves some breathing space are the ones who are going to sustain themselves.

Within HP, we were not in great shape a few years back in terms of our IT spend. It was way too high, and we openly admit that. We had to take some fairly drastic actions, and that is probably well-known and documented. We went from about 6,000 apps down to about 1,700 today, and we'll probably plateau at around 1,500. We went from 83 data centers to 6.

We were not in great shape, but we took action. I don't believe we took action because we knew the economy was going to change. We don't think we're that clever. We just had to take it, because economically -- and technically -- it was just not the right solution.

We had to focus on driving this both from business and IT. As I said, in this small example, we went from 300 apps to 30 apps. We had a 39 percent reduction in our inventory dollars. We reduced our supply chain expenses. We reduced the cost of doing next-day delivery. We're heading toward reducing our CO2 emissions by 40 percent on those next-day deliveries.

But overall, across the global supply chain, we drove this measure of spend-to-revenue down by 19 percent. We're running a better, faster, cheaper organization that is more agile. As you said, it positions us better to exploit situations as they change, so they become more of an opportunity than a threat.

Gardner: For those organizations that are in some sort of multi-year approach, looking at their portfolio of applications -- probably shocked by how many there are, the redundancies, and what the actual landscape looks like, but perhaps also a little chagrined by the daunting complexity -- where do you suggest they start for resources? Is there a way to start thinking strategically about the technology, business, and process issues, as well as the governance, operational, and methodological issues?

First, take stock

Evans: A number of people I have talked to say that their biggest challenge is that they don’t know what they’ve got. So, first and foremost, the advice is always that you need to take stock of what you’ve got, because if you don’t know what you are dealing with, then you’ve got a problem.

I’ll give you an example. I spoke to two large organizations at a recent event we ran here in HP. One organization openly admitted they didn't know what their problems were. They knew they had a massive, complex, and growing applications portfolio that was basically losing touch with the business. That was one side.

Another customer openly admitted that they knew the applications that were causing them problems. They said, "We have these 14 apps that are killing us, and we need to do something about it. We need to streamline those apps. We need to use contemporary technology."

They need a new software environment that gives them a much smaller code base -- if they are moving from COBOL to something like Java or C# -- along with new database technologies and new testing techniques that don't push all of the testing to the end of the cycle. Because, when time gets tight, what gets cut? The testing gets cut.

That was a good example we used in the supply chain example. We used an HP product, Quality Center, that gave us this process, this rigor that said, "We're going to test things. We're going to throw out different scenarios, and we're going to test it to death. We're not going to test it to death at the end when it’s too late, but we're going to do that throughout the development cycle so that we can make those adjustments and modifications as we go along."
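Quality Center itself is a commercial test-management product and is not shown here. As a minimal, hypothetical sketch of the "test throughout the cycle, not at the end" idea Evans describes, a tiny unit-test suite that runs on every change might look like the following; the parts-catalog function and test names are illustrative assumptions, not anything from the HP project.

import unittest

def lookup_part(catalog, part_number):
    # Return the catalog entry for a spare part, or None if it is not stocked.
    return catalog.get(part_number)

class LookupPartTests(unittest.TestCase):
    # Intended to run on every code change, not only at the end of the release.
    def test_known_part_is_found(self):
        self.assertEqual(lookup_part({"DISK-01": "500 GB drive"}, "DISK-01"), "500 GB drive")

    def test_unknown_part_returns_none(self):
        self.assertIsNone(lookup_part({"DISK-01": "500 GB drive"}, "MEM-99"))

if __name__ == "__main__":
    unittest.main()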

So, we ended up with a high-quality product at the end. Talking to a lot of customers, the speed with which they can develop, as well as modify, applications -- which is connected directly to customer satisfaction -- is paramount.

In the financial services industry, your application is your business. If you're in the telco industry, the level of service you can offer is very much aligned to the application. If you can improve the speed and momentum with which you introduce new editions, and you can do that with very high quality and a high level of integrity, then you're heading toward delivering a much better service to your customer.

There are so many lessons learned here, both in addressing what people have in terms of portfolio and in delivering new, contemporary, revised types of applications and/or infrastructure. Underneath all that is the ability to save money, which, of course, is fairly important, isn't it?

Gardner: We talked a bit about the how and the why of these massive application-consolidation projects, but this specific Global Part Supply Chain project at HP is now a case study, which you have written up and which is available for people to get more detail from. Are there some other resources, sites, or places where they can go, not only to learn more about how you solved this problem, but to start or continue their own journey?

Open about experiences

Evans: We have always said that we're going to be very open about our experiences, because I think people are reluctant to begin new things. They don't want to be the first to take a leap. But, as I said, pretty much every customer, to my mind, is doing some form of application transformation, whether on a small, medium, or large scale.

We have always said that the experiences we gain from our own work we would share openly, and sometimes we’re quite happy to say where we did go wrong. In this instance, we’ve written up a case study to give people an insight in more detail than I have been able to provide today. We're going to post that on our portal. If people want to go there, it’s relatively simple.

It’s http://hp.com/go/applicationtransformation, and they'll find the case study there. They'll also find videos and other materials from other customers who have embarked on these journeys, whether they've been driving that from the top down, from an applications perspective, or whether it's people coming in from the infrastructure side who say, "I have aging, obsolete infrastructure that I need to change, but I know there is a collateral impact on my applications. How do I go about that?"

We're trying to cover all the bases in terms of those people who are coming at it with top-down applications, bottom-up infrastructure, or looking to create a new software environment. If they go to that URL, they can find all the materials, which I hope they'll find useful.

Gardner: Paul, before we close out, perhaps a look to the future. I've heard so much now about cloud computing and software as a service. This is not necessarily just talking about custom, packaged, and on-premises apps. We now need to think about different sourcing options. How does that relate to this process of application transformation and the rationale around where to go for the best economic bang for the buck?

Evans: Cloud is just a part of the application transformation journey. If you think of history over the last 50 years -- and that’s all technology is; it’s only 50 years old in this space -- we’ve done everything inside. We did everything ourselves. We did everything in big machines, crammed everything in, then we’ve gone more distributed. We’ve gone to PCs and all rest of it. We began to spread the web, before we even knew what the word "Web" meant.

Now, we've gotten used to interacting with the Internet and, more importantly, the web. We're beginning to say, "You know, maybe there are some services that we don't need inside the firewall, or that can sit in a private cloud -- so it's inside, but not inside." The point is that we get this ability to have an elastic environment. We haven't got dedicated systems to run a service that maybe we only want to use 20 percent of the time.

The notion of using the web, or technologies that have come out of web development, is just like falling off a log. We'll argue probably for at least the next 12 months about what is the cloud and what's not the cloud, but the use of the World Wide Web is a part of our day-to-day business that's irreversible.

We're never going to go backward now, whether it’s just interaction with consumers who want to get questions answered and order a PC or whatever, or the provision of services that we'll use and our customers will use by utilizing the web. It’s just going to be there.

Also, merging with this whole notion of the cloud is mobility. The mobile, the smartphone, or call it what you want, is going to be the most voluminous device that will attach to the web in the future. People are not just going to want to play games, send SMS, and all the rest of it. They're going to email, they are going to want to do things, and they are going to want to interact in a far richer environment than they do today.

I think these technologies are converging rapidly around a notion that says we've got to update and transform what we've got and, at the same time, start taking the more strategic view of what we are going to incorporate going forward. We may not incorporate them today, but we sure need to leave that socket open, because I may want to plug into it in the future.

Gardner: Well, great. We've been talking about best practices and execution accelerators around large application-consolidation projects. We've been joined by Paul Evans, Worldwide Marketing Lead on Applications Transformation at HP. Thank you, Paul.

Evans: Thanks, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You’ve been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Access more information on Application Consolidation.
Read the full-length case study on HP's Application Consolidation.
Learn more about the Application Transformation Experience Workshop.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how HP tackled an internal multi-year effort to streamline supply chain efficiencies and effectiveness through applications consolidation. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.


Tuesday, May 18, 2010

IT's New Recipe for Success: Modernize Applications and Infrastructure While Taking Advantage of Alternative Sourcing

Transcript of a sponsored BriefingsDirect podcast on making the journey to improved data-center operations via modernization and creative sourcing in tandem.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on improving overall data-center productivity by leveraging all available sourcing options and moving to modernized applications and infrastructure.

IT leaders now face a set of complex choices, as they look for ways to manage their operational budget, knowing that discretionary and capital spending remain tight, even as demand on their systems increases.

One choice that may be the least attractive is to stand still as the recovery gets under way and demands on energy and application support outstrip labor, systems supply, and available electricity.

Economists are now seeing the recession giving way to growth, at least in several important sectors and regions. Chances are that demands on IT systems to meet growing economic activity will occur before IT budgets appreciably open up.

So what to do? Our panel of experts today examines how to gain new capacity from existing data centers through both modernization and savvy exploitation of all sourcing options. By outsourcing smartly, migrating applications strategically, and modernizing effectively, IT leaders can improve productivity, while operating under tightly managed costs.

We'll also look at some data-center transformation examples with some executives from HP to learn how effective applications and infrastructure modernization improves enterprise IT capacity outcomes. And, we'll examine modernization in the context of outsourcing and hybrid sourcing, so that the capacity goals facing IT leaders can be more easily and affordably met, even in the midst of a fast-changing economy.

As we delve into applications and infrastructure modernization best practices, please join me in welcoming our panel: Shawna Rudd, Product Marketing Manager for Data Center Services at HP. Welcome, Shawna.

Shawna Rudd: Thank you.

Gardner: We're also here with Larry Acklin, Product Marketing Manager for Applications Modernization Services at HP. Welcome, Larry.

Larry Acklin: Hello.

Gardner: And, Doug Oathout, Vice President for Converged Infrastructure in HP’s Enterprise Services. Welcome, Doug.

Doug Oathout: Thank you, Dana. I'm glad to be here.

Gardner: Let me start with you, Doug. We're seeing some additional green shoots now across the economy, and IT services are also being taxed by an ongoing data explosion, the proliferation of mobile devices, use of social media, and new interfaces. So, what happens when the supply of budget -- that is to say, the available funding for innovation in new applications -- is lacking, even as the demand starts to pick up? What are some of the options that IT leaders have?

Tackling the budget

Oathout: Dana, when you look at budgets still being tight, even as business is starting to grow again, IT leaders really need to look strategically at how they're going to tackle their budget problem.

There are multiple sourcing options, multiple modernization tasks, as well as application culling, that they could undertake to improve their cost structure. What they need to do is start to think about which major projects they want to take on, and how, so that they can improve their cash flow in the short term while improving their business outcomes in the long term.

At HP, we look at how to source products in ways that are more beneficial -- outsourcing, cloud, and such -- to give us a better economic picture, and also at using modernization techniques for applications and infrastructure to improve long-term cost structures.

At HP we also look at modernization of the software, and we look at outsourcing options and cloud options as ways to improve the financial situation for IT managers.

Gardner: Looking at this historically, have the decisions around outsourcing been made separately from decisions around modernization and infrastructure? Is it now time to bring two disparate decision processes together?

Oathout: Yes. In the past, companies have looked at outsourcing as a final step in IT, versus an alternative step in IT. We're seeing more clients, especially in the tight economy we have gone through, looking at a hybrid model.

How do I smartly source to the outside world the things that are not mission-critical or business-critical to me, and then keep the stuff that is critical to my business within the four walls of the data center? There is a hybrid model evolving between outsourcing and in-sourcing for different types of applications on different types of infrastructure.

Gardner: Let's go to you, Shawna. When we think about the decisions around sourcing, as Doug just pointed out, there seems to be a different set of criteria being brought to that. How do you view the decision-making around sourcing options as being different now than two, three or five years ago?

Rudd: Clients, or companies, have a wider variety of outsourcing mechanisms to choose from. They can choose to fully outsource or to selectively out-task specific functions, which should, in most cases, provide them with substantial savings on their operating expenses. Alternatively, as Doug just pointed out, we can provide many transformation and modernization types of projects that don't require any outsourcing at all. Clients simply have a wider variety of options to choose from.

Gardner: To you, Larry. As folks look at their current infrastructure and try to forecast new demands on applications and what new applications are going to be coming into play, are they faced with an either/or? Is this about rip and replace? How does modernization fit differently into this new set of decisions?

Acklin: It's definitely becoming a major challenge. The problem is that if you look purely at outsourcing in order to free up additional investment for innovation, it will only take you so far. It will take you to a point.

There needs to be a radical change in most businesses, because they have such a build-up of legacy technology, applications and so forth. There needs to be a radical change in how they move forward so they can free up additional investment dollars to be put back into the business.

Realigning the business and IT

More importantly, it's necessary to realign the business and the application portfolio, so that they're working together to address the new challenges that everyone is facing. These are challenges around growth: How do you grow so that, when you come out of a tough economic situation, the business is ready to go?

Investors are expecting that your company is going to accelerate into the future, providing better services to your market. How can you do that when your hands are completely tied, based on your current budget?

You know your IT budgets aren't going to increase rapidly, and that there may be a delay before that can happen. So how do you manage in the interim? That's really where the combination of modernization and various sourcing options adds benefit and becomes an enabler of the agility you want to get to.

Gardner: Larry, what would be some of the risks, if this change or shift in thinking and approach doesn't happen? What are some of the risks of doing nothing?

Acklin: We call that "the cost of doing nothing." That's the real challenge. If you look at your current spend and how you're spending your IT budgets today, most see a steady increase in expenses year over year, but aren't seeing increases in IT budgets. By doing nothing, that problem is just going to get worse and worse, until you're at a point where you're just running to keep the lights on. Or, you may not even be able to keep up.

The number of changes that have been requested by the business continues to grow. You're putting bandages on your applications and infrastructure to keep them alive. Pretty soon, you're going to get to a point, where you just can't stay ahead of that anymore. This is the cost of doing nothing.

If you don't take action early enough, your business is going to have expectations of your IT and infrastructure that you can't meet. You're going to directly impact the company's ability to grow. The longer you wait to get started on this journey of freeing up funds and enabling the integration between your portfolio and your business, the more difficult and challenging it's going to be for your business.

Gardner: Doug and Shawna, it sounds as if combining the decisions around modernizing your infrastructure and applications with your sourcing options is, in a sense, an insurance policy against the unknown. Is that overstating the opportunity here, Shawna?

Rudd: I don't think so. Obviously, to Larry's point, it's not going to get any cheaper to continue to do nothing. Supporting legacy infrastructure and applications is going to require more expensive resources and more effort to maintain.

The same applies for any non-virtualized or unconsolidated environment. It costs more to manage more boxes, more software, more network connections, more floor space, and also for more people to manage all of that.

Greater risk

The risk of managing these more heterogeneous, more complex environments is going to be greater -- a greater risk of outages -- and the expense to integrate everything and try to automate everything is going to be greater.

Working with a service provider can help provide a lot of that insurance associated with the management of these environments and help you mitigate a lot of that risk, as well as reduce your cost.

Gardner: Doug, we can pretty safely say that the managed service providers out there haven’t been sitting around the past two or three years, when the economy was down. Many of them have been building out additional services, offering additional data and application support services. So, IT departments are now not only competing against themselves and their budgets, they are competing against managed service providers. How does that change somebody’s decision processes?

Oathout: It actually gives IT managers more of a choice. If you look at what's critical to your business, what's informational to your business, and what the workflows are that go on in your business, IT managers have many more choices about where they want to source those applications or job functions from.

As you look at service providers or outsourcers, there is a better menu of options out there for customers to choose from. That better menu allows you to compare and contrast yourself from a cost, service availability, and delivery standpoint, versus the providers in the marketplace.

We see a lot of customers really looking at: how do I balance my needs with my cost and how do I balance what I can fit inside my four walls, and then use outsourcing or service providers to handle my peak workloads, some of my non-critical workloads, or even handle my disaster recovery for me?

So IT managers have choices on where to source, but they also have choices on how to handle the capacity that fits within their four walls of the data center.

Gardner: Let’s look at how you get started. What are some of the typical ways that organizations explore sourcing options and modernization opportunities? As I understand it, you have a methodology, a basic three-step approach: outsource, migrate, and modernize.

Let’s take each one of these and start with outsourcing smartly. Shawna, what does that mean, when we talk about these three steps in getting to the destination?

Rudd: From an outsourcing standpoint, it's simply one mechanism that clients can leverage to help facilitate this transformation journey they may be looking at, and to help generate some savings, which can fund other, perhaps more significant, modernization or transformation efforts.

We help clients maintain their legacy environments and increase asset utilization, while undertaking those modernization and transformation efforts. From an outsourcing standpoint, the types of things a client can outsource could vary, as could the scope of the outsourcing agreement -- the delivery mechanism or model, and whether we manage the environment at a client's facility or within a leveraged facility.

Bringing value

All those variables can bring value to a client, based upon their specific business requirements. But then, as the guys will talk about in a second, the migration and the modernization yield additional savings for the client's business.

So, from an outsourcing standpoint, it’s that first thing that will help generate savings for a client and can help fund some of the efforts that will generate incremental savings down the road.

Gardner: The second step involves migration. Who wants to handle that, and what does that really mean?

Oathout: Let me start and then I'll hand it over to Larry. When we talk about migration, we can look at different types of applications that migrate simply to modern infrastructure. Those applications can be consolidated onto fewer platforms into a more workflow-driven automated process.

We can get a 10:1 consolidation ratio on servers. We can get a 5:1 or 6:1 consolidation ratio on storage platforms. Then, with virtual connectivity or virtual I/O, we can actually have a lot less networking gear associated with running those applications on the servers and the storage platforms.
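As a rough sketch of what those ratios imply for a data-center footprint, the arithmetic below applies them to hypothetical starting counts; the 400 servers and 60 storage arrays are illustrative assumptions, not figures from this discussion.

import math

servers_before, storage_arrays_before = 400, 60   # hypothetical starting footprint

servers_after = math.ceil(servers_before / 10)               # 10:1 server consolidation -> 40
storage_after_best = math.ceil(storage_arrays_before / 6)    # 6:1 storage consolidation -> 10
storage_after_worst = math.ceil(storage_arrays_before / 5)   # 5:1 storage consolidation -> 12

print(servers_after, storage_after_best, storage_after_worst)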

So, if we look at just standard applications, we have a way to migrate them very simply over to modern infrastructure, which then gives you a lower cost point to run those applications.

Gardner: Now, not all applications are created or used equally. Is there a difference between what we might refer as core or context applications, and does that come into play when we think about this migration?

Oathout: Oh, it definitely does. There are some core applications, associated with certain platforms, that we can consolidate onto the bigger boxes, and you get more users that way. Then, there are context applications, which are more information-driven and which can easily continue to grow. That's one of the application areas that keeps growing, and you can't always predict how fast it's going to grow, but you can scale it out onto modern platforms.

As you have more work, you have more information, and you can grow those systems over time. You don't have to build the humongous systems to support the application, when it’s just starting out. You can build it over time.

There's a lot we can do with the different types of applications. When you look at modernizing your applications and look at modernizing infrastructure, they have to match. If you have a plan, you don't have to buy extra capacity when you start. You can buy the right capacity then grow it, as you need it.

Specific path

Acklin: Let me add a little bit to that. When we look at these three phases together, we ordered them this way as a specific path to minimize risk. Outsourcing can drive some initial savings, maybe up to 40 percent, depending on the scope of what you're looking at for a client. That's a significant improvement on its own.

Not every client sees that high of a saving, but many do. The next step, that migration step that we’ve talked about, where we’re also migrating over to a consolidated infrastructure, allows you to take immediate actions on some of your applications as well.

In that application space, you can move an application that may be costing you significant dollars on a legacy platform -- whether in license fees or because of a lack of skilled resources and so forth. Migrating it, keeping the application intact and running on that new infrastructure, can save you significant dollars, in addition to the initial work you did as part of the outsourcing.

The nice thing, as you do these things in parallel, is that it's a phased journey you're going through, where they all integrate. But you don't have to. You can separate them. You can do one without the other, but you can work on the whole holistic journey throughout.

The migration of those applications basically leaves them intact, but gives them a longer lifespan than they typically would have. A great example is an application that you eventually want to replace with an ERP system of some sort, or whose business process is going to change in the future in some way, but where you still need to do something about the cost problem today.

It's a great middle step. We can still drive significant savings, 40-50 percent, just through this migration phase of moving that application onto the new infrastructure environment and changing the way the cost structures around software and so forth are allocated to it. That frees up short-term gains that can be reinvested in the entire modernization journey we're talking about.

Gardner: So, if I understand that correctly, when we get to the modernization phase, we've been able to develop the capacity and develop a transformation of the budget from operations into something that can be devoted to additional new innovation capacity.

Acklin: Right. Then as you continue that journey, you're starting to get your cost structures aligned and you're starting to get to a place where your infrastructure is now flexible and agile. You’ve got the capacity to expand. When you move into that modernized phase, you're really trying to change the structure of those applications, so that you can take advantage of the latest technology to run cloud computing and everything operating as a service.

Future technologies allow us to enable the business for growth in the marketplace. Right now, many of our applications handcuff the business. It takes months to get a new product or service out to the market. By changing over to a service-oriented model, you're saving a lot on the cost component, but you're also adding an agility layer to your applications and allowing your business to expand and grow.

Gardner: Before we go to some examples, I'm curious about what happens. What benefits can occur when you play these three aspects of this journey together?

There is sort of a dance, if you will, of three partners. When you apply them to the specific needs, requirements, and growth patterns within specific companies, what types of benefits do we get? Is this about switching to a more pay-as-you-go basis? Is this about reduced labor or improved automation?

Let's start with you, Shawna. What are some of the paybacks that companies typically get when they do this correctly?

Some 30 percent savings

Rudd: They can achieve about 30 percent savings, obviously depending on what they outsource and how much they outsource. Those savings will be achieved through the use of best-shore resources, the right-sizing of their hardware and software environments, and consolidation, virtualization, automation, and standardization of processes and technologies.

Then, they'll achieve incremental cost savings. As Larry said, it can be upward of 40-60 percent from migrating some of that low-hanging fruit -- those applications that are easily lifted and shifted to lower-cost platforms. So, they'll reduce the associated IT and application expenses, as well as the ongoing management expense. Then, as they continue to modernize those environments, they'll achieve additional efficiencies and potentially some additional savings.
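Treating those percentages as sequential reductions on whatever cost remains after the previous step -- a simplifying assumption for illustration only, since in practice the stages may apply to different cost pools -- the arithmetic looks roughly like this:

baseline = 100.0                                        # index the starting annual cost at 100
after_outsourcing = baseline * (1 - 0.30)               # roughly 30 percent from outsourcing, about 70
after_migration_low = after_outsourcing * (1 - 0.40)    # a further 40 percent from migration, about 42
after_migration_high = after_outsourcing * (1 - 0.60)   # a further 60 percent from migration, about 28
print(round(after_outsourcing, 1), round(after_migration_low, 1), round(after_migration_high, 1))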

In that scenario, in which they have combined everything and work with a single-source provider to help facilitate that journey, the transitions, the hand-offs, and all of that should go much more smoothly.

The risk to the client, to the client's business, should be better mitigated, because they're not having to coordinate with four or five different vendors, internal organizations, etc. They have one partner who can help them and can handle everything.

Gardner: Doug, to you. When this is done properly, what are some of the high-level payoffs? What changes in terms of productivity at the most general level?

Oathout: The big thing that changes, Dana, is that when you get to the end of this journey, IT is aligned with the business. So, when a business wants to bring on a new application or a new product line, IT can respond and stand up a new application in hours instead of months.

They can flex the environment to meet a marketing campaign, so you have the ability to do the transactions when a major TV advertisement goes on or when something happens in the industry. You get the flexibility and you get the efficiencies, but what you really get is IT is acting as a service provider to the line of business, and IT is now a partner with the business versus being a cost center to that business.

That's the big transformation that happens through this three-step process. IT is now seen as adding value to the business versus just being the cost center, and the paybacks are unbelievable.

You move from deploying an application in months to two hours. The productivity of your IT department gets two or three times better. You can now plan to run your data centers or your IT at normal workloads. Then, when peaks come in, you can outsource some of the work to service providers or to your outsource partner.

Your actual IT is running at average load, and you don't have to put all the extra equipment in there for the peak. You actually outsource it, when that peak comes. So, at the end of this journey, there is a whole different business model that is much more efficient, much more elastic, and much more cost-effective to run the business of the future.

Gardner: Larry, to you. What are your more salient takeaways in terms of benefits from doing this all correctly?

Don't have to wait

Acklin: I'll just add to what Shawna and Doug have said already. One of the bigger benefits that you achieve is that the business doesn't have to wait. Many times, if you're a CIO, you have to tell your business owners that they've got to wait: "I need to go through this. I'm in the midst of this outsourcing operation. I'm trying to change the way we're providing service to the business." That can take time.

The idea of putting the outsourced, migrated, modernized phases together is that they're not sequential. You don't have to do one, then the other, and then the other. You can actually start these activities in parallel. So, you can start giving benefits back to the business immediately.

For example, while you're doing the outsourcing activities and getting that transition set up, you're starting to put together what your future-state architecture is going to look like. You have to plan how the business processes should be implemented within the applications and assess the strategic value of each application you currently have in your portfolio.

You're starting to build that road map of how you are going to get to the end state. Then, even as you continue through that cycle, you're constantly providing benefits back to both the business and IT at the same time.

You really build that partnership between the two. So, when you reach the end, that is the completely well-oiled machine working together -- both the business and IT -- to reach their objectives.

Gardner: Let’s look at some examples that we mentioned earlier. This can vary dramatically from organization to organization, and coming at this from different angles means that they might prioritize it in different ways. Perhaps we can look at a couple of examples to illustrate how this can happen and what some of the payoffs are. Who wants to step up first for an example on doing these three steps?

Oathout: I'll go first. One example that we worked on very closely, on the services side, was our customer France Telecom. France Telecom transitioned 17 data centers to two green data centers. Their total cost of ownership (TCO) calculation said that they were going to save €22 million (US $29.6 million) over a three-year period.

They embarked on this journey by looking at how they were going to modernize their infrastructure and how they were going to set up their new architecture so that it was more flexible to support new mobile phone devices and customers as they came online. They looked at how to modernize their applications so they could take advantage of the new converged infrastructure, the new architectures, that are available to give them a better cost point, a better operational expense point.

France Telecom is a typical example -- consolidating 17 data centers to two. It's not abnormal, when a company goes through this three-step process, to make a significant change to the IT footprint and to how they do business, so they can support the lines of business that need new applications and new users to come online relatively quickly.

Gardner: Doug, how would you characterize the France Telecom approach? Which of the three did they emphasize?

Emphasis on migration

Oathout: They really emphasized the migration as the biggest one. They migrated a number of applications to newer architectures and they also modernized their application base. So, they focused on the last two, the modernization and the migration, as the key components for them in getting their cost reductions.

Gardner: Okay, any other examples?

Acklin: I'll talk about another one. The Ministry of Education in Italy (MIUR) is another good example, where a client has gone on this whole journey. In that situation, they had outsourced some of their capabilities to us -- some of their IT management. But, they were challenged with some difficult times. The economy hit them hard, and being a government agency, they were under a lot of pressure to consolidate IT departments globally.

It’s a very, very large organization built up over the years. Most of the applications were built back in the early 1980s or earlier than that. They were mainframe-based, COBOL, CICS, DB2 type applications, and they really weren’t servicing the business very well. They were really making it a challenge.

In addition to all of the legacy technologies, the CIO also had the challenge of consolidating IT departments. They had distributed IT departments. So, they had to consolidate their IT departments as part of this activity.

On top of all that, they were given the challenge of reducing their headcount significantly due to the economic crisis. So, it became a very urgent journey for this client to go on, and they began going through it. Their goal was, as I said, reducing IT, improving agility, being able to respond to change, and doing a lot more with a lot fewer people, in a consolidated manner.

As they went through their transformation, they went through the whole thing. They assessed what they had. They put their strategy together and where they wanted to go. They figured out what applications they needed and how they were going to operate.

They optimized the road map for them to reach their future state, established a governance program to keep everything in alignment while they went on this journey, and then they executed this journey.

They used a variety of methods for modernizing their applications and migrating over to the lower cost platforms. Some of them they re-architected into new service-based models to provide services to their students and teachers through the web.

At the end they ended up seeing a 2X productivity improvement and return on investment (ROI) in less than 18 months. They reduced their app support by over 30 percent and they reduced their new development cost by close to 40 percent.

Those were significant challenges that the CIO took on, and the combination of improving their applications and infrastructure through an outsourcing and modernization model helped them achieve their goal. The CIO will tell you that they could never have survived all the pressure they were under without going on a journey like this.

Gardner: Shawna, do we have a third example?

No particular order

Rudd: This is an example, without naming a specific client, that also makes another point: the things we're talking about don't have to occur in this particular one-two-three order.

I know of other clients for whom we've saved around 20 percent by outsourcing their mainframe environments. Then, after successfully completing the transition of those management responsibilities, we've been able to further reduce their cost by another 20 percent simply by identifying opportunities for code optimization. This was duplicate code that could be eliminated, dead code, or runtime inefficiencies, and addressing them reduced the number of apps they required to manage their business, along with the associated software costs, support costs, and so on.
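The tooling behind that code optimization isn't named here, but the general idea of flagging duplicate code as a candidate for elimination can be sketched crudely. The hypothetical example below simply hashes source files under an assumed "legacy_src" directory and reports byte-identical copies; real analysis tools are far more sophisticated than this.

import collections
import hashlib
import pathlib

def find_duplicate_sources(root, pattern="*.cbl"):
    # Group source files by content hash; any group with more than one file
    # is a candidate for consolidation or elimination.
    groups = collections.defaultdict(list)
    for path in pathlib.Path(root).rglob(pattern):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        groups[digest].append(str(path))
    return [paths for paths in groups.values() if len(paths) > 1]

# "legacy_src" is a hypothetical directory of COBOL sources.
for duplicates in find_duplicate_sources("legacy_src"):
    print("Possible duplicates:", duplicates)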

Then there were other clients for whom it made more sense for us to consider outsourcing after the completion of their modernization or migration activities. Maybe they already had modernization and migration efforts underway or they had some on the road map that were going to be completed fairly quickly. It made more sense to outsource as a final step of cost reduction, as opposed to an upfront step that would help generate some funding for those modernization efforts.

Gardner: For those folks who see the need in their organizations and understand the rationale behind these various steps, where do they get started, and how can they find more information? Let me start with you, Doug. Are the information resources easily available?

Oathout: Well, Dana, there are a ton of different places to start. There's your HP reseller, the HP website, and HP Services. If a customer is thinking about embarking on this journey, I'd contact HP Services and have them come out and do a consulting engagement or an assessment to lay out the steps required.

If you're embarking on the modernization journey, contact your HP reseller or HP sales representative and have them come show you how to do consolidation and virtualization to really modernize your infrastructure. If you're having the conversation about applications, contact HP Services. They can look at your application portfolio and show you the experience they have in modernizing those applications or migrating them to modern equipment.

Gardner: Any additional paths to how to start from your perspective, Larry?

Acklin: Let me add to that. If you're in a situation where you're thinking about modernization, but you're not positive and you're still trying to get a good understanding of what's involved, come to one of our workshops. We offer something called the Modernization Transformation Experience Workshop. It's basically a one-day activity workshop, a slide-free environment, where we take you through the whole journey you'll go on.

We'll cover everything from how to figure out what you have and what you're planning, to how to build the road map for getting to the future state, as well as all the different ways the journey will impact your business and enterprise along the way, whether you're talking about technology infrastructure, architecture, applications, business processes, or even the change management of how it impacts your people.

We go through that entire journey in this workshop, so you come out understanding what you're getting yourself into and how it can really affect you as you go forward. But that's not the only starting point. You can also jump into this modernization journey at any point in the process.

Maybe, for example, you've already figured out that you need to do this, or maybe you've tried some things on your own in the past but really need external help. We have assessment activities that allow us to jump in at any point along this journey.

Whether it's helping you see where there are code vulnerabilities within your existing applications, visually showing you what those look like and where the opportunities for modernization are, or doing a full assessment of your environment to figure out how your apps and infrastructure are working for your business, or in most cases not working for it, these assessments allow you to jump in at any stage of the whole journey.

As Doug mentioned, HP can help you figure out the right place for beginning that journey. We have hundreds of modernization experts globally who can help you figure out where to start.

Gardner: Do we have any other closing thoughts on the process of getting started?

Acklin: Let me just mention one other item. We talked about the cost of doing nothing. Don't let any fears or doubts about this journey stop you from beginning it. There are many ways the cost of doing nothing can get you in trouble, and a time is coming when you won't be able to make those changes. So, don't let those fears stop you.

An example of this is financial. Many of the clients we talk to don't know how they would pay for a journey like this. Actually, you have a lot of options right in front of you that you can take advantage of. Our modernization consultants can show you good ways to cover the cost, how to structure things like these three-phase activities, and how to go on this journey in a way that still works for you even in tough financial times.

Gardner: Great. We've been talking about improving overall data-center productivity by leveraging available sourcing options as well as moving to modernized applications and infrastructure. I want to thank our guests for today's panel. We've been here with Shawna Rudd, Product Marketing Manager for Data Center Services at HP. Thank you, Shawna.

Rudd: Thank you.

Gardner: And Larry Acklin, Product Marketing Manager for Application Modernization Services at HP. Thank you, Larry.

Acklin: Thank you.

Gardner: And Doug Oathout, Vice President of Converged Infrastructure at HP Enterprise Services. Thanks, Doug.

Oathout: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast on making the journey to improved data-center operations via modernization and creative sourcing in tandem. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.
