Transcript of BriefingsDirect podcast with Hewlett-Packard's John Bennett on virtualization and its role in the enterprise.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, a sponsored podcast interview about rethinking virtualization. [See news from HP on virtualization, panel discussion, IDC white paper.]
Virtualization in information technology (IT) has become a very hot topic in the last several years, and we're approaching a tipping point in the market, where virtualization's adoption and acceptance are really rampant, and it's offering some significant benefits in terms of cost and performance.
So, we're going to talk about rethinking multiple tiers of virtualization for applications, infrastructure, desktop, and even some other types of uses.
We're also going to look at this through the lens of a contextual approach to virtualization, not simply a tactical standalone benefit, but in the context of larger IT transformation activities. These include application modernization, data center consolidation, next generation datacenter (NGDC) architectures, energy considerations, and of course, trying to reduce the total cost of IT as a percent of revenues for large organizations.
To help us sort through these issues of context and strategy for virtualization, as well as to look at a number of initiatives that Hewlett-Packard (HP) is now embarking upon, we're joined by John Bennett. John is the worldwide director of data center transformation solutions, and also HP's Technology Solutions Group (TSG) lead for virtualization. Welcome to the show, John.
John Bennett: Thank you very much, Dana. It's a pleasure to be with you today.
Gardner: When we talk about virtualization as a red-hot trend, just how big a deal is virtualization in the IT market right now?
Bennett: Virtualization is certainly one of the major technology-oriented trends that we see in the industry right now, although I'm always reminded that virtualization isn't new. It's been available as a technology going back 30-40 years.
We see a great deal of excitement, especially around server virtualization, but it's being extended to many other areas as well. We see large numbers of customers, certainly well over half, who have actively deployed virtualization projects. We seem to be at a tipping point in terms of everyone doing it, wanting to do it, or wanting to do even more.
Gardner: Are they doing this on a piecemeal basis, on a tactical basis? Is it being done properly, within a methodological framework, across the board? What sort of a market trend are we looking at in terms of adoption patterns?
Bennett: In terms of adoption patterns, especially for x86 server virtualization, we see virtualization being driven more as tactical or specific types of IT projects. It's not uncommon to see customers starting out, either to just reduce costs, to improve the efficiency in utilization of the assets they have, or using virtualization to address the issues they might have with energy cost, energy capacity or sometimes even space capacity in the data center. But, it's very much focused around IT projects and IT benefits.
The interesting thing is that as customers get engaged in these projects, their eyes start to open up in terms of what else they can do with virtualization. Customers who've already done some virtualization work realize there are interesting manageability and flexibility options for IT. "I can provision servers or server assets more quickly. I can be a little more responsive to the needs of the business. I can do things a little more quickly than I could before." And, those clearly have benefits to IT, with real value to the business.
Then, they start to see that there are interesting benefits around availability, being able to reduce or eliminate planned downtime, and also to respond much more quickly and expeditiously to unplanned downtime. That then lends itself to the conversation around disaster recovery, and into business continuity, if not continuous computing and disaster tolerance.
It's a very interesting evolution of things with increasing value to the business, but it's very much stepwise, and today tends to be focused around IT benefits. We think that's kind of missing the opportunity.
Gardner: So we've been on this evolution. As you say, virtual machines and hypervisors, this approach of isolating activity at a level above the actual metal and the binaries, has been around for some time. Why is it catching on now? Is it not just economics? And, if we're talking about business outcomes, why are they important, and why is virtualization being applied to them now?
Bennett: It really did start with economics, but the real business value to virtualization comes in many other areas that are much more critically important to the business.
One of the first is having an IT organization that is able to respond to dynamically changing needs in real-time, increasing demands for particular applications or business services, being able to throw additional capacity very quickly where it's needed, whether that's driven by seasonal factors or whether it's driven by just systemic growth in the business.
We see people looking at virtualization to improve the organization's ability to roll out new applications and business services much more quickly. We also see that they're gaining some real value in terms of agility and flexibility, in having an IT organization that can be highly responsive to whatever is going on in the business, short term and long term.
We also see, as I highlighted earlier, that it really does connect into business continuity, which we see in many of the market research surveys we do year after year. It continues to be a top-of-mind concern for CEOs and CIOs alike.
Gardner: Perhaps we're at this point in time where IT has become so essential to so many aspects of how businesses operate -- the ability to make IT dynamic and responsive, to have redundancy, failover, and many of the mission-critical aspects that we expect of certain transactional systems -- that we're now able to extend almost anything we do with IT to a virtualized environment.
Bennett: Well, it's actually being supported through any environment, and it's why we at HP have such a strong focus on business technology. There are very few modern enterprises, whether they're private or public entities, that really could address their mission and business goals without IT.
It is just a completely fundamental fabric of the business today. And, having that environment be responsive, protected, reliable, and delivering quality of service are key attributes of that environment.
This is why we see next-generation data centers and adaptive infrastructure from HP as being key to that, and it's why we speak about the idea of data center transformation. If that's the IT environment you want, including virtualization, how do you get there from wherever you are?
Gardner: I suppose it's also important to point out that we're not just talking about virtualization, but also about mixtures, where there is going to be plenty of physical infrastructure and technology in place, but increasingly virtualized instances here and there. I suppose it's managing these together that is the most important discussion at this point in time.
Bennett: It certainly is one of the more topical points right now. What we see is that customers start to deploy virtualization more broadly, and, as they want to run more and more of their applications in virtualized environments, two challenges arise.
One of them is diversity. Customers are accustomed to diversity in the infrastructure, but now they have diversity in the virtualization base as well: the virtual machines they're using and the number of suppliers involved. Diversity brings complexity, and complexity usually brings increased risk.
One aspect of that diversity is control and management. You have different virtual machines, each with their own management tools and paradigms. How do you manage, from an applications, quality-of-service, or service-level agreement point of view, across the physical and virtual resources and infrastructure that are being used to deliver those services to the business? And, how do you deal with managing physical infrastructure from different manufacturers, and virtual resources from different manufacturers?
Another complication that comes up from a control and management point of view, as customers strive to use virtualization more pervasively in the data center, is that you have to deal with the skill sets of the people you have in the IT organization, as well as the resources available to you to help implement these projects.
One of the virtues of looking at virtualization more comprehensively is that you're actually able to free up resources to focus more on business services and business priorities and less on management and maintenance.
If you look at virtualization more strategically, you say it's not just the servers, but it's my storage and network environment around them. It's my management tools and processes. It's how I do everything together.
When I look at it comprehensively, I not only have a very clean set of controls and procedures in place for running and managing the data center, but now I have the opportunity to start making significant shifts in resources, away from management and maintenance and into business priorities and growth.
Gardner: So we are faced with a potential tipping point, but I suppose that also brings about a new level of risk, because you're moving from a tactical implementation into a variety of implementations, at the application, the infrastructure, and the server levels. Increasingly we're seeing interest in desktop virtualization, but we're also seeing a mixture of suppliers and technologies, and we're also seeing this in the context of other initiatives, with the goal being transformation.
It seems that if you don't do this all properly with some sort of a framework, or at least a conscious approach of managing this from beginning to end in a lifecycle mentality, there could be some serious pitfalls. You could actually stumble and subvert those benefits that you're looking to enjoy.
Bennett: Yes, we see both pitfalls, i.e., problems that arise from not taking a comprehensive approach, and we see missed opportunities, which is probably the bigger loss for an organization. They could see what the potential of virtualization was, but they weren't able to realize it, because their implementation path didn't take into account everything they had to in order to be successful.
This is where we introduce the idea of rethinking virtualization, and we describe it as rethinking virtualization in business terms. It means looking at maximizing your business impact first by taking a business view of virtualization. Then, it maximizes the IT impact by taking a comprehensive view of virtualization in the data center. Then, it maximizes the value to the organization by leveraging virtualization for client implementations, where it makes sense.
But, it's always driven from a business perspective -- what is the benefit to the business, both quantitative and qualitative -- and then drilling down. It's like unpeeling an onion, if I can borrow the analogy from the "Shrek" movie. You go from, "Okay, I have this business service. This business service is delivered through virtual and physical resources, which means I need management, control, and governance of both physical and virtual resources."
And then, underneath that I want to be able to go from insight and control, into management and execution. I want to be able to drill down from the business processes and the business service management and automation tools into the infrastructure management, which in turn drills down into the infrastructure itself.
Is the infrastructure designed to be run and operated in a virtualized environment? Is it designed to be managed from an energy control point of view for example? Is it designed to be able to move virtual resources from one physical server to another, without requiring an army of people?
So, part of the onus is on HP in this case to make sure that we're integrating and implementing support for virtualization into all of the components in the data center, so that it works and we can take advantage of it. But, it's up to the customer also to take this business and data center view of virtualization and look at it from an integration point of view.
If you do virtualization as point projects, what we've seen is that you end up with management tools and processes that are outside the domain of the historical investments you've made, whether that's in IT service management and the Information Technology Infrastructure Library (ITIL), or in business service management.
We see virtual environments that are disconnected from the insight, controls, governance, and policy procedures put in place for IT. This means that if something happens at a business-services level, you don't quite know how to go about fixing it, because you can't locate it. That's why you really want to take this integrated view from a business-services point of view, from an infrastructure and infrastructure-management point of view, and also in terms of your client architectures.
Gardner: Now, as we are rethinking infrastructure, and in the context of virtualization, we also are looking for these business outcomes. Are we at the point yet where the business leaders are saying, "We need virtualization?" Have they connected the dots yet, or do they just know what they want from business outcomes and really don't care whether this virtualization gets there or not?
Bennett: From a business leader's point of view, they don't care about virtualization. Whether it's a CEO, a line-of-business manager, or even a CFO, their focus is on: "What are the business priorities? What is our strategy for this business or this organization? Are we going to be trying to grow the business organically, or grow it through acquisitions? Are we going to be driving a lot of product or service innovation? I need an organization that is going to be responsive to rolling those out."
And, of course, there is always the pressures to reduce cost and reduce risk that apply throughout the business, including to IT. They will not tell you that virtualization is what you have to do if you're in IT.
IT wants to deliver these kinds of benefits, to be able to do things quickly, to be able to dynamically put resources where they're needed, and to mitigate the risks in the data center environment, whether the risk is related to power and cooling, to the capacity of an individual server and its ability to support a particular application, or to the people and processes that can cause downtime.
Those are IT projects, and for IT, virtualization is a fascinating technology, which allows them to address multiple sets of data center issues and provide the benefits that the business is looking for. It's revolutionary in that sense. It's pretty cool.
Gardner: This is not another "silver bullet," is it? We're really talking about something that's fundamental and that is transformative.
Bennett: Oh, absolutely. We believe that virtualization is a very important attribute of an NGDC. It's been an instrumental part of our adaptive infrastructure, which has defined our view of an NGDC for quite a while, and we see virtualization projects as core to successful transformational initiatives as well.
Gardner: Nowadays, and actually for several years, incremental improvements in IT don't get the funding or the attention. We really need some dramatic improvements in order to get the investment and to move past the inertia. Even on the tactical level, what sorts of benefits are some of the organizations you're familiar with enjoying, and what returns are they getting, from their virtualization activities?
Bennett: It really depends on whether you're looking at it from a business point of view or from an IT point of view.
Gardner: Let's look at it both ways.
Bennett: From an IT point of view, it's clear that they can decrease capital costs, and they can decrease operating expenditure (OPEX) costs associated with depreciation of assets by getting much better utilization of the assets they have. They can either get rid of excess equipment or, as they do modernization projects, they can acquire less infrastructure to run the environment, when they have it effectively virtualized.
When they blend it with integrated management, they can manage the physical and virtual resources together and build an IT environment that really supports the dynamics of virtualization.
They can lower the cost of IT operations, implicitly by reducing the complexity of the environment and explicitly by having very standardized and simple procedures covering virtual and physical resources, which, in conjunction with the other cost savings, frees up people to work on other projects and activities. Those all also contribute to reduced costs for the business, although they are secondary effects in many cases.
We see customers being able to improve the quality of service. They're able to virtually eliminate planned downtime, especially where it's associated with the base hardware or with the operating environments themselves. They're also able to reduce unplanned downtime, because if you have an incident, you are not stuck trying to get a particular server back up and running. You can restart the image on another device, on another virtual machine, restore those services, and then deal with the diagnosis and repair at your convenience. It's a much saner environment for IT.
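To make that restart-elsewhere pattern concrete, here is a minimal sketch using the open-source libvirt Python bindings. Libvirt is our illustrative choice, not a tool named in the interview; the host URI and domain XML path are hypothetical, and the sketch assumes shared storage, so the guest's disk image is reachable from the standby host.

```python
# Minimal sketch: bring a failed guest back up on a standby host from its
# saved definition, so diagnosis of the failed host can happen later.
# All names below are hypothetical; assumes shared storage for the disk image.
import libvirt

DOMAIN_XML = "/shared/configs/app-server.xml"  # hypothetical saved definition

with open(DOMAIN_XML) as f:
    xml = f.read()

standby = libvirt.open("qemu+ssh://standby-host/system")  # hypothetical URI
try:
    dom = standby.defineXML(xml)  # register the guest on the standby host
    dom.create()                  # boot it; service is restored while the
                                  # failed hardware is repaired at leisure
finally:
    standby.close()
```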
Gardner: Do you have any examples of companies that have moved through this sufficiently that they can look back and determine a return on investment (ROI) or total cost of ownership (TCO) benefit, and what sort of metrics are we seeing in those cases?
Bennett: Certainly, we do look at ROI types of benefits. I must confess I don't have any that are explicitly quantified, but we do have customers able to articulate some of the tangible and intangible benefits. One good example is Mitel Corporation, which went through a project of infrastructure modernization, and especially virtualization, in order to address its business needs. They had to both reduce costs and be more responsive to the business.
They were able to drive about $300,000 annually out of their IT budget. That's a significant amount, because they are organized by individual business units there. I love the quote from the data center manager with regard to the relationship with the business: "We can now just say, yes."
So it addresses that flexibility and agility type of question. If you get engaged in larger transformational projects, where virtualization is the key element, we have Alcatel-Lucent, which is expecting to reduce their IT operational cost by 25 percent from virtualization and other transformational projects.
In the case of HP IT, we actually have reduced our operational costs by 50 percent, and virtualization is very much a key factor in being able to do that. It can't take full credit, of course, because it was part of a larger set of transformational projects. But, it was absolutely critical to lowering costs, improving quality of service, improving business continuity, and especially helping the organization be much more flexible and agile in meeting changing needs.
Gardner: And, as you pointed out earlier, we are able to shift the ratio from ongoing maintenance and support costs into the ability for innovation, new systems, new approaches, investments, and productivity.
Bennett: Absolutely. We see a large number of customers spending less than 30 percent of their IT budget on business priorities and growth initiatives, and 70 percent or more on management and maintenance. With virtualization and with these broader transformational initiatives, you can really flip the ratio around. HP has gone to, I think, 80-20, and I know that that's an area that Alcatel-Lucent has also focused on changing substantively.
Gardner: When you say 80-20, you mean 80 percent for new initiatives?
Bennett: Yes, 80 percent for new initiatives in business priorities, and 20 percent on management and maintenance.
Gardner: That is significant.
Bennett: Yeah.
Gardner: Well, obviously HP has been rethinking virtualization. Of course, it has been rethinking infrastructure as well for some time, given its NGDC activities and some of the things that it has done internally in terms of reducing the number of data centers, and reducing the number of applications. More than that, HP has new go-to-market initiatives in its Sept. 2 announcements.
Can you run through some important aspects of these announcements, and tell us which ones will help people understand the rethinking of virtualization, the strategic approach to virtualization, and also the business outcomes that they should be enjoying from virtualization?
Bennett: Certainly. The first thing I would like to highlight is that all of the products and services we are announcing reflect the fact that we are not just encouraging customers to rethink virtualization; we at HP have as well. In particular, we realize it's critically important that the products -- the hardware, software, and services -- we provide embrace the virtual and the physical worlds together.
If customers are going to be able to implement this successfully, they need the expertise, and they need the products that will actually let them do it. That's a lot of what this set of September announcements is about.
It starts with a new HP ProLiant BL495c virtualization blade, which is really designed and optimized for the virtualization environment. What we have seen limiting the ability of servers to increase the number of virtual machines they can support, or to support the growth of virtual machines, is not so much CPU power as memory, network bandwidth, and connectivity.
So, we have doubled the memory capacity of the environment, and we have increased the number of network connections that are possible on a single blade, and that will provide much more headroom for these kinds of customer environments at the infrastructure level.
At the business service management level, we are introducing a number of enhancements to the HP software portfolio for business service management and automation. The tools we provide in what is today the industry's leading portfolio for business service management and automation work both with physical resources, for insight, control, management, and governance purposes, and with virtual resources, supporting applications and services delivered through virtual machines provided by VMware, Citrix, or Microsoft.
This is the first wave of announcements from HP Software, basically building in integrated and comprehensive support for the virtual environment, as well as the physical environment. That's complemented by new services capabilities. We recognize that not everyone wants custom service projects, or custom expertise helping them with virtualization.
We have some new services that are much more tactically and specifically oriented. They very clearly articulate what the outcomes of the project are, what the time frames are, and also what the costs of the project are.
We're also augmenting our capabilities. I think we are the leading platform provider for all of the key virtualization vendors out there. We are also the leading training vendor for virtualization, and we are announcing new offerings in both of those portfolios -- for support and for education services around these virtual environments and virtual capabilities.
Integrated support is really key. When customers experience difficulties in their data centers with a business service or application running in the environment, they don't want finger pointing across vendors. Since we are able to support the virtual machine software, as well as our operating environments, including Microsoft, and of course, the HP servers underneath them, we can provide an integrated approach to dealing with corrective issues and get them fixed on the customer's behalf.
On the desktop side, we have had a portfolio of virtual desktop infrastructure (VDI) services in place already for VMware. We are announcing a new set of capabilities there for Citrix XenDesktop, both for products and for services for client virtualization. Just as important, the work we are doing in those offerings also lays the foundation for supporting Microsoft's Hyper-V when that becomes available in the marketplace as well.
In addition to those capabilities, we have a new storage offering. If you look at the architecture of the data center, you clearly need to move away from direct-attached storage, and move to network shared storage.
We have a new product that integrates our Enterprise Virtual Array, a leading self-optimizing storage solution for virtualized environments, with PolyServe NAS, which augments the virtualized environment with a clustered file system. That makes it easy for customers to move to a network-attached or shared-storage model, as they make virtualization a more foundational technology in the data center.
Then, in addition to the investments in the data center environment itself, we are announcing a new family of thin clients and some new blade workstations, which underscores the point that, when it comes to client virtualization, it's really key to have a portfolio of desktop choices, so that customers can get the right solution on the right desktop.
In many cases, it might be thin clients, but in other cases, it might be blade PCs or workstations, depending on the end user's needs. We support all of those, and we support them in different kinds of environments.
We also recognize that, even if thin clients meet most of the functional needs, people sometimes still want strong multimedia or 3D performance on some of these desktops. We're announcing a new remote-graphics software offering, which will allow customers to provide a rich multimedia or 3D experience, even to a client environment not equipped with that hardware.
This is the first wave, if you will, of announcements that we are making. It builds on some announcements we made last March, especially with Insight Dynamics VSE for infrastructure management, as well as the blades announcements with Integrated Lights-Out (iLO) capabilities that we made a year or so ago. So, we're continuing to build out this portfolio to make this real for customers, and to provide the foundation for them to really exploit virtualization for their business benefits.
Gardner: You mentioned VDI, and for those folks who might not be too familiar with desktop virtualization, what we are talking about is bringing back onto the servers the whole presentation of the entire desktop, not just an application or two, but the entire experience.
Therefore, every time a user starts up a client device, they are actually getting a fresh new instance of the operating system, which means it can be updated, patched, and serviced entirely without impacting the client device. There are a number of other benefits to desktop virtualization. Is there anything I've missed, for those people who are just getting their feet wet with desktop virtualization?
Bennett: Well, the real driver for desktop virtualization initiatives that we see are organizational concerns around management and security of the client environment. You articulated that what they get is a nice, clean desktop environment whenever they start up the PC. What you didn't say was that it's not uncommon for end users to visit sites they shouldn't have, or open mail they shouldn't have, and get their environment infected by spyware, malware, viruses or anything else.
Client virtualization solutions can really give you a strong handle on management and security, reducing the cost of both, while increasing the control of both. Also, in environments where customers have a lot of knowledge workers, one goal for corporate risk protection is the protection of the end-user data that's on the desktop. In a client virtualization environment, you are able to exercise much better control over protecting end-user data. And, by the way, if end users move around, either to different offices or different locations, they still have access to their data, no matter where they connect from.
Gardner: So, when we look at virtualization in this larger context, we're seeing that the applications can be virtualized. It's bringing everything back into a server and data center infrastructure, but that has a lot of benefits in terms of control, manageability, running things at a productive level of utilization, and continuity.
It's almost going back to the future. Are we, in a sense, enjoying the best of the era 30 or 40 years ago around mainframes and control, with the best of the latest iterations of IT around flexibility, applications and services, Internet, and browser activities? Am I overstating it by saying we get the best of the older and the newer aspects of IT, now that we are doing this contextually?
Bennett: I'm glad that you stated this as combining the best of both worlds. The world we had 30-40 years ago with mainframes, and indeed with minicomputers, was one of centralized control, of environments that were not necessarily responsive to the changing needs of the business, or to individual department or business-unit needs.
Responsiveness clearly is seen as a great attribute of the modern data center and modern IT environment, but that older environment really could control, manage, and secure all the areas that it managed. So, yes, we're combining the two, having the agility and flexibility that people want, having the control and discipline that people want, and also providing access to the innovation that's taking place in the outside world.
We bring them the best of all these worlds, but if a customer is going to realize this, they are not going to get those benefits just by doing server virtualization projects.
Gardner: So, we've seen how virtualization does have an economic benefit. It can bring control, security, and manageability back into a managed, professional approach for the IT people. At the same time, it's providing some of these business outcomes -- agility, flexibility, and responsiveness -- that are so important now in a global economy and a fast-moving marketplace. Of course, as you mentioned, this is a wave of announcements on Sept. 2, but there is much more to come in the not-too-distant future.
Bennett: Oh, yes. You will see us continuing to do enhancements and innovation in the infrastructure, server storage, networking, and the input-output fabrics that link them all together. You will see us continuing to innovate and drive more capability and value to the people, whether it's in the support or education side, or in the project and strategy side. You will see us continuing to invest in enhancements in the software portfolio to really provide a comprehensive view of everything going on in the data center.
We continue to be a leading innovator on the client side, both in the devices that sit on the desktop themselves -- whether they are standalone or client virtualization -- and in the software and tools that make client virtualization work. This is really just the first wave of what is a pretty serious investment area for HP.
Gardner: And it clearly has the opportunity to accommodate a lot of the needs of IT, while still giving them the ability to do that all-important and almost impossible task, which is to do more for less.
Bennett: We all have that task, and the challenge is how you crack that nut. When we talked about data center transformation last spring, we introduced the concept of "Spend to Save, to Spend to Grow." The key is finding a way to bootstrap yourself into this, and that's why we look at these things not from a forklift perspective, because frankly nobody is going to do that, but rather in terms of the kinds of projects you can undertake, and then link together over time for transformational purposes, realizing benefits from each of them. So, it becomes self-funding after a while.
An example that I like to use for that is that consolidation is a best practice in data centers today. It's a way of life, but if you really want to significantly change the outcome of some consolidation, which can be substantial, it's worthwhile investing in a virtualization initiative, because when you do that you can consolidate to even less infrastructure.
But before you invest in virtualization, or after you have done it, you might look at investing in an application modernization project, because the more applications that can be virtualized, the more you can consolidate.
So you get savings from the individual projects, but you're kind of multiplying the results together over time, and that's when it gets really interesting for a customer.
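As a back-of-the-envelope illustration of that multiplier effect, consider the hypothetical figures below; none of the ratios come from the interview, they simply show how the projects compound rather than merely add.

```python
# Hypothetical illustration: consolidation, virtualization, and application
# modernization each shrink the footprint, and the effects multiply.
servers = 1_000                                    # starting count (assumed)
after_consolidation = servers * 0.8                # consolidation retires 20%
after_virtualization = after_consolidation * 0.25  # ~4:1 VM packing (assumed)
after_modernization = after_virtualization * 0.9   # more apps virtualizable

print(f"{servers} -> {after_modernization:.0f} physical servers")  # 1000 -> 180
```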
Gardner: So, we have these overlaps, these interdependencies that make it complex, and it needs to be thought through in a total contextual framework, but, as you say, the end result is true transformation.
Bennett: Right, and the easiest way to think it through is from a business perspective. If you look at it from the bottom up, there are so many interconnections and possible paths forward that it's easy to get lost in the weeds. If you start at the top and work down, you say: "What are the business services I am providing? What are the applications I am running for the business? What are the characteristics that we need to have in place for these, from a business perspective?"
Now, what does that mean in terms of what I do in IT and what I do in the data center? Does it make sense to virtualize it or not? If not, carve it aside and manage it on its own. If it does, what am I going to do to effectively implement virtualization and manage it from a business perspective? There's a much, much smaller pool of applications and business services being provided than there are servers and storage devices.
Gardner: Well, great. I think we will have to leave it there. We have been talking about rethinking virtualization and putting it in the context of business outcomes, as well as IT transformation. We have been discussing this also in the context of a number of new initiatives and announcements that HP has made, and we have been joined by John Bennett. He is the worldwide director of data center transformation solutions, and the HP TSG lead for virtualization. Thank you so much, John. It was very interesting and edifying.
Bennett: Thank you very much, Dana, for this opportunity. We think there is so much promise in virtualization, and we think that by rethinking it in business terms, you can maximize its potential for your own organization.
Gardner: Great. This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a BriefingsDirect podcast. Thanks and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.
Transcript of BriefingsDirect podcast with Hewlett-Packard's John Bennett on virtualization and its role in the enterprise. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Tuesday, September 02, 2008
HP Experts Portray IT Transformation Vision, Explain New Wave of Virtualization Products and Services
Transcript of BriefingsDirect podcast with Hewlett-Packard on series of Sept. 2 announcements on enterprise virtualization products and services.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, a sponsored podcast discussion about the growing and important topic of virtualization -- at multiple levels in IT organizations, that is to say, for applications, infrastructure/servers, as well as for clients and desktops.
We're going to talk about services and products in the marketplace, along with the demand and the economic and business payoffs that virtualization is already bringing to many companies. We expect virtualization technologies and techniques to bring even more productivity gains in the near future.
We're going to be discussing Hewlett-Packard’s (HP's) approach to virtualization and a series of announcements that came out on Sept. 2. [See slide show on announcements. See an accompanying interview with John Bennett, virtualization lead at HP Services. See an IDC white paper on business benefits of virtualization.]
We're here with Greg Banfield, consulting manager for the HP Consulting and Integration (C&I) Group infrastructure practice. Welcome to the show, Greg.
Greg Banfield: Thank you very much.
Gardner: Dionne Morgan also joins us. She is the worldwide marketing manager for HP’s Technology Services Group (TSG). Welcome, Dionne.
Dionne Morgan: Thank you.
Gardner: And we have Tom Norton, worldwide practice lead for Microsoft Services at HP. Hello, Tom.
Tom Norton: Hello.
Gardner: Virtualization, of course, has been with us for quite some time. The technologies of virtual machines and hypervisors have been around for a while, but this is really starting to gain ground for a variety of reasons. In many organizations, there are economic reasons, technology reasons, and business outcomes reasons.
People are finding that getting higher utilization is only part of the story. We're also finding that virtualization is taking place in the context of larger IT undertakings, be it data center consolidation, application modernization, service-oriented architecture (SOA), business continuity, or energy savings, just to name a few.
I want to start out by talking with Greg about "why now?" Why are the market and HP focused on virtualization as such a significant development in the market at this point in time?
Banfield: It comes down to a few things. It comes down to our customers asking what HP has done within our own data centers, and how we have done it, because we have gone through the transformation ourselves as a company and have gained a lot of experience around that. It also comes down to the economics around cost: the cost of labor and the cost of machines. The price of machines is going down; the cost of power is going up.
They're looking to get a better handle on those servers and the assets they have, trying to fully utilize them, so that the applications they serve to their company and their users are well supported, and to take advantage of the new servers and technologies that are coming out today.
Gardner: Now, HP is in a unique position, in that it has hardware, services, clients, software infrastructure, software management, and partnerships across multiple providers of virtualization technology. This seems almost a ready-made fit with the way IT is developing in the marketplace. Tell us how HP views this opportunity as a company.
Norton: What's interesting about virtualization is that, as companies have started to work with it, the easy assumption is that you are really just reducing the number of servers. But, as you expand your knowledge and your experience with virtualization, you start looking at the comprehensive components in your environment or your infrastructure.
You start understanding what storage has to do with virtualization. You look at the impact of networks, when you start doing consolidation in virtualization. You start understanding a little bit more about security, for example.
Also, virtualization, in and of itself, really allows you to consolidate the sheer number of servers, but each of those virtual servers still needs to be managed. So, you get a better view of the overall impact of device management, as well as virtual machine management.
HP is unique in its ability to understand this from a client perspective, from a server perspective, and, as I mentioned, from a storage, software, and network perspective. It's actually a tremendous opportunity for HP to work with our customers to give them an overall strategy for how all of those components work together to deliver the value they are looking for in virtualization: to look at cost, and, as you mentioned earlier, at flexibility, security, disaster recovery, and rapid presentation of applications. We are in a unique position in the industry to be able to help our customers address all of those issues, which have an impact on virtualization.
Gardner: Virtualization, of course, has been targeted largely at individual server farms or data centers, but, as we are describing it, it really does impact quite a bit across the board for IT. I'm also wondering what the impact is on the business. Let’s go to Dionne. What are the business outcomes, values, or productivity benefits that virtualization supports and underscores and that help the IT people make the case for this investment?
Morgan: One of the key areas is cost reduction. Virtualization can help with major cost savings, and that can include savings in terms of the amount of hardware they purchase, the amount of floor space that’s utilized, the cost of power and cooling. So, it improves the energy efficiency of the environment, as well as just the cost of managing the overall environment.
A lot of customers look to virtualization to help reduce their cost and optimize how they manage the environment. Also, when you optimize the management of the environment, that can also help you accelerate business growth. In addition to cost reduction, customers are also beginning to see value in having more flexibility and agility to address the business demand.
You have this increased agility or ability to accelerate growth. It also helps to mitigate risk, so it’s helping improve the up-time of the environment. It helps address disaster recovery, and business continuity. In general, you can summarize the business outcomes in three areas: cost reduction, risk mitigation, and accelerated business growth.
Gardner: Virtualization also adds complexity. When you've got multiple instances running on a single piece of hardware, or the hardware is virtualized, there is a management hurdle. Bringing this into play across both the physical and the virtual infrastructure is another management hurdle. I wonder if anyone on our panel could help me understand a little bit more about doing this the right way from a management perspective.
Norton: What's interesting about this is that, when you get into a virtualized environment, there's a need to understand the heartbeat of the virtualized environment and what's going on at the hardware level. As you move up from there to the virtual machines, you have to understand how the virtual machines themselves are performing, and then there has to be an understanding of how the actual applications are performing within those virtual machines.
So, comprehensive management, including virtual machine management, is critical to being successful. One of the other areas that is addressed through management, and talked about a lot, is virtual machine sprawl.
Organizations have gone into virtualization with the hope of reducing the number of servers they manage in their environment. They end up with a reduced number of physical devices, but they actually end up with more servers. Creating virtual machines is less difficult, which is a good thing, because you have more flexibility. But it can also become a burden, because you can quickly lose control of the sheer number of servers, and of the work that goes into managing those servers through patches, upgrades, and the security issues that go along with them.
So, virtual machine management is actually the key contributor to all this. When you think in those terms, you really have to think about both the actual management of the machine, the physical device, understanding the utilization of a processor and the health of the computer itself, and then understanding the health of the virtual machines that sit on top of it.
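As a rough illustration of those two layers (host health plus per-VM health), one pass of a monitoring loop against the libvirt API might look like the sketch below. Libvirt is our example stack here, not one the panel names, and application-level health checks would still sit on top of this as a third layer.

```python
# Illustrative sketch: one heartbeat pass over the physical host and each
# virtual machine on it, via libvirt. Application health is not covered here.
import libvirt

conn = libvirt.open("qemu:///system")  # local hypervisor connection

# Physical layer: CPU count and memory of the host itself.
model, mem_mb, cpus, *_ = conn.getInfo()
print(f"host: {cpus} CPUs, {mem_mb} MB RAM")

# Virtual layer: state and resource usage of every guest on this host.
for dom in conn.listAllDomains():
    state, max_mem, mem, vcpus, cpu_time = dom.info()
    label = "running" if state == libvirt.VIR_DOMAIN_RUNNING else f"state={state}"
    print(f"  vm {dom.name()}: {label}, {vcpus} vCPUs, {mem} KiB RAM")

conn.close()
```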
HP has a unique ability, because we've been working with virtualization since the 1990s. We've been working with virtualizing and understanding the physical nature of the devices for years, and our engineering groups now have invested a lot of time in working with our partners -- VMware, Microsoft, and Citrix -- to understand their virtual machine management and how our tools and their tools can work together and become integrated to provide that comprehensive view that is required now to really properly manage virtual machines.
Gardner: And, we're talking about a heterogeneous environment from the start with this. According to analyst reports, some 80 percent of enterprises are using virtualization on multiple platforms, with half using three or more platforms. So, this really becomes a critical management issue from that perspective.
Let's go to Dionne and talk about what HP is calling a rethinking of infrastructure. We've talked about the paybacks as an economic incentive and an agility incentive. Organizations can use virtualization to support and augment some of their ongoing work toward consolidation, unification, and modernization, part of the long-term trend of IT transformation, but you are thinking that this is also a milestone point for rethinking infrastructure. I wonder if you could help us understand what you mean by that.
Morgan: Organizations need to think not only about their servers, their storage, and their network from a virtualization perspective, but to look at this from an integrated perspective and have an integrated management view of the data center. It's not just about the technology. They also have to think about this in terms of the people, the processes, and the technology.
Tom was describing how we can help manage the physical and the virtual. In addition to that, we also need to look at how we manage the ongoing processes, which are going to be responsible for "operationalizing" the virtual environment. This could include the adoption of key industry best practices and standards.
Some best practices that come to mind are those from the Information Technology Infrastructure Library (ITIL): how you actually use these ITIL processes, and how you take it a step further and automate some of those processes. It's an industry best practice for managing the services that you deliver to the business.
It’s very important to look at the technology, both the physical and the virtual, the processes required to manage, the automation of processes to manage the virtual environment, and also the people within your organization, ensuring that they have the right skill sets and the right information to utilize and take advantage of this virtualization investment.
Gardner: Let's take that point about personnel to Greg. Tell us, what are the skill sets? It sounds like this is a bit different. Is there training, and the ability to bring your IT operations staff up to spec on this? Is there too much demand in the field for people with experience? What's the outlook for the human-resources aspects of virtualization?
Banfield: That's a great question. One thing we have heard from the consulting side is that people understand, customers understand, and CIOs understand the cost savings and those types of things.
What they are asking us, when we go and do these things, is: "I understand we are going to save money. I understand my server count is going to go down. What I am struggling with is the people and the processes. I have many processes to handle within my infrastructure, and people whom I need to get redeployed or re-energized into other things that will actually generate growth for our company, as opposed to just shepherding servers as administrators."
From that perspective, again, ITIL, as we just mentioned, is a great tool that we can use in the processes. From HP's perspective, our consultants have done this many times, in house and with other customers. We bring to the table the know-how from having done this before, from doing transformation projects, so we can help the customer move from where they are today to where they need to be from a virtualization perspective.
It's not the infrastructure so much, although we can do that. The bigger piece is: how do we get from where we are today, with our processes and people, to where we need to be from an infrastructure standpoint in a virtualized world?
So, yes, our folks are trained. We have many people certified in ITIL, in virtualization, and in our partner certifications with VMware and Microsoft. It's a great opportunity for our customers to work with HP. We have a wealth of knowledge, from a training perspective and from practical know-how, from just having done it before.
Gardner: I think we have a sense of the vision here, the promise, and also some of the challenges. So, HP on Sept. 2 came out with a number of announcements and methodologies. We are looking at virtualization from a strategy perspective, a design perspective, a transition and integration basis, and then ongoing improvement and return on investment (ROI). Let's look at the first two, strategy and design. What are we talking about, in terms of the Sept. 2 virtualization announcements, with regard to strategy and design?
Norton: Strategy is becoming even more important. Our customers are very aware, as everyone else is now, that they have many options available to them as far as virtualization, not only from a perspective of what to virtualize in their environment, but also from a number of partners and technology suppliers who have different views or different technologies to support virtualization.
Our customers, from a strategy and design perspective, have looked to us to provide some guidance that says: "How can I get an idea of the net effect that virtualization can have in my environment? How can I present that and gain that experience, but at the same time understand my long-term view of where I want to go with virtualization, because there is so much available and there are so many different options? How do I make a logical and sensible first attempt at virtualization, where I can derive some business value quickly, but also match that up against a strategy for a long-term vision?"
What we are trying to supply with these new services around virtualization is the idea that we can provide our customers with a strategy and a short-term proof of concept, a short-term, rapid, or accelerator implementation of virtualization, whether it's on the desktop side or on the server side with Microsoft's new Hyper-V. Having gained that experience, they can have it contribute to a long-term vision and a long-term infrastructure design.
What we are trying to do is take the complexity out of an introduction to virtualization. We're trying to take the complexity out of the long-term vision and planning, give customers an idea of what their journey looks like, and introduce it rapidly, but in the right direction, so they are following their overall vision and gaining their overall business value.
Gardner: It sounds really important to bring all of the numerous aspects of IT that are affected by this onto the same page, under a road map with the same vision, and then get into a lifecycle perspective. Now, once we've got our vision, we have our perspective, and we have got all the people on board, it's down to brass tacks, and then transition and integration. Greg, what's in store for the HP community, vis-à-vis this level of the deployment?
Banfield: Then, we would have our HP Consulting and Integration consultants come in and work with the customer. We've gone through the design phase and the strategy phase, and now we work with the customer to take what we've got on paper and get it going. Typically, we do something in a phased approach, because we're talking about some very large projects. As we've talked about for the last 20 or 25 minutes here, it's a complex environment that we're dealing with. We're dealing with multiple vendors, multiple business groups, and multiple applications, everything impacting something different.
We have the design, so we actually start going. We have solution architects and project managers using best practices, working hand-in-hand with the customer to make sure that, as we go through this and there are changes involved, we stay on track.
Of course, as you go through these projects, you have to keep going back, as Tom was mentioning, to your original strategy and your original design, and keep checkpoints. Are we still meeting the criteria for the business? Is what we learned during the first two phases still valid through the implementation and the transition and integration?
We keep reassessing, as in any large project we go through or anyone we would do. You validate against your milestones and checkpoints and then make adjustments as needed.
Gardner: And then, Dionne, as you mentioned earlier, the business outcomes are important, and the improvements in ROI come into play. So, it's not enough just to deploy and then sit back and wait for the benefits of virtualization. This is an ongoing process, very dynamic and changeable. I think one needs to tweak and manage their resources to improve productivity and get that economic return. Can you tell us a little bit more about what HP has in mind for this long-term economic value?
Morgan: Once you actually transition your solution into production, you have to look at the ongoing operations and the continual improvement of those services that you are providing back to the business. In terms of the ongoing operations, you have to continue to assess your people's skills and your operational processes.
HP provides services to assist with those ongoing operations, to help increase the stability of the virtualized environment. That includes everything from education courses to software technical support services and hardware support services. We also have proactive services, which are really focused on the continual-improvement phase of the lifecycle.
On a regular basis, we assess what's happening in the organization from a people, process, and technology perspective. We benchmark against what's happening in the industry, making recommendations on where a customer can actually improve some of those processes, to improve efficiency and the service levels they are providing to the business. We also assist with the implementation of some of those process improvements.
If you look at this from a full lifecycle perspective, HP provides services to assist with everything from strategy, to design, to transition, to the ongoing operations and continual improvement.
Gardner: It was mentioned earlier that HP has gone through a good deal of this virtualization transformation itself. It has also worked with some leading-edge customers to deploy and refine. Do we have any metrics, do we have any view into what this means in terms of payback? Is this iterative, minor, 10 percent? What kind of payback are we typically starting to see from a well-planned, well-organized, well-implemented virtualization strategy?
Norton: I don't know if every company is going to be the same, as far as what they may desire to achieve. We've had examples of customers. Greg's group worked with a financial organization through an accelerator service, in other words, going through the whole strategy and discovery phase and measuring their environment to look at capacity. They have seen reductions from 300 servers in their environment to 30, at least in the sample of servers that were evaluated.
That’s just one customer’s example, and everyone could potentially be different, but the idea is just the same. You can look at the number of physical devices and go through an analysis that will look at how these applications can be virtualized and what the utilization of the equipment is. You can have a simple reduction in the number of devices.
HP will also, as we did in our own organization, look at the actual application that's being virtualized. Maybe it's not just a case of reducing the number of physical devices and having the same number of servers running. True savings come in when you've decided to reduce the number of instances of an application that may be running on servers. You can add this sense of application virtualization.
The classic example in those cases is an organization that may have 200 remote Microsoft Exchange Servers in its environment. It can look at bringing those distributed remote servers into a data center environment and find cost savings in administration and data protection. But, there’s still a huge expense if those Exchange Servers are simply sitting on virtual machines. Say you still have 40 Exchange Servers; you are still managing each one of those.
Another saving comes in there too, where you decide, "I am actually going to reduce the sheer number of that 40. I may reduce my Exchange Servers from X number of devices to a quarter of that." Then you have also reduced the number of Exchange instances running within that consolidated environment, and that dramatically affects the cost savings.
Cost savings vary, but they can be dramatic. They can be dramatic in CAPEX on the hardware base, and they can be very dramatic from an application-management or server-management perspective.
Organizations now are looking, as HP did, at both areas: reducing the sheer number of physical devices in the data center, and reducing the number of instances of an application actually running on servers, to provide even greater benefit.
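[Illustrative aside: a toy cost model shows why cutting application instances matters beyond cutting boxes. All per-unit costs and counts below are made up for the sketch.]

    def annual_cost(instances, physical_hosts, per_instance=8000, per_host=3000):
        # per_instance: assumed yearly admin/patching cost of one Exchange instance
        # per_host: assumed yearly power/space/maintenance cost of one physical box
        return instances * per_instance + physical_hosts * per_host

    print(annual_cost(instances=200, physical_hosts=200))  # remote sites: 2,200,000
    print(annual_cost(instances=40, physical_hosts=10))    # virtualized: 350,000
    print(annual_cost(instances=10, physical_hosts=10))    # app-consolidated: 110,000

Most of the second drop comes from the instance term, which is the point being made here.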
Gardner: I suppose, generally, what we are able to do now with virtualization is to match supply and demand with much more precision than we could in the past. Back then, we had to throw huge amounts of resources at a problem with brute force, sort of a blunt-instrument approach, in order to make sure that we could accommodate all sorts of demands, spikes, and requirements.
Now, we are able to use virtualization to refine these supply-and-demand equations, so that we can pool resources at the infrastructure level, pool resources at the application level, and reduce a lot of waste and unnecessary or underutilized resources.
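[Illustrative aside: "pooling resources" is essentially a packing problem. A minimal first-fit-decreasing sketch, with hypothetical demand figures, places workloads onto shared hosts instead of giving each one its own peak-sized server.]

    def place(demands, host_capacity=100):
        # demands: average utilization of each workload, in percent of one host
        hosts = []  # remaining capacity on each pooled host
        for d in sorted(demands, reverse=True):
            for i, free in enumerate(hosts):
                if d <= free:
                    hosts[i] = free - d  # fits on an existing host
                    break
            else:
                hosts.append(host_capacity - d)  # open a new host
        return len(hosts)

    # Eight workloads that would otherwise mean eight dedicated servers:
    print(place([30, 25, 20, 15, 10, 10, 5, 5]))  # prints 2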
Banfield: Another thing that Tom was hitting on, besides the physical savings in the environment with power and air conditioning and things like that, is agility -- agility to market. As Tom was saying, you can now move applications and other things around. Your workforce becomes much more agile in addressing critical business needs in a very timely manner with virtualization. I think that’s key for our customers.
Gardner: So, if we want to move a whole new set of applications to our Asia-Pacific operations and target a whole new set of customers there, the ramp-up takes much less time and is much more something you can manage, rather than requiring a forklift upgrade. Is that correct?
Banfield: Absolutely.
Gardner: As an analyst, I get some questions frequently, and one of them I have to throw out to you guys, because it’s sort of an obvious one. Why would a company that makes a significant amount of money from hardware want to reduce the number of hardware instances? How does that help you, or what are the long-term implications that I am missing?
Norton: What happens is that you are driving a change in platform. You move from, say, individual instances of a device that sits in a branch office someplace, aging and isolated -- disconnected, in essence, because it’s separate from all the processes that you have in the data center.
From a hardware perspective, it’s a great opportunity for HP, not only because we are changing out some of these legacy platforms that have been sitting in these remote offices, but because we are enabling our customers to run on a much more effective, newer platform. It’s a much more powerful platform, with direct connectivity to more powerful storage systems and more powerful networks that run in the data center.
It’s a plus for both. Our customers gain an advantage, because there are going to be overall savings in how much money they spend on that old equipment, how much maintenance cost they carry, and how much systems management they need to do for a device that sits out there, or even sits in their data center, and has to be supported in a much less efficient way.
We can save them money by moving them to more powerful, more efficient platforms. At the same time, it allows us to introduce our customers to these new devices, which provide a wealth of benefits in performance, security, stability, and high availability. It’s a win for both organizations.
Gardner: Okay, let’s look at the actual announcements of Sept. 2. I'm going to break out one first, and that’s the desktop virtualization announcements -- virtual desktop infrastructure (VDI) solutions services, using Citrix XenDesktop.
Again, we're looking at a pretty radical shift in the types of end-user devices. We could start using some thin clients, and there is a security and risk-reduction opportunity in bringing the data and application-configuration information onto the server. The end users basically have a seamless environment. They're getting the same desktop and operating system that they’re accustomed to.
There are tremendous opportunities to save costs here. Before we drill into each of these announcements, let’s just break out the virtual desktop and infrastructure set. Tom, let’s go to you on that first. What’s the big deal here? What are we talking about when we reduce the amount of actual client-side activity vis-à-vis virtualization?
Norton: When our customers sit down and do a study, we help them look at the cost of managing client or end-user devices in the field, not only from a help desk standpoint but from a productivity standpoint and from an application-presentation viewpoint. For the end user, the applications they use and how they are presented are the heartbeat of the business.
The data that they use is so sensitive, and so important to the organization as a whole. Keeping their productivity up can cost money, or it can save the organization money. So, you look at changing somebody from a very insecure, volatile device in a remote environment that they use on a daily basis.
Gardner: So a local laptop for example.
Norton: A laptop, right. You can still create that rich experience that they are used to, but give yourself the security of knowing that the data they are using is protected from theft, and also protected in terms of archiving and search availability for governmental regulations. You can give users some of that rich experience, but still have that protection. You can look at that device and understand the cost and complexity of either upgrading it, presenting an application to it, or deploying an application to it.
It’s extraordinarily expensive to do that, and if users can still get the experience of a more rapid presentation of the applications they need to do their job on a daily basis, both of those are incredibly valuable to the organization.
If you can get those two advantages, you are going to reduce help desk calls from your end users, including in the case of a disaster. If you have a notebook and it fails, for example, how do you get that person back up and working again, with access to the data and the applications they need?
You can accelerate that recovery. Today, you are devoting enormous amounts of management and spending enormous amounts of money on every device every year. You can accelerate recovery and provide the same rich experience that these new technologies allow us to deliver.
If you look at a virtual device now, you can say to the end user, "You will get the operating system that you need. You will get the application that you need. And, it will be in the environment that you expect to work in. You have the same user state you have had."
If you can combine all three of those in a virtualized environment, you are actually, in the end, providing more productivity for the end user and, at the same time, cutting the management cost. You're also enabling yourself to cut other support costs in the organization, like how much money you spend to protect data, to restore data, or to guard it from theft.
So there are enormous advantages to both, but it doesn’t work in every instance. If you have remote users who don’t have daily or hourly connectivity back to a host, it may not be to your advantage to use this technology there. But, for most organizations there is certainly a large part of their population that can take advantage of the technology.
Gardner: We've already seen a lot of this in use in some government organizations, particularly in intelligence and military communities, where they can’t take the risk of having an end device being lost or falling into the wrong hands. So, the stateless approach to computing is quite popular and proven there. Isn’t that right?
Norton: Right. You have a public sector, which is very sensitive, but you can imagine the same in terms of healthcare and financial organizations. You can extend that idea.
It may not just be sensitive data. It may be repetitive tasks or frequent upgrades of applications. You have large segments of users who would otherwise have redundant equipment; they have no need for a rich experience, but they may need an application refreshed on a predictable basis. This allows you to do that gracefully.
Gardner: Once again, this strikes me as aligning supply and demand -- what the end user actually needs in terms of resources -- versus the equivalent of 20 years ago, with a supercomputer on every desktop.
Norton: That’s correct.
Gardner: Let’s go to these announcements one by one very quickly, so we can give our audience a sense of the breadth and depth of this wave of addressing the virtualization issue. The first is HP Virtualization Accelerator Services. Greg, can you tell us quickly what this means?
Banfield: As we talked about, virtualization is a lifecycle, a journey. HP has Accelerator Services, which are predefined, modular services from the consulting organization that customers can plug into wherever they are within that lifecycle. Because this is a lifecycle, customers could be at any point, whether design or strategy. Maybe they're just starting, are halfway through a project, or are toward the end of a virtualization project.
As we talked about, maybe the business outcomes weren’t exactly matched up with the original design, and they need some help in that area. Consulting and Integration comes in with these Accelerator Services to help the customer through those difficult times, or at any point in the lifecycle, to make sure they are gaining the full value from their virtualization journey.
We can talk about each individual package or service, if you like, as we move forward.
Gardner: We'll come back to the services once we get through the major elements.
There are also the VDI services that we just discussed. I’ll just touch base with Tom one more time on that. It seems to me that with desktop virtualization, we're sort of getting the best of the old and the new. The old paradigm was centrally organized and managed, even back in the mini and mainframe days.
There were a lot of benefits to the organization in doing it that way, but the end user didn’t get the flexibility, the innovation, the freedom, and so forth. Now, we're able to blend the two of them to get that centralized benefit for operations -- upgrades, maintenance, management, and aligning the supply of resources with the demand of the end user much more efficiently.
At the same time, we're giving users a Microsoft Windows desktop, where they can pick and choose, move, and get a lot of resources still using a browser. Am I off base here or are we really looking at the best of both worlds?
Norton: Absolutely. It does a number of things. Everybody uses PCs at home. The generation we are working with now has grown up with that equipment, so they are very accustomed to having a personalized work environment. They are used to having some flexibility to obtain applications and run them on their own devices.
They are accustomed to performance and also access to data -- not having to wait for access, not having to wait for what historically has been a very slow change-management process on mainframe-based systems to add or change an application.
They are used to that agility, to that high frequency of change. Up until now, many people have been resistant to making that move, because they don’t want that rich experience impacted. Now, you get the great benefit of keeping the rich experience but, at the same time, having the ability to take advantage of what consolidation means -- the predictability, disaster recovery, and security you never could get before in a more unpredictable world.
Through VDI, you really get the idea that you have the best of both, from a consolidation perspective as well as a distributed-computing perspective.
We feel we are satisfying both ends. When we look at VDI, it’s kind of interesting. It touches both the back-end systems and the end-user client. Sometimes with virtualization, people just think in terms of the back office, the server room, the data center transformation idea.
With VDI now, you have bridged that gap, where you can do things on the desktop side, as you mentioned earlier, by taking advantage of thin clients. HP is producing some great thin-client technology. You can extend the life of current hardware, if you wish.
If you're mid-term in the lifecycle of a notebook or desktop, you are not really ready to retire it yet, but you don’t really want to spend a considerable amount of money to upgrade. You can extend the life of that device and make it more useful by combining that device with this type of technology.
At the same time, if you have a high-performing device, this gives you the flexibility to virtualize just one application out to that device. So, it gives enormous flexibility on the front side.
On the back side, in the datacenter play, it allows you to do a lot of things. You can take advantage of all the benefits of blade technology, that whole idea we've discussed before about storage, virtualizing storage, and having better access to available storage.
You may run out of storage on a notebook device, but you can request and expand your storage capability on a storage area network (SAN) and go forward from there. So, it’s unique in that it can address both the client side, the end-user-facing side, and the efficiency, predictability, and performance that you want in a data center.
Gardner: Part and parcel of virtualization technology is the need for planning and ongoing management and then professional services, methodologies, and best practices. That’s why we have a number of virtualization support services announced as well.
I'll run down the list. HP Virtual Server Environment Solution Service. I assume this is about improving energy use, footprint, and resource utilization.
HP High-Performance Computing Cluster Management Solution Service, HP Integrity/HP9000 Solution Service, HP Server Solution Project Management, HP Virtual Server Solution Planning and Design, HP Global Workload Manager Solution Services, and HP Virtual Desktop Infrastructure Solution Services.
Let’s go to Greg. What are the high points here? We don’t have too much time, so give us sort of an overview of what these services involve and how comprehensive they are in terms of the whole series of virtualization opportunities that organizations will face.
Banfield: Because we're talking modular, all the services you mentioned come in a modular or tiered fashion. Any one of these can be molded to the customer’s need, whether you have 200 servers or 1,000 servers.
The tiered approach makes it very easy for the customer to pick and choose what they need, depending on where they are in the lifecycle. They are predefined and data-sheeted, so the customer can read what they're going to receive from HP. These seven or eight different services address different points within the life cycle.
All these services do come with project management. Some customers, as I said, are half way through the lifecycle, or on their way, and maybe they just need a little help with project management types of activity. So HP can provide a PMI-certified person to come in there and help them, maybe just work with them to get the project back under control. Maybe it's off a little bit.
So any one of these is a great way for a customer to take a look at our solutions. Again, they are couched to be sort of a quick hit, easy to use. You don't have to just pick one service. If you have different needs, you can say, "I need to take the Virtual Server Environment Solution service, and I need Global Workload Manager to create my entire solution." Again, it's easy for the customer to understand, and then move forward with the project.
Gardner: Let’s look at how you can get started. These announcements are targeted at the U.S. initially, and you are taking them out globally during 2009, which isn’t that far away now. Tell us how an organization can get started and where it can develop this strategic overview of virtualization.
Norton: There are a couple of different ways that customers can engage to get this started. The first is through our traditional sales organization. From a hardware perspective, we have our traditional enterprise account managers and their associated services client principals, the associated services managers who work on those accounts. That’s a very traditional way to engage with our technology teams, who provide these kinds of services, from both a support perspective and a consulting-services perspective.
But, there are other ways as well. You can work directly with our Microsoft Alliance members, or you can work with our alliance teams -- in this case, we're talking about Microsoft virtualization.
You can also come at it from a services perspective, or from an enterprise account perspective, whether hardware or storage. Those channels should not change at all in terms of how you would work with HP from a services perspective. If you look at those services, we set them up that way too. So, it's very easy for our customers to get engaged.
We are presenting services that allow the customer to get engaged, even if it’s a half-day workshop about virtualization. As Greg mentioned, we have strategy engagements that can go two to three days. We have longer-term proofs of concept that can go three to four weeks. We try to make it as easy as possible from a services perspective, and also from a sales perspective. We are very flexible.
These can even be introduced through our channel partners, sold through the channel and delivered by HP. We are trying to provide flexibility, as well as simplicity, in the services-acquisition process for our customers, so that they don’t have to worry about whom to talk to. When they need to talk, they can go directly to their traditional HP sales force and get introduced to these services.
Morgan: If I can also just add to that, once our customers begin transitioning into production, they also should think about the people and the processes again. From a people perspective, we also announced on Sept. 2 some new education courses, which tie into what Greg and Tom were just describing.
For example, we have an education course on the HP Insight Dynamics - VSE software. We have education courses on partition management, and we also announced an HP Virtualization Boot Camp, which covers Global Workload Manager, Virtualization Manager, Capacity Advisor, and a long list of other technologies.
Customers should really think about getting their people trained in these technologies. And, from an ongoing operations perspective, we also announced some new software technical support services.
We already provide a lot of support services in the virtualization area, but what we're adding to that is support for additional VMware products, such as VMware Workstation, VMware Lab Manager, and VMware Site Recovery Manager, as well as operational support for Citrix XenDesktop and, of course, the new HP hardware, such as the HP ProLiant BL495c virtualization blade.
Gardner: Very good. I think we've covered a lot of territory here today, from vision down to actual product and service offerings. Clearly, this is something that companies are going to be dealing with for a long time. We're already seeing forecasts for virtualization use to grow broadly in the coming year: 50 percent growth in 2008 alone, and 70 percent in just the previous two years.
So, we appreciate everyone’s input and wish you well with this series of announcements. We have been discussing virtualization at the application, desktop, and infrastructure server levels, as well as the road map and lifecycle-management issues associated with it. We've been joined by Greg Banfield, a consulting manager with the Hewlett-Packard Consulting and Integration group infrastructure practice. Thank you, Greg.
Banfield: Thank you for having me.
Gardner: Dionne Morgan, worldwide marketing manager for HP’s Technology Solutions Group. Thank you, Dionne.
Morgan: You're welcome.
Gardner: And Tom Norton, worldwide practice lead for Microsoft Services at HP. Thank you, Tom.
This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.
Transcript of BriefingsDirect podcast with Hewlett-Packard on series of Sept. 2 announcements on enterprise virtualization products and services. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
We keep reassessing, as in any large project we go through or anyone we would do. You validate against your milestones and checkpoints and then make adjustments as needed.
Gardner: And then, Dionne, as you mentioned earlier, the business outcomes are important, and the improvements in ROI come into play. So, it’s not enough just to deploy and sit back and wait for the benefits with virtualization. This is an ongoing process, very dynamic, changeable. I think one needs to tweak and manage their resources to improve that productivity to get that economic return. Can you tell us little bit more about what HP has in mind for this long-term economic value?
Morgan: Once you actually transition your solution into production, you have to look at the ongoing operations and the continual improvement of those services that you are providing back to the business. In terms of the ongoing operations, you have to continue to assess your people's skills and your operational processes.
HP provides services to assist with its ongoing operation to help to increase the stability of the virtualized environment. That includes everything from education courses, to software, technical support services, and hardware support services. We also have proactive services, which are really focused on that continual improvement phase of the lifecycle.
On a regular basis, we assess what’s happening in the organization from a people, a process, and a technology perspective. We benchmark against what’s happening in the industry, making recommendations on where a customer can actually improve, on some of those processes to improve efficiency, and to improve on the service level they are providing to the business. We also assist with the implementation of some of those process improvements.
If you look at this from a full lifecycle perspective, HP provides services to assist with everything from strategy, to design, to transition, to the ongoing operations and continual improvement.
Gardner: It was mentioned earlier that HP has gone about a good deal of this virtualization transformation itself. It also worked with some leading-edge customers to deploy and refine. Do we have any metrics, do we have any view into what this means in terms of payback? Is this iterative, minor, 10 percent? What kind of payback typically are we starting to see from a well-planned, well-organized, well-implemented virtualization strategy?
Norton: I don’t know if every company is going to be the same as far as what they may desire to achieve. We've had examples of customers. Greg’s group worked with a financial organization through an accelerator service, in other words going through the whole strategy and discovery phase and trying to measure their environment to look at capacity. They have seen reductions to go from 300 to servers in their environment to 30, at least in the sample of servers that have been evaluated.
That’s just one customer’s example, and everyone could potentially be different, but the idea is just the same. You can look at the number of physical devices and go through an analysis that will look at how these applications can be virtualized and what the utilization of the equipment is. You can have a simple reduction in the number of devices.
HP will also, like our own organization, look at the actual application that’s being virtualized. Maybe it’s not just the case of reducing the number of physical devices and having the same number of servers running. True savings come in when you’ve decided to reduce the number of instances of an application that maybe running on servers. You can add this sense of application virtualization.
The classic example in those cases is an organization that may have 200 remote Microsoft Exchange Servers in their environment. They can look at bringing those distributed remote workstation into a data center environment and find cost savings in administration and data protection. But, there’s still a huge expense in how those Exchange Servers are still sitting on virtual machines. So you still have 40 Exchange Servers and you are still managing each one of those.
Another saving gets involved in that too, where you decide, "I am actually going to reduce the shared number of that 40. I may reduce my Exchange Servers from X number of devices to a quarter of that." Then, you still have those devices that reduce the number of Exchange on servers running within that consolidated environment as well, and that dramatically affects that kind of cost saving.
Cost savings vary, but it can be dramatic. It can be as dramatic as CAPEX expenditures in the hardware base and it can be very dramatic from an application-management perspective or a server management perspective.
Organizations now are looking, like HP did, in both areas, reducing the shared number of physical devices in the data center, and reducing the number of instances of an application that are actually running on servers to provide you even greater benefit.
Gardner: I suppose, generally, what we are able to do now with virtualization is to match supply and demand with much more precision than we could in the past. In the past, we had to throw huge amounts of resources at a problem with brute force and sort of a blunt instrument approach in order to make sure that we could accommodate all sorts of demands and spikes and requirements,
Now, we are able to use virtualization to refine these supply-and-demand equations, so that we can pull resources at the infrastructure level, pull resources at the application level, and reduce a lot of waste and unnecessary or underutilized resource.
Banfield: Another thing that Tom was hitting on, besides this physical savings of the environment with power and air conditioning and things like that, is agility -- agility to market. As Tom was saying, you can now move applications and other things around. Your workforce becomes much more agile to address critical business needs in a very timely manner with virtualization. I think that’s key to our customers.
Gardner: So, if we want to move a whole new set of application to our Asia-Pacific operations and target a whole new set of customers there, the ramp up to doing that is much less time and much more something you can manage, rather than have to forklift upgrade, is that correct?
Banfield: Absolutely.
Gardner: As an analyst I get some questions frequently, and one of them I have to throw out to you guys, because it’s sort of an obvious one. Why would a company that makes a significant amount of money from hardware want to reduce the number of hardware instances? How does that help you, or what is the long-term implications that I am missing?
Norton: What happens is, as you are going to change in a platform, when you move from, say, individual instances of a device that sits in a branch office some place, and it’s maturing and it’s isolated, it’s disconnected in essence, because it’s separate from all the processes that have in the data center.
From a hardware perspective, it’s a great opportunity for HP, not only because we are changing some of these legacy platforms, as they will be sitting out in these remote offices, but we are enabling our customers to actually run on a much more effective and newer platform. It's a much more powerful platform, with direct connectivity to more powerful storage systems, and more powerful networks that run in datacenter.
It’s a plus for both. Our customers gain an advantage, because there are going to be savings overall in how much money they spend on that old equipment, how much maintenance cost they have, how much systems management they need to do for this device to sit out there or even sit in their data center, and have to be supported in much less efficient way.
We can save them money by moving them to more powerful, more efficient platforms. At the same time, it allows us to introduce our customers to these new devices, that provide them a wealth of benefits, from the performance perspective, on security, stability, and high availability. It’s a win for both organizations, along those views.
Gardner: Okay, let’s look at the actual announcements of Sept. 2. I'm going to break out one first, and that’s the desktop virtualization announcements -- virtual desktop infrastructure (VDI) solutions services, using Citrix XenDesktop.
Again, we're looking at a pretty radical shift in the types of end-user devices. We could start using some thin clients, and there is a security and risk reduction opportunity for bringing the data and applications-configuration information onto the server. The end users basically have a seamless environment. They're getting the same desktop and operating system that they’re accustomed to.
There are tremendous opportunities to save costs here. Let’s look before we drill into each of these announcements. Let’s just break out desktop, the virtual desktop and the infrastructure set. Tom, let’s go to you on that first. What’s the big deal here? What are we talking about, when we are going to reduce the amount of actual client-side activity vis-Ã -vis virtualization?
Norton: When our customers sit down and do a study, we help them look at the cost of managing client or end user devices in the field, not only from a help desk study but from a productivity study, from an application presentation viewpoint for the end user, the applications that they use and how they are presented is the heartbeat of business.
The data that they use is so sensitive, and so important to the organization as a whole. When they need help in keeping their productivity up, it can cost money or it can save the organization money. So, you look at changing somebody from a very insecure, volatile device in a remote environment that they use on a daily basis.
Gardner: So a local laptop for example.
Norton: A laptop, right. You can still create that rich experience that they are used to, but give yourself the security of knowing that the data that they are using is protected from theft and also protected, as far as archiving and search availability, from governmental regulations. They can give users some of that rich experience, but still have that protection. You can look at that device and understand the cost and complexity of either upgrading the device, presenting an application, or deploying an application to that device.
It’s extraordinarily expensive to do that and, if they can still get the experience of a more rapid presentation of the applications that they need to their job on a daily basis, both of those are incredibly valuable to the organization.
If you can get those two advantages, you are going to reduce help desk calls from your end user in the case of a disaster. If you have a notebook and it fails, for example, how do you get that person back up in working again, access the data they need, and access the applications they need?
You can accelerate that recovery. You are receiving enormous amounts of management and spending enormous amounts of money on every device every year. You can accelerate recovery and provide them the same rich experience that these new technologies allow us to do.
If you look at a virtual device now, you can say to the end user, "You will get the operating system that you need. You will get the application that you need. And, it will be in the environment that you expect to work in. You have the same user state you have had."
If you can combine all three of those in a virtualized environment, you are actually, in the end, providing more productivity for the end user, and, at the same time, cutting the management cost. You're also enabling yourself to cut other support costs in the organization, like how much money you spend to protect data, how much money you spent to restore data, or protect it from theft.
So there are enormous advantages to both, but it doesn’t work in every instance. If you have remote users who don’t have daily or hourly connectivity back to a host, it may not be to your advantage to use this technology there. But, for most organizations there is certainly a large part of their population that can take advantage of the technology.
Gardner: We've already seen a lot of this in use in some government organizations, particularly in intelligence and military communities, where they can’t take the risk of having an end device being lost or falling into the wrong hands. So, the stateless approach to computing is quite popular and proven there. Isn’t that right?
Norton: Right. You have a public sector, which is very sensitive, but you can imagine the same in terms of healthcare and financial organizations. You can extend that idea.
It may not just be sensitive data. It maybe repetitive tasks or frequent upgrades of applications. You have large segments of users who would have redundant equipment, and they have no need for a rich experience, but they may need an application refreshed on a predictable basis. This allows you to do that gracefully.
Gardner: Once again, this strikes me as aligning supply and demand -- what the end user actually needs in terms of resources, versus having the equivalent 20 years ago with a supercomputer on every desktop.
Norton: That’s correct.
Gardner: Let’s go to these announcements one by one very quickly, so we can give our audience a sense of the breadth and depth of this wave of addressing the virtualization issue. The first is HP Virtualization Accelerator Services. Greg, can you tell us quickly what this means?
Banfield: As we talked about, virtualization is a lifecycle, a journey. HP has Accelerator Services, which are predefined services from the consulting organization for customers to plug-in their module, to plug into where they are within their lifecycle. Because this is a lifecycle, customers could be at any point there, whether design or strategy. Maybe they're just starting, are half way through a project, or maybe towards the end of a virtualization project.
As we talked about, maybe the business outcomes weren’t exactly matched up with the original design, and they need some help in that area. Consulting integration comes with these Accelerator Services to help the customer through those difficult times or any point in the lifecycle to make sure that they are gaining the full value from their virtualization journey.
We can talk about each individual package, if you like, or a service, as we move forward.
Gardner: We'll come back to the services once we get through the major elements.
There are also the VDI services that we just discussed. I’ll just touch with Tom one more time on that. It seems to me also that with desktop virtualization, they were sort of getting the best of the old and the new. The old paradigm was centrally organized and managed, even back in the mini and mainframe days.
There were a lot of benefits to the organization for doing it that way, but the end user didn’t get the flexibility, the innovation, the freedom and flexibility, and so forth. Now, we're able to blend the two of them to get that centralized benefit for operations -- upgrades, maintenance, management, and aligning the supply and resources with the demand of the end user much more efficiently.
At the same time, we're giving users a Microsoft Windows desktop, where they can pick and choose, move, and get a lot of resources still using a browser. Am I off base here or are we really looking at the best of both worlds?
Norton: Absolutely. It does a number of things. Everybody uses PCs at home. The generation we are working with now has grown up with that equipment, so they are very accustomed to having a personalized work environment. They are used to having some flexibility to obtain applications and run them on their own devices.
They are accustomed to performance and also access to data, not having to wait for access, not having to wait for what historically has been a very slow change management process on mainframe based systems to add an application or change an application.
They are used to that agility, to that that high frequency of change. Up until now, many people have been resistant to make that move, because they don’t want to have that rich experience impacted. Now, you get that great benefit of having the rich experience, but, at the same time, you have the ability to take advantage of what consolidation means -- the predictability, disaster recovery, security, those types of developments which you never could get before in a more unpredictable world.
Through VDI, you really get that idea that you have the best of both, from a consolidation perspective as well as distributed to computing perspective.
We feel we are satisfying both ends. When we look at VDI, it’s kind of interesting. It touches both the back end systems and it touches the end user client. Sometimes with virtualization, people just think in terms of that back office, the server room, the datacenter transformation idea.
With VDI now, you have taken it and bridged that gap, to where you can do things on the desktop side, as you mentioned earlier, about taking advantage of thin clients. HP is producing some great thin-client technology. You can extend the life of current hardware, if you wish.
If you're mid-term in the lifecycle of a notebook or desktop, you are not really ready to retire it yet, but you don’t really want to spend a considerable amount of money to upgrade. You can extend the life of that device and make it more useful by combining that device with this type of technology.
At the same time, if you have a high performing device, this gives you the flexibility to just virtualize one application out to that device. So, it gives enormous flexibility on the front side.
On the back side, in the datacenter play, it allows you to do a lot of things. You can take advantage of all the benefits of blade technology, that whole idea we've discussed before about storage, virtualizing storage, and having better access to available storage.
You may run out of storage on a notebook device, but you can request and expand your storage capability on a storage area network (SAN) and go forward from there. So it’s unique in that it can address both the client side -- the end-user facing side -- and the efficiency, predictability, and performance that you want in a datacenter.
Gardner: Part and parcel of virtualization technology is the need for planning, ongoing management, and then professional services, methodologies, and best practices. That’s why a number of virtualization support services were announced as well.
I'll run down the list. HP Virtual Server Environment Solution Service -- I assume this is about improving energy, footprint, and resource efficiency.
HP High-Performance Computing Cluster Management Solution Service, HP Integrity/HP9000 Solution Service, HP Server Solution Project Management, HP Virtual Server Solution Planning and Design, HP Global Workload Manager Solution Services, and HP Virtual Desktop Infrastructure Solution Services.
Let’s go to Greg. What are the high points here? We don’t have too much time, so give us an overview of what these services involve and how comprehensive they are in terms of the whole series of virtualization opportunities that organizations will face.
Banfield: Because we're talking modular, all the services you mentioned come in a modular, tiered fashion. Any one of these can be molded to the customer’s need, whether you have 200 servers or 1,000 servers.
The tiered approach makes it very easy for the customer to pick and choose what they need, depending on where they are in the lifecycle. They are predefined, and each has a data sheet, so the customer can read what they're going to receive from HP. These seven or eight different services address different points within the lifecycle.
All these services come with project management. Some customers, as I said, are halfway through the lifecycle, or on their way, and maybe they just need a little help with project-management activities. So HP can provide a PMI-certified person to come in and help them, maybe just work with them to get the project back under control if it's off a little bit.
So any one of these is a great way for a customer to take a look at our solutions. Again, they are scoped to be a quick hit and easy to use. You don't have to pick just one service. If you have different needs, you can say, "I need the Virtual Server Environment Solution service, and I need Global Workload Manager, to create my entire solution." Again, it's easy for the customer to understand, and then move forward with the project.
Gardner: Let’s look at how you can get started. These announcements are targeted at the U.S. initially, and you are taking it out globally during 2009, which isn’t that far away now. Tell us how an organization can get started and where they can develop this strategic overview of virtualization?
Norton: There are a couple of different ways that customers can engage to get started. The first is through our traditional sales organization. From a hardware perspective, we have our traditional enterprise account managers and their associated services client principals and services managers who work on those accounts. That’s a very traditional way to engage with our technology teams, who provide these kinds of services, from both a support perspective and a consulting-services perspective.
But, there are other ways as well. You can work directly with our alliance teams -- in this case, our Microsoft alliance, since we're talking about Microsoft virtualization.
You can also come in from a services perspective, or from an enterprise account perspective, whether hardware or storage. Those channels don't change at all in terms of how you would work with HP to bring services people in, and the services themselves are set up the same way. So, it's very easy for our customers to get engaged.
We are presenting services that let the customer get engaged even with just a half-day workshop about virtualization. As Greg mentioned, we have strategy engagements that can go two to three days. We have longer-term proofs of concept that can go three to four weeks. We try to make it as easy as possible from a services perspective, and also from a sales perspective. We are very flexible.
These can even be introduced through our channels, where these can be sold through a channel and delivered by HP. We are trying to provide flexibility, as well as simplicity, in the services acquisition process for our customers, so that they don’t have to worry about who to talk to. When they need to talk, they can go directly to their traditional HP sales force and get introduced to these services.
Morgan: If I can also just add to that, once our customers begin transitioning into production, they should also think about the people and the processes again. From a people perspective, we also announced on Sept. 2 some new education courses, which tie into what Greg and Tom were just describing.
For example, we have an education course on the HP Insight Dynamics BSC software. We have education courses on partition management, and we also announced an HP Virtualization Boot Camp, which covers Global Workload Manager, Virtualization Manager, Capacity Advisor, and a long list of other technologies.
Customers should really think about getting their people trained in these technologies. And, from an ongoing operations perspective, we also announced some new software technical support services.
We already provide a lot of support services in the virtualization area, but what we're adding to that is support for additional VMware products, such as VMware Workstation, VMware Lab Manager, and VMware Site Recovery Manager, as well as operational support for Citrix XenDesktop Server and, of course, the new HP hardware, such as the HP ProLiant BL495c virtualization blade.
Gardner: Very good. I think we've covered a lot of territory here today, from vision down into actual product and service offerings. Clearly, this is something that companies are going to be dealing with for a long time. We are already seeing forecasts for virtualization to grow broadly in the coming year -- 50 percent growth in 2008 alone, on top of 70 percent over the previous two years.
So we appreciate everyone’s input, and wish you well on this series of announcements. We have been discussing virtualization at the application, desktop, and infrastructure server levels, as well as the road map and lifecycle management issues associated with it. We've been joined by Greg Banfield, a consulting manager with Hewlett-Packard's Consulting and Integration Infrastructure group. Thank you, Greg.
Banfield: Thank you for having me.
Gardner: Dionne Morgan, worldwide marketing manager for HP’s Technology Services Group. Thank you, Dionne.
Morgan: You're welcome.
Gardner: And Tom Norton, worldwide practice lead for Microsoft Services at HP. Thank you, Tom.
This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.
Transcript of BriefingsDirect podcast with Hewlett-Packard on series of Sept. 2 announcements on enterprise virtualization products and services. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Thursday, August 21, 2008
Pulse Provides Novel Training and Tools Configuration Resource to Aid in Developer Education, Preparedness
Transcript of BriefingsDirect podcast on Java training and education with Genuitec Pulse for Java and Eclipse.
Listen to the podcast. Sponsor: Genuitec.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.
Today, a sponsored podcast discussion about the complexity around Java training and education. The development toolset, the plug-ins, the community are all very fast-moving targets. It's difficult for trainers, educators -- not to mention the students and budding developers themselves -- to get a full grasp of what's expected of them, and then to find resources that are up-to-date and timely.
We're going to be discussing with some experts how better to organize Java training and education. We're going to look at the Eclipse profiles that can be organized and coordinated using Pulse, Genuitec's tools-configuration network service. We're going to learn how organizations can organize their training, so that students can better anticipate what's expected of them in the real world.
To help us understand some of these issues and work toward some solutions, we're joined by Michael Cote, an analyst with RedMonk. Welcome to the show, Michael.
Michael Cote: Hello, everybody.
Gardner: We're also joined by Todd Williams, vice president of technology at Genuitec. Welcome to the show, Todd.
Todd Williams: Thanks very much, Dana.
Gardner: Lastly, we're joined by Ken Kousen, an independent technical trainer and president of Kousen IT, Inc. He's also an adjunct professor at Rensselaer Polytechnic Institute. Welcome, Ken.
Ken Kousen: Hi, Dana, glad to be here.
Gardner: As I mentioned, complexity and moving targets are part of the problem, but it also seems that there is a disconnect between the book knowledge or training knowledge that one gathers about development and what happens in the real world -- what goes on with code being checked in and checked out, and how teams are organized. It seems difficult for someone to anticipate what's really going to happen.
Let's go to Ken first. Ken, what is the gap, from your perspective, between what students and budding developers get through training and in university settings, and then what's often expected of them in the real world?
Kousen: It's interesting. The gap between what's taught in academia and what's taught in the real world is very large, actually. The classes I teach tend to be in a master's level program, and I teach a couple of classes in developing enterprise applications that are specifically constructed to address this gap.
Academia will talk about abstractions of data structures, algorithms, and different techniques for doing things. Then, when people get into the real world, they have no idea what Spring, Hibernate, or any of the other frameworks really are.
It's also interesting that a lot of developments in this field tend to flow from the working professionals toward academia, rather than the other way around, which is what you would find in engineering, when I used to be in that area.
Gardner: Todd, when you're doing hiring or are doing development for business, and you're talking to your customers -- folks that use MyEclipse and your services for training and consulting -- are you seeing a worsening situation in terms of how to acquire qualified labor, or do people have a pretty good sense of where to go to find good Java developers?
Williams: Finding quality employees is always a challenge, and probably always will be. Part of what I see as being difficult, especially in the Java and Enterprise Java market, is the huge number of technologies that are being employed at different levels. Each company picks its own type of stack.
Ken mentioned Spring and Hibernate. There are also the Java Transaction API (JTA), JavaServer Faces (JSF), and Struts, along with the Web framework and persistence technologies and application servers that are in use. Finding employees who fit with what you are trying to do today, with an eye toward maturing them into where you are going tomorrow, is probably always going to be the concern.
Gardner: Now, what's been going on with development, not just the function, but teams, the collaboration, agile types of activities, Scrum? It used to be that people could specialize, stay in one little niche, but now the "master of all trades" seems to be more in demand.
Let's go to Michael. Michael, is development fundamentally changing? When we think of developers, do we need to recast how we imagine them or conceive of them?
Cote: Yes. I think it's fair even to go to the extreme and say absolutely. You look at the employment patterns that most developers find themselves in, and they are not really working at some place three, five, ten, even twenty years. It's not realistic. So, specializing in some technology that essentially binds you to a job isn't really an effective way to make sure you can pay your bills for the rest of your life.
You have to be able to pick up quickly any given technology or any stack, whether it’s new or old. Every company has their own stack that they are developing. You also have to remember that there is plenty of old existing software out there that no one really talks about anymore. People need to maintain and take care of it.
So, whether you are learning a new technology or an old technology, the role of the developer now, much more so in the past, is to be more of a generalist who can quickly learn anything without support from their employer.
You're not going to get a lot of slack to learn things on the job -- dedicated time, paid training, and things like that. You're pretty much left on your own, or there are always cheaper alternatives to go to.
So the heat is really on developers to be Type A people who are always seeking out the best option.
Gardner: Alright. Well, now that we have scared anyone from ever wanting to be a developer, Ken, help us get a little bit closer to earth. What can students do, what can professors or instructors do, to help get more of this real-world perspective into what they do in these courses and in this preparation?
Kousen: It's interesting that while the various tools and technologies evolve, some of the basic principles always hold pretty fast. I've taught this class several times and I have to say that every time I've taught it, it's been very, very different, but the overall architectural issues are pretty constant.
Plus, what seems to follow in the industry are various trends, like an increased emphasis on testing, for example, the recent rise in dynamic languages, and things like that. The idea of continually trying to follow what's going on in the marketplace and seeing what's interesting seems to be very helpful.
I also emphasize to the students that a good source of information is to find some of the better open-source projects, and not necessarily join them, use them, or do anything with them, but follow what they do and see those projects as the communal efforts of some of the best developers in the world.
So, if they all say, "Oh yeah, we obviously have to have this source-control mechanism," then maybe that's an interesting thing that should be looked at, or this particular bug-reporting tool, or whatever. I often emphasize that particular direction as well.
Gardner: How about that, Todd? Are these open-source communities, these chat rooms, these forums, the real, practical lab that the students and developer should be looking towards?
Williams: I think, to a degree, it's certainly a practical lab that students have easy access to. Obviously, in open source, whether it’s something like the Eclipse Foundation, Apache, or what have you, they make a very explicit effort to communicate what they are doing through bug reports, mailing lists, and discussion groups. So, it's an easy way to get involved as just a monitor of what's going on. I think you could learn quite a bit from just seeing how the interactions play out.
That's not exactly the same type of environment they would see inside closed-wall corporate development, simply because the goals are different. Less emphasis is put on external communications and more emphasis is put on getting quality software out the door extremely quickly. But, there are a lot of very good techniques and communication patterns to be learned in the open-source communities.
Gardner: Now, when we go to community, that also means choice, which is a good thing. But, there is also a downside to choice. There are a lot of variables, many different things to look at. Tell us a little bit about the importance of profiling, and when you have got many new plug-ins to choose from, and you've got lots of commentary and social media being generated about what to use and what not to use.
Give us, Todd, if you could, some idea of the problem set that you saw in the marketplace a couple of years ago when you were thinking about Pulse.
Williams: Let me take a step back and quickly explain what Pulse is for those who aren't familiar with it. We built a general-purpose software provisioning system that right now we are targeting at the Eclipse market, specifically Eclipse developers.
For our initial release last November, we focused on providing a simple, intuitive way that you could install, update, and share custom configurations with Eclipse-based tools.
In Pulse 2, which is our current release, we have extended those capabilities to address what we like to call team-synchronization problems. That includes not only customized tool stacks, but also things like workspace project configurations and common preference settings.
Now you can have a team that stays effectively in lock step with both their tools and their workspaces and preferences.
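[Editor's note: To make the shared preference settings concrete, here is what a standard Eclipse preference export (an .epf file, produced via File > Export > Preferences) looks like. Pulse's own storage format isn't described in this discussion, so treat the keys and values below as an illustrative sketch of the kind of team settings being synchronized.]

    # Sample Eclipse preference export (.epf); values are illustrative
    file_export_version=3.0
    # Pin the compiler level so the whole team builds the same way
    /instance/org.eclipse.jdt.core/org.eclipse.jdt.core.compiler.compliance=1.6
    /instance/org.eclipse.jdt.core/org.eclipse.jdt.core.compiler.source=1.6
    # Shared formatter profile keeps diffs clean across the team
    /instance/org.eclipse.jdt.ui/formatter_profile=_TeamProfile
    # One workspace text encoding for everyone
    /instance/org.eclipse.core.resources/encoding=UTF-8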
What drove us to build something like this were a number of things. If you look at the Eclipse market, where we have been for a number of years, there are literally thousands of products and plug-ins for Eclipse. If you just want to go out and take a survey of them, or try some of them, it's a very daunting process for most people.
It starts out when you download Eclipse, go find some plug-ins, possibly looking into Eclipse Plug-in Central, find those update sites, type them in, download the plug-ins, and try them. This pattern repeats for quite some time, while the developer goes out and tries to figure out which of the plug-ins are good and which ones aren't.
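[Editor's note: For contrast, the manual route Todd describes looks roughly like this from the command line with Eclipse 3.4's p2 director. The Ganymede repository URL is real, but the feature ID is a placeholder, and the whole incantation has to be repeated per plug-in, per machine.]

    eclipse -nosplash \
      -application org.eclipse.equinox.p2.director \
      -repository http://download.eclipse.org/releases/ganymede \
      -installIU some.vendor.some.feature.feature.group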
With Pulse, we put these very popular, well-researched plug-ins into a catalog, so that you can configure these types of tool stacks with drag-and-drop. So, it's very easy to try new things. We also bring in some of the social aspects; pulling in the rankings and descriptions from other sources like Eclipse Plug-in Central and those types of things.
So, within Pulse, you have a very easy way to start out with some base technology stacks for certain kinds of development and you can easily augment them over time and then share them with others.
Gardner: Ken, help us understand how this can be used in the training and/or academic setting? What is it about Pulse that brings in more of the real world, and anticipates what choices developers are going to have once they get into the nitty-gritty of coding?
Kousen: Looking at academic and training settings, they are a little bit different. In a training setting, one of the real challenges the training classes face every time is getting the initial classroom set up correct. That is often very involved and complicated, because a lot of the tools involved are somewhat dependent on each other and dependent on environment variables and things like that.
So, trying to set up standard Pulse configurations and then being able to set up a classroom using those shared deployments is a very interesting opportunity. I haven't had the chance to do it yet, but I have definitely been talking to some training providers about giving that a shot.
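[Editor's note: The fragility Ken describes is easy to picture. A hand-built classroom setup depends on environment variables and tool versions being exactly right on every machine; the hypothetical pre-class check below, in plain Java, only verifies that kind of setup -- Pulse's shared deployments aim to remove the need for it.]

    import java.util.Arrays;

    // Hypothetical pre-class sanity check for a hand-built classroom setup.
    public class ClassroomCheck {
        public static void main(String[] args) {
            // Variables a typical Java/Eclipse lab setup depends on
            for (String name : Arrays.asList("JAVA_HOME", "ECLIPSE_HOME")) {
                String value = System.getenv(name);
                if (value == null) {
                    System.err.println("MISSING environment variable: " + name);
                } else {
                    System.out.println(name + " = " + value);
                }
            }
            // Confirm which JVM the student machine actually resolves to
            System.out.println("java.version = " + System.getProperty("java.version"));
        }
    }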
I did try it in a classroom, and it's rather interesting, because one of the students I had this year was coming from the Microsoft environment. I have a very common experience with Microsoft people, in that they are always overwhelmed by the fact that, as Todd said, there are so many choices for everything. For Microsoft, there is always exactly one choice, and that choice costs $400.
I tried to tell them that here we have many, many choices, and the correct choice, or the most popular choice changes all the time. It can be very time consuming and overwhelming for them to try to decide which ones to use in which circumstances.
So, I set up a couple of configurations that I was able to share with the students. Once they were able to register and download them, they were able to get everything in a self-contained environment.
We found that pretty helpful, although I've got to say that this year the class size was sufficiently small, so that I don't know that we really got the same benefit we would get in a large classroom, where there would be many, many setup issues to deal with.
Gardner: So, almost mimicking a collaboration activity in a development setting, but in the classroom.
Kousen: Exactly.
Gardner: Are there any particular things that you learned from this exercise that those who might be evaluating and thinking about using Pulse could benefit from?
Kousen: It was pretty straightforward for everybody to use. We had to make sure that people using it had fast download speeds, but that had nothing to do with Pulse. That had to do with the size of Eclipse.
Of course, whenever you get students downloading configurations, they have this inevitable urge to start experimenting, trying to add in plug-ins, and replacing things. I did have one case where the configuration got pretty corrupted, not due to anything that they did in Pulse, but because of plug-ins they added externally. We just basically scrapped that one and started over and it came out very nicely. So, that was very helpful in that case.
Gardner: Michael, as you are listening to this, is there anything that jumps out at you in terms of understanding of Eclipse and its popularity, and then dealing with complexity that you could share?
Cote: I like the comparison of the Eclipse development world versus Visual Studio -- versus getting the one thing -- because it is very accurate. That's sort of the ethos of Java: maximum "choosability," if you will. It's one of those things in development that takes a long time to accept, but having lots of options is often more expensive and burdensome than having fewer options. That said, you want to make sure the fewer options you do have are good ones.
In every development team I have been involved with in my previous lives, as it were, anytime someone new comes onto the team, it’s always an extremely difficult issue just to get their tool chain set up correctly.
Having something wrong in the tool chain -- the shared tools that the whole team uses -- can really be quite disruptive. That's because your assumptions about how your team members are going about solving problems are slightly wrong, and they may not be working in the fully optimized way your project is built around.
I guess you could call that the commercial analog of that tediousness of setting up the configuration in an educational or training environment. It's difficult to just give someone a printout and say, "Go set up your stuff like this," because you are always missing little bits, and there is a lot of nuance in how things are exactly set up in the tool chains.
Gardner: Back to you, Todd at Genuitec. Have there been any surprises since you brought Pulse to market in how it’s being used? Are there unanticipated consequences that you would like to share -- the good ones anyway?
Williams: It's been interesting. We have seen a good number of people using Pulse, the way we anticipated it, sharing their tool stacks, and publishing them for their teams.
There seems to be a lot of people that use it privately. They don't share it with anyone, but they use it to manage multiple development profiles. So they might do C++ development one day and Java development the next, or what have you, and they like to keep custom tool stacks just for those things.
Even though they are kind of an island, and we made Pulse to share amongst teams, they find a lot of value in it, just to keep everything tidy.
Cote: If I can add to that -- I personally haven't seen people using Pulse like this, because I haven't stuck my head into a developer shop while Pulse has been around -- we would typically have a problem where, across different versions of the project you are working on, you would have your IDE or your tools set up differently.
So, if you wanted to switch very quickly between those different versions -- for example, to do some debugging in an old version for a support issue -- that switching cost between the two setups is a big part of going to fix an older version of something.
Nowadays, you have a lot of virtualization, so you can make this step a little easier, but you end up doing absurd things, like just having machines dedicated to specific versions of the software that you are working on.
If you can more easily switch between the profiles and the configurations that you have, then you can hopefully make it easier and less tedious to support these older products that you tend to have a lot of requests to support.
Gardner: Ken, is there some advice that you might offer to those, either in academia or in the training field -- things they might want to consider as they evaluate such tools as Pulse?
Kousen: I agree with what the others were saying about the idea of setting up a series of alternative profiles that match the environment you are going to be working in.
As Michael and Todd were saying that, I realized that I actually do that myself. I have a J2EE or Java EE profile, and I also have a regular Java profile for when I am working on different things, because there are certain shortcuts I can use in plain Java that won't conflict with anything in Java EE.
Eventually, when Groovy and Grails support is added to Pulse, I hope it will have a configuration environment for that as well. The idea of having a variety of profiles that can each be used at the appropriate time is very helpful.
I know that in a training environment we will definitely try to do that. We will be setting up alternative profiles that can be shared in a particular training class.
Academically, I like to leave things a bit more free form, although I agree that the initial setup is very helpful, because if the students don't have any feel for the environment at all, getting them over that initial hurdle is very, very helpful. After that, letting them experiment is always very, very useful. So that's good.
Gardner: Todd, Ken mentioned support for Ruby, dynamic languages, Groovy. Can you tip your hand a little bit and let us know what you've got in mind in that regard?
Williams: Actually, all of those things are in the Pulse catalog right now. Sometimes they are hard to find, because it's kind of big, but we added search to it to help you run them down. But, there are actually multiple Ruby solutions; I know Groovy is in there.
If a particular solution that you like isn't in there though, it's relatively straightforward to add it, not to the catalog, but you can still add it very, very easily to any of your profiles, either locally or shared.
So, the catalog is like a really good starting point that we try to keep up to date with what our users ask us to put into it. On the other hand, if it contains everything in the world, it gets a bit unwieldy as well.
Kousen: Dana, can I comment on that? I spoke too quickly on that issue. There is a Groovy plug-in in there. I was actually very pleased to see that, because I was concerned.
I've been using the Groovy plug-in for a while, and I wasn't sure whether that was going to be in the catalog at all. It did take me a while to find it, because it was filed under an area that I wasn't expecting, but once I put it in the search box, then it showed up immediately.
The only thing about Grails is that there isn't really a dedicated Grails plug-in yet, and the Groovy plug-in is really moving towards something like that. So, when that becomes available, I'm sure it will be incorporated into Pulse.
By the way, another thing that is very useful is that when I am browsing inside Pulse, just looking around to see what sort of components have been added, it's interesting to see what turns out to be popular -- things that I hadn't really anticipated.
For example, I have been using Subclipse for the Subversion plug-in for a couple of years now. In browsing into the Version Control category I see that there are various other Subversion plug-ins as well and also others coming down the line. So that was another capability that I didn't anticipate and found rather interesting.
Gardner: Todd, looking forward a little bit, it seems that this profile information, while useful in a tactical sense, might actually have some strategic value too.
I'm thinking about the metadata that might be available through a profile and a definition of what a developer wants to do on an activity, behavioral, or pattern basis. Then, applying that when they do a search -- perhaps refining the search based on their profile at that time -- or using the profile when they do testing, builds, and other aspects of lifecycle management for development.
Have you taken this a step further, where we could take these profiles and use them in a more strategic fashion, or is that something you are looking at?
Williams: That's a great question, Dana. Of course, we have a very large product plan for Pulse. We've only had it out since November, but you're right. We do have a lot of profile information, so if we chose to mine that data, we could find correlations between the tools that people use, the way some shopping websites do.
"People who use this product also like this one," and we could make ad hoc recommendations. For example, if it seems like most people who use Subversion also use Ruby, you just point them to new things in the catalog. It's a low-level way to add some value. So there are certainly some things under consideration.
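[Editor's note: The "people who use X also use Y" idea Todd sketches reduces to simple co-occurrence counting over profiles. Here is a minimal sketch in Java, assuming each profile is just the set of plug-in IDs it contains; the plug-in names are placeholders, not Pulse's real data model.]

    import java.util.*;

    public class CoOccurrence {
        // Count how often other plug-ins appear in profiles containing 'target'
        static Map<String, Integer> alsoInstalled(List<Set<String>> profiles, String target) {
            Map<String, Integer> counts = new TreeMap<>();
            for (Set<String> profile : profiles) {
                if (!profile.contains(target)) continue;
                for (String plugin : profile) {
                    if (!plugin.equals(target)) counts.merge(plugin, 1, Integer::sum);
                }
            }
            return counts; // rank by count for a recommendation list
        }

        public static void main(String[] args) {
            List<Set<String>> profiles = List.of(
                Set.of("subclipse", "groovy"),
                Set.of("subclipse", "ruby"),
                Set.of("groovy"));
            System.out.println(alsoInstalled(profiles, "subclipse")); // {groovy=1, ruby=1}
        }
    }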
Gardner: Michael, what do you think of that, taking more profile information; metadata about behaviors, uses, pattern of work, and then applying that to some of the other larger lifecycle activities in development?
Cote: Things like that work out really well when you have the proliferation of choice that we were talking about earlier, where the rating systems can always be gamed and everything.
This thing is a small enough subset that it doesn't happen, but just seeing sheer quantity-wise and rating-wise what people are using, helps you evaluate. I am probably making this figure up, but if there are 10 different unified modeling language (UML) plug-ins for Eclipse, then you need to somehow narrow down to the ones that are going to work out well for you.
The fastest and best way to get to that is really just to see which one is being used the most, because chances are, if people are still using it actively, it's going to be a popular one. People are pretty quick to dump plug-ins that don't work well for them.
So there is a place for capturing the metadata or the usage data that's floating around with these things. That's the kind of thing developers usually only get a chance to figure out face to face with someone at a conference, or at other sorts of events that don't happen as frequently as you might want, simply to figure out which plug-in to use.
Gardner: Any time you can take personalization information and automate that or refine searches and activities is certainly a productivity improvement, and Pulse really strikes me as setting up the opportunity to do that.
Cote: Absolutely.
Gardner: Alright. Let's start wrapping up a little bit. Ken, any last thoughts as a technical trainer about where you would like to see this sort of capability go?
Kousen: I'm not exactly sure where I will be able to take advantage of it. Let me rephrase that. I think the current Pulse configuration is already very useful, and I'm not sure what else I need in order to start trying to incorporate it into an environment.
The only other issue that I wind up having in a training environment is setting things like environment variables onto the operating system. If there is some way we can get that into Eclipse for example, or rather into Pulse, rather than having to do it on the operating system itself -- maybe through the tools or whatever -- then that would be helpful. But I don't know. Right now, I think the situation is pretty good. I can't think of anything else concrete that I would want to add right there.
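[Editor's note: One standard workaround for part of what Ken asks for -- Eclipse reads eclipse.ini at startup, and everything after -vmargs is passed to the JVM, so per-installation JVM system properties can stand in for many uses of OS environment variables (they are read with System.getProperty, not System.getenv). The property names below are hypothetical, and note that eclipse.ini does not allow comment lines.]

    -vmargs
    -Xmx512m
    -Dcourse.home=C:/training/labs
    -Dcourse.server.url=http://localhost:8080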
Gardner: Okay. Todd, thoughts about what educators and trainers should be considering as they look at something like Pulse, and how to exploit it and leverage it.
Williams: One thing that came to my mind, from a student's perspective, is that the integrated development environments (IDEs) available right now -- even the various configurations of Eclipse -- are really made for professionals. When you take something like MyEclipse, there is just so much in it.
We need the ability to strip down the IDE to only what is needed for a particular exercise. For example, you could set up a profile for the first exercise of the class with just the limited set of tools a new student would need to get their hands on. It limits the confusion factor. When you do the next exercise, you could easily update the profile and add a few additional tools to it.
So, you have a kind of selective discovery of additional tools and capabilities that coincides with the level of expertise the students are developing, as they go up the learning curve in a particular course. I was just wondering: is that the kind of thing we've now enabled through a technology like Pulse -- something that makes delivering that straightforward, versus what had to be done before?
Gardner: Just for those interested in getting started -- Pulse is delivered over the network. How do people access it, how do they find it, and how do they get started?
Williams: Sure. The Pulse website is www.poweredbypulse.com. There is a little 5 MB installer that you download and start running. If anyone is out in academia, and they want to use Pulse in a setting for a course, please fill out the contact page on the Website. Let us know, and we will be glad to help you with that. We really want to see usage in academia grow. We think it’s very useful. It's a free service, so please let us know, and we will be glad to help.
Gardner: Terrific. I want to thank our panelists for helping us dig a little bit into training issues, and some of the solutions that are welling up in the market to address them. We have been talking with Michael Cote, an analyst at RedMonk. Thank you, Michael.
Cote: Absolutely.
Gardner: Todd Williams, vice president of technology, Genuitec. Appreciate your input, Todd.
Williams: Thanks, Dana, I have enjoyed it.
Gardner: Ken Kousen, independent technical trainer, president of Kousen IT, Inc., and adjunct professor at Rensselaer. Appreciate your experience and input, Ken.
Kousen: Oh, you are welcome, no problem.
Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.
Listen to the podcast. Sponsor: Genuitec.
Transcript of BriefingsDirect podcast on Java training and education with Genuitec Pulse for Java and Eclipse. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Listen to the podcast. Sponsor: Genuitec.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.
Today, a sponsored podcast discussion about the complexity around Java training and education. The development toolset, the plug-ins, the community are all very fast-moving targets. It's difficult for trainers, educators -- not to mention the students and budding developers themselves -- to get a full grasp of what's expected of them, and then to find resources that are up-to-date and timely.
We're going to be discussing with some experts how better to organize Java training and education. We're going to look at the Eclipse profiles that can be organized and coordinated using Pulse, a Genuitec-organized tools configuration network function. We're going to learn about how organizations can organize their training, so that students can better anticipate what's expected of them in the real world.
To help us understand some of these issues and work toward some solutions, we're joined by Michael Cote, an analyst with RedMonk. Welcome to the show, Michael.
Michael Cote: Hello, everybody.
Gardner: We're also joined by Todd Williams, vice president of technology at Genuitec. Welcome to the show, Todd.
Todd Williams: Thanks very much, Dana.
Gardner: Lastly, we're joined by Ken Kousen, an independent technical trainer and president of Kousen IT, Inc. He's also an adjunct professor at Rensselaer Polytechnic Institute. Welcome, Ken.
Ken Kousen: Hi, Dana, glad to be here.
Gardner: As I mentioned, complexity and moving targets are part of the problem, but it also seems that there is a disconnect between book knowledge or training knowledge that one gathers about development and what happens in the real world, what goes on with code being checked in and checked out, and how teams are organized. It seems difficult for someone to anticipate what's really going happen.
Let's go to Ken first. Ken, what is the gap, from your perspective, between what students and budding developers get through training and in university settings, and then what's often expected of them in the real world?
Kousen: It's interesting. The gap between what's taught in academia and what's taught in the real world is very large, actually. The classes I teach tend to be in a master's level program, and I teach a couple of classes in developing enterprise applications that are specifically constructed to address this gap.
Academia will talk about abstractions of data structures, algorithms, and different techniques for doing things. Then, when people get into the real world, they have no idea what Spring, Hibernate, or any of the other issues really are.
It's also interesting that a lot of developments in this field tend to flow from the working professionals toward academia, rather than the other way around, which is what you would find in engineering, when I used to be in that area.
Gardner: Todd, when you're doing hiring or are doing development for business, and you're talking to your customers -- folks that use MyEclipse and your services for training and consulting -- are you seeing a worsening situation in terms of how to acquire qualified labor, or do people have a pretty good sense of where to go to find good Java developers?
Williams: Finding quality employees is always a challenge, and probably always will be. Part of what I see as being difficult, especially in the Java and Enterprise Java market, is the huge number of technologies that are being employed at different levels. Each company picks its own type of stack.
Ken mentioned Spring and Hibernate. There is also Java transaction API (JTA), Java server faces (JSF), and Struts, Web framework and persistence technologies, and application servers that are in use. Finding employees that fit with what you are trying to do today, with an eye toward being able to mature them into where you are going tomorrow, is probably going to always be the concern.
Gardner: Now, what's been going on with development, not just the function, but teams, the collaboration, agile types of activities, Scrum? It used to be that people could specialize, stay in one little niche, but now the "master of all trades" seems to be more in demand.
Let's go to Michael. Michael, is development fundamentally changing? When we think of developers, do we need to recast how we imagine them or conceive of them?
Cote: Yes. I think it's fair even to go to the extreme and say absolutely. You look at the employment patterns that most developers find themselves in, and they are not really working at some place three, five, ten, even twenty years. It's not realistic. So, specializing in some technology that essentially binds you to a job isn't really an effective way to make sure you can pay your bills for the rest of your life.
You have to be able to pick up quickly any given technology or any stack, whether it’s new or old. Every company has their own stack that they are developing. You also have to remember that there is plenty of old existing software out there that no one really talks about anymore. People need to maintain and take care of it.
So, whether you are learning a new technology or an old technology, the role of the developer now, much more so in the past, is to be more of a generalist who can quickly learn anything without support from their employer.
You're not going to get a lot of slack to learn things in a given time, paid training, and things like that. You're pretty much left on your own, or there are always cheaper alternatives to go to.
So the heat is really on developers to be Type A people who are always seeking out the best option.
Gardner: Alright. Well, now that we have scared anyone from ever wanting to be a developer, Ken, help us get a little bit closer to earth. What can students do, what can professors or instructors do, to help get more of this real-world perspective into what they do in these courses and in this preparation?
Kousen: It's interesting that while the various tools and technologies evolve, some of the basic principles always hold pretty fast. I've taught this class several times and I have to say that every time I've taught it, it's been very, very different, but the overall architectural issues are pretty constant.
Plus, what seems to follow in the industry are various trends, like an increased emphasis on testing, for example, the recent rise in dynamic languages, and things like that. The idea of continually trying to follow what's going on in the marketplace and seeing what's interesting seems to be very helpful.
I also emphasize to the students that a good source of information is to find some of the better open-source projects, and not necessarily join them, use them, or do anything with them, but follow what they do and see those projects as the communal efforts of some of the best developers in the world.
So, if they all say, "Oh yeah, we obviously have to have this source-control mechanism," then maybe that's an interesting thing that should be looked at, or this particular bug reporting tool, or whatever. I often emphasis that particular direction as well.
Gardner: How about that, Todd? Are these open-source communities, these chat rooms, these forums, the real, practical lab that the students and developer should be looking towards?
Williams: I think to a degree that it's certainly a practical lab that students have easy access to. Obviously, in open source, whether it’s something like the Eclipse Foundation, Apache, or what have you, they make a very explicit effort to communicate what they are doing through either bug reports, mail lists, and discussion groups. So, it's an easy way to get involved as just a monitor of what's going on. I think you could learn quite a bit from just seeing how the interactions play out.
That's not exactly the same type of environment they would see inside closed-wall corporate development, simply because the goals are different. Less emphasis is put on external communications and more emphasis is put on getting quality software out the door extremely quickly. But, there are a lot of very good techniques and communication patterns to be learned in the open-source communities.
Gardner: Now, when we go to community, that also means choice, which is a good thing. But, there is also a downside to choice. There are a lot of variables, many different things to look at. Tell us a little bit about the importance of profiling, and when you have got many new plug-ins to choose from, and you've got lots of commentary and social media being generated about what to use and what not to use.
Give us, Todd, if you could, some idea of the problem set that you saw in the marketplace a couple of years ago when you were thinking about Pulse.
Williams: Let me take a step back and quickly explain what Pulse is for those who aren't familiar with it. We built a general-purpose software provisioning system that right now we are targeting at the Eclipse market, specifically Eclipse developers.
For our initial release last November, we focused on providing a simple, intuitive way that you could install, update, and share custom configurations with Eclipse-based tools.
In Pulse 2, which is our current release, we have extended those capabilities to address what we like to call team-synchronization problems. That includes not only customized tool stacks, but also things like workspace project configurations and common preference settings.
Now you can have a team that stays effectively in lock step with both their tools and their workspaces and preferences.
What drove us to build something like this were a number of things. If you look at the Eclipse market, where we have been for a number of years, there are literally thousands of products and plug-ins for Eclipse. If you just want to go out and take a survey of them, or try some of them, it's a very daunting process for most people.
It starts out when you download Eclipse, go find some plug-ins, possibly looking into Eclipse Plug-in Central, find those update sites, type them in, download the plug-ins, and try them. This pattern repeats for quite some time, while the developer goes out and tries to figure out which of the plug-ins are good and which ones aren't.
With Pulse, we put these very popular, well-researched plug-ins into a catalog, so that you can configure these types of tool stacks with drag-and-drop. So, it's very easy to try new things. We also bring in some of the social aspects; pulling in the rankings and descriptions from other sources like Eclipse Plug-in Central and those types of things.
So, within Pulse, you have a very easy way to start out with some base technology stacks for certain kinds of development and you can easily augment them over time and then share them with others.
Gardner: Ken, help us understand how this can be used in the training and/or academic setting? What is it about Pulse that brings in more of the real world, and anticipates what choices developers are going to have once they get into the nitty-gritty of coding?
Kousen: Looking at academic and training settings, they are a little bit different. In a training setting, one of the real challenges the training classes face every time is getting the initial classroom set up correct. That is often very involved and complicated, because a lot of the tools involved are somewhat dependent on each other and dependent on environment variables and things like that.
So, trying to set up standard Pulse configurations and then being able to set up a classroom using those shared deployments is a very interesting opportunity. I haven't had the chance to do it yet, but I have definitely been talking to some training providers about giving that a shot.
I did try it in a classroom, and it's rather interesting, because one of the students that I had recently this year was coming from the Microsoft environment. I get a very common experience with Microsoft people, in that they are always overwhelmed by the fact, as Todd said, there are so many choices for everything. For Microsoft, there is always exactly one choice, and that choice costs $400.
I tried to tell them that here we have many, many choices, and the correct choice, or the most popular choice changes all the time. It can be very time consuming and overwhelming for them to try to decide which ones to use in which circumstances.
So, I set up a couple of configurations that I was able to share with the students. Once they were able to register and download them, they were able to get everything in a self-contained environment.
We found that pretty helpful, although I've got to say that this year the class size was sufficiently small, so that I don't know that we really got the same benefit we would get in a large classroom, where there would be many, many setup issues to deal with.
Gardner: So, almost mimicking a collaboration activity in a development setting, but in the classroom.
Kousen: Exactly.
Gardner: Are there any particular things that you learned from this exercise that those who might be evaluating and thinking about using Pulse could benefit from?
Kousen: It was pretty straightforward for everybody to use. We had to make sure that people using it had fast download speeds, but that had nothing to do with Pulse. That had to do with the size of Eclipse.
Of course, whenever you get students downloading configurations, they have this inevitable urge to start experimenting, trying to add in plug-ins, and replacing things. I did have one case where the configuration got pretty corrupted, not due to anything that they did in Pulse, but because of plug-ins they added externally. We just basically scrapped that one and started over and it came out very nicely. So, that was very helpful in that case.
Gardner: Michael, as you are listening to this, is there anything that jumps out at you in terms of understanding of Eclipse and its popularity, and then dealing with complexity that you could share?
Cote: I like the comparison of the Eclipse development world, versus visual studio, versus getting the one thing, because it is very accurate. That's sort of the ethos of Java -- maximum "choosability," if you will. It's one of these things in development that takes a long time to accept, but having lots of options is often more expensive and burdensome than having fewer options. Now that said, you want to make sure that you have good fewer options.
In every development team I have been involved with in my previous lives, as it were, anytime someone new comes onto the team, it’s always an extremely difficult issue just to get their tool chain setup correctly.
Having something wrong in the tool chain, the shared tools that the whole team uses, can really be quite disruptive. That's because the way that you assume your team members are going about solving problems is slightly wrong, and so they may not have the fully optimized way that your project is based around.
I guess you could call that the commercial application of that tediousness of setting up the configuration in more of an educational or a training environment. It's difficult to just sort of give someone a print out and say, go setup your stuff like this, because you are always missing little bits, and there is a lot of nuance in how things are exactly setup in the tool chains.
Gardner: Back to you, Todd at Genuitec. Have there been any surprises since you brought Pulse to market in how it’s being used? Are there unanticipated consequences that you would like to share -- the good ones anyway?
Williams: It's been interesting. We have seen a good number of people using Pulse, the way we anticipated it, sharing their tool stacks, and publishing them for their teams.
There seems to be a lot of people that use it privately. They don't share it with anyone, but they use it to manage multiple development profiles. So they might do C++ development one day and Java development the next, or what have you, and they like to keep custom tool stacks just for those things.
Even though they are kind of an island, and we made Pulse to share amongst teams, they find a lot of value in it, just to keep everything tidy.
Cote: If I can add to that, I personally haven't seen people using Pulse like this, because I haven't stuck my head in a developer shop when Pulse has been around. We would typically have a problem where -- across different versions of the project you are working on -- you would have your ID or your tools set up differently.
So, if you wanted to very quickly switch between those different versions, for example, to support or do some debugging in an old version, if there was some support issue, that switching cost between the two setups is a big part of going to fix an older version of something.
Nowadays, you have a lot of virtualization, so you can make this step a little easier, but you end up doing absurd things, like just having machines dedicated to specific versions of the software that you are working on.
If you can more easily switch between the profiles and the configurations that you have, then you can hopefully make it easier and less tedious to support these older products that you tend to have a lot of requests to support.
Gardner: Ken, did you see some advice that you might offer to those, either in academia or in the training field, things that they might want to consider as they are evaluating such things as Pulse?
Kousen: I agree with what the others were saying about the idea of setting up a series of alternative profiles that match the environment you are going to be working in.
I realized, as Michael and Todd were saying that, that I actually do that myself. I have a J2EE profile or Java EE profile, and I also have a regular Java profile, when I am working on different things, because there are certain shortcuts that won't conflict with anything in Java EE, if I use it in Java.
Eventually, I hope when you wind up adding Grails support or Groovy and Grails support to Pulse, it will probably have a configuration environment for that as well. The idea of having a variety of profiles that could each be used in its given time is very helpful.
I know that in a training environment we will definitely try to do that. We will be setting up alternative profiles that can be shared in a particular training class.
Academically, I like to leave things a bit more free form, although I agree that the initial setup is very helpful, because if the students don't have any feel for the environment at all, getting them over that initial hurdle is very, very helpful. After that, letting them experiment is always very, very useful. So that's good.
Gardner: Todd, Ken mentioned support for Ruby, dynamic languages, Groovy. Can you tip your hand a little bit and let us know what you've got in mind in that regard?
Williams: Actually, all of those things are in the Pulse catalog right now. Sometimes they are hard to find, because it's kind of big, but we added search to it to help you run them down. But, there are actually multiple Ruby solutions; I know Groovy is in there.
If a particular solution that you like isn't in there though, it's relatively straightforward to add it, not to the catalog, but you can still add it very, very easily to any of your profiles, either locally or shared.
So, the catalog is like a really good starting point that we try to keep up to date with what our users ask us to put into it. On the other hand, if it contains everything in the world, it gets a bit unwieldy as well.
Kousen: Dana, can I comment on that? I did speak very quickly on that issue. There is a Groovy plug-in in there. I was actually very pleased to see that, because I was concerned.
I've been using the Groovy plug-in for a while, and I wasn't sure whether that was going to be in the catalog at all. It did take me a while to find it, because it was filed under an area that I wasn't expecting, but once I put it in the search box, then it showed up immediately.
The only thing about Grails is that there isn't really a dedicated Grails plug-in yet, and the Groovy plug-in is really moving towards something like that. So, when that becomes available, I'm sure it will be incorporated into Pulse.
By the way, another issue that is very useful is that when I am browsing inside Pulse, just looking around to see what sort of components have been added, it's interesting to see what turns out to be popular; things that I hadn't really anticipated.
For example, I have been using Subclipse for the Subversion plug-in for a couple of years now. In browsing into the Version Control category I see that there are various other Subversion plug-ins as well and also others coming down the line. So that was another capability that I didn't anticipate and found rather interesting.
Gardner: Todd, looking forward a little bit, it seems that this profile information, while useful in a tactical sense, might actually have some strategic value too.
I'm thinking about the metadata that might be available through a profile: a definition of what a developer wants to do on an activity, behavioral, or pattern basis. Then, that could be applied when they do a search, perhaps refining the search based on their profile at the time, or when they do testing, builds, and other aspects of lifecycle management for development.
Have you taken this a step further, where we could take these profiles and use them in a more strategic fashion, or is that something you are looking at?
Williams: That's a great question, Dana. Of course, we have a very large product plan for Pulse. We've only had it out since November, but you're right, we do have a lot of profile information. If we chose to mine that data, we could find correlations between the tools that people use, much as the shopping websites do.
"People who use this product also like this one," and we could make ad hoc recommendations. For example, if it turns out that most people who use Subversion also use Ruby, we could point them to related things in the catalog. It's a low-key way to add some value. So there are certainly some things under consideration.
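[Editor's note: a minimal Java sketch of the kind of co-occurrence mining Williams describes. Everything here is invented for illustration; PluginRecommender, recommendFor, and the plug-in IDs are not part of any actual Pulse API.]

    import java.util.*;
    import java.util.stream.*;

    // Hypothetical sketch of "people who use X also use Y" over Pulse-style
    // profiles. A profile is modeled simply as the set of plug-in IDs a user runs.
    public class PluginRecommender {

        public static List<String> recommendFor(String plugin,
                                                Collection<Set<String>> profiles,
                                                int topN) {
            Map<String, Integer> coCounts = new HashMap<>();
            for (Set<String> profile : profiles) {
                if (!profile.contains(plugin)) continue;   // only look at users of 'plugin'
                for (String other : profile) {
                    if (!other.equals(plugin)) {
                        coCounts.merge(other, 1, Integer::sum);
                    }
                }
            }
            // Rank by how often each plug-in co-occurs with the given one.
            return coCounts.entrySet().stream()
                    .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                    .limit(topN)
                    .map(Map.Entry::getKey)
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<Set<String>> profiles = List.of(
                    Set.of("subclipse", "groovy", "mylyn"),
                    Set.of("subclipse", "groovy"),
                    Set.of("subclipse", "findbugs"));
            // groovy co-occurs with subclipse most often, so it ranks first.
            System.out.println(recommendFor("subclipse", profiles, 2));
        }
    }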
Gardner: Michael, what do you think of that, taking more profile information; metadata about behaviors, uses, pattern of work, and then applying that to some of the other larger lifecycle activities in development?
Cote: Things like that work out really well when you have the proliferation of choice we were talking about earlier, where rating systems can always be gamed.
Pulse's audience is a small enough subset that the gaming doesn't really happen, and just seeing, in sheer quantity and rating terms, what people are actually using helps you evaluate. I'm probably making this figure up, but if there are 10 different unified modeling language (UML) plug-ins for Eclipse, you need some way to narrow down to the ones that are going to work well for you.
The fastest and best way to get there is really just to see which one is being used the most, because chances are, if people are still using it actively, it's going to be a popular one. People are pretty quick to dump plug-ins that don't work well for them.
So there is a place for capturing the metadata, or the usage data, that goes along with these things. That's the kind of information developers usually only get to figure out face to face with someone at a conference, or at other events that don't happen as frequently as you might want, simply to figure out which plug-in to use.
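[Editor's note: a hedged sketch of Cote's point, ranking plug-ins by recent active use so abandoned installs don't inflate the count. The UsageEvent type and plug-in IDs are made up for illustration; a real implementation would draw on Pulse's own usage data.]

    import java.time.Duration;
    import java.time.Instant;
    import java.util.*;
    import java.util.stream.*;

    // Illustrative only: rank plug-ins by how many usage events fall inside
    // a recent window, since people quickly dump plug-ins that don't work.
    public class PopularityRanker {
        record UsageEvent(String pluginId, Instant lastUsed) {}

        static List<String> rankByActiveUse(List<UsageEvent> events, Duration window) {
            Instant cutoff = Instant.now().minus(window);
            return events.stream()
                    .filter(e -> e.lastUsed().isAfter(cutoff))   // drop dormant installs
                    .collect(Collectors.groupingBy(UsageEvent::pluginId, Collectors.counting()))
                    .entrySet().stream()
                    .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                    .map(Map.Entry::getKey)
                    .collect(Collectors.toList());
        }

        public static void main(String[] args) {
            List<UsageEvent> events = List.of(
                    new UsageEvent("uml-a", Instant.now()),
                    new UsageEvent("uml-a", Instant.now()),
                    new UsageEvent("uml-b", Instant.now().minus(Duration.ofDays(400))));
            // uml-b was last used outside the 90-day window, so only uml-a ranks.
            System.out.println(rankByActiveUse(events, Duration.ofDays(90)));
        }
    }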
Gardner: Any time you can take personalization information and use it to automate or refine searches and activities, that's certainly a productivity improvement, and Pulse really strikes me as setting up the opportunity to do that.
Cote: Absolutely.
Gardner: Alright. Let's start wrapping up a little bit. Ken, any last thoughts as a technical trainer about where you would like to see this sort of capability go?
Kousen: I'm not exactly sure where else I will be able to take advantage of it. Let me rephrase that: I think the current Pulse configuration is already very useful, and I'm not sure what else I need in order to start incorporating it into an environment.
The only other issue I run into in a training environment is setting things like environment variables on the operating system. If there were some way to do that inside Eclipse, or rather inside Pulse, rather than having to do it on the operating system itself -- maybe through the tools -- that would be helpful. Right now, though, I think the situation is pretty good. I can't think of anything else concrete that I would want to add.
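[Editor's note: one common workaround for Kousen's concern is to avoid operating-system environment variables entirely. Standard Eclipse launch configurations can supply per-launch settings, and course code can fall back from an environment variable to a -D system property. A minimal sketch of that fallback pattern; the names COURSE_HOME and course.home are hypothetical.]

    // Training-friendly configuration lookup: prefer an OS environment
    // variable, but fall back to a JVM system property (-Dcourse.home=...)
    // that can be set in an Eclipse launch configuration instead of the OS.
    public class CourseConfig {

        static String lookup(String envName, String propName, String defaultValue) {
            String value = System.getenv(envName);
            if (value == null || value.isEmpty()) {
                value = System.getProperty(propName, defaultValue);
            }
            return value;
        }

        public static void main(String[] args) {
            // e.g. run with: java -Dcourse.home=/tmp/course CourseConfig
            System.out.println(lookup("COURSE_HOME", "course.home", "./course"));
        }
    }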
Gardner: Okay. Todd, any thoughts about what educators and trainers should be considering as they look at something like Pulse, and how to exploit and leverage it?
Williams: One thing that comes to mind, from a student's perspective, is that the integrated development environments (IDEs) available right now, even the various configurations of Eclipse, are really made for professionals. Something like MyEclipse just has so much in it.
We need the ability to strip the IDE down to only what is needed for a particular exercise. For example, you could set up a profile for the first exercise of the class with just the limited set of tools a new student would need to get their hands on. That limits the confusion factor. For the next exercise, you could easily update the profile and add a few additional tools to it.
So, you have a kind of selective discovery of additional tools and capabilities that coincides with the expertise the students are developing as they move up the learning curve in a particular course. That's exactly the kind of thing a technology like Pulse now enables and makes straightforward to deliver, compared with what had to be done before.
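[Editor's note: a hedged sketch of that "selective discovery" idea, where each exercise's tool set extends the previous one. The modeling below and the tool IDs are invented for illustration; this is not Pulse's actual profile format.]

    import java.util.*;

    // Hypothetical model of incremental training profiles: the profile for
    // each exercise is the previous profile plus a few newly introduced tools.
    public class TrainingProfiles {

        static Set<String> exercise(Set<String> previous, String... newTools) {
            Set<String> next = new LinkedHashSet<>(previous);
            next.addAll(Arrays.asList(newTools));
            return Collections.unmodifiableSet(next);
        }

        public static void main(String[] args) {
            Set<String> ex1 = exercise(Set.of(), "java-editor", "junit");
            Set<String> ex2 = exercise(ex1, "debugger");
            Set<String> ex3 = exercise(ex2, "subversion-client", "profiler");
            System.out.println(ex3); // the tool set grows with the student's expertise
        }
    }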
Gardner: For those interested in getting started, Pulse is delivered over the network. How do people access it, how do they find it, and how do they get started?
Williams: Sure. The Pulse website is www.poweredbypulse.com. There is a small 5 MB installer that you download and run. If anyone out in academia wants to use Pulse for a course, please fill out the contact page on the website and let us know; we will be glad to help with that. We really want to see usage in academia grow. We think it's very useful, and it's a free service.
Gardner: Terrific. I want to thank our panelists for helping us dig a little bit into training issues, and some of the solutions that are welling up in the market to address them. We have been talking with Michael Cote, an analyst at RedMonk. Thank you, Michael.
Cote: Absolutely.
Gardner: Todd Williams, vice president of technology, Genuitec. Appreciate your input, Todd.
Williams: Thanks, Dana, I have enjoyed it.
Gardner: Ken Kousen, independent technical trainer, president of Kousen IT, Inc., and adjunct professor at Rensselaer. Appreciate your experience and input, Ken.
Kousen: Oh, you are welcome, no problem.
Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.
Listen to the podcast. Sponsor: Genuitec.
Transcript of BriefingsDirect podcast on Java training and education with Genuitec Pulse for Java and Eclipse. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.