Transcript of BriefingsDirect podcast recorded at the Oracle OpenWorld Conference in San Francisco the week of Sept. 22, 2008.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to a special BriefingsDirect podcast recorded at the Oracle OpenWorld conference in San Francisco. We are here the week of Sept. 22, 2008. This HP Live! Podcast is sponsored by Hewlett-Packard (HP) and distributed through the BriefingsDirect Network.
We welcome John Santaferraro, director of marketing for HP’s Business Intelligence (BI) portfolio. We're going to be talking about the intersection of BI in the context of not just business value and outcomes, but in the context of Oracle, a major data, applications, middleware, and BI provider, and HP as a prominent systems provider, as well as a prominent BI services provider.
We're going to try to figure out how this plays together. Then, we'll look toward the future of BI in the context of some major trends, such as service-oriented architecture (SOA), master data management (MDM), and bringing more automation to the delivery of intelligence from systems and data to those users who need it at the front lines of business. So I want to welcome John Santaferraro to the show.
John Santaferraro: Glad to be here, Dana. Thanks.
Gardner: First, let's set the stage and get a level-set about the Oracle-HP relationship vis-à-vis BI, because we're here at Oracle OpenWorld. Oracle is in the software side of things predominantly. You’ve got both systems and services. Perhaps you could paint a picture of how this fits together.
Santaferraro: It’s been a great and long relationship that we've had with Oracle since they were first building and releasing a database. We had folks in our labs who understood this idea of databases and data warehousing, and they were actually building and architecting our systems in a special way, with things like massive I/O and massive memory -- the kinds of things you need in a data warehouse and query environment.
Back in those days, we were actually building our systems to handle data warehouse workloads, when everybody else was still focused only on the regular online transaction processing (OLTP) kinds of transactions in the enterprise resource planning (ERP) systems.
Because of that natural connection that we had with what was going in our labs, and what Oracle was doing, we have from the very start built a tight relationship with them from an engineering perspective and a good market perspective. Oracle is very clearly a leader in data warehousing and BI, and we augment that with the systems that we have developed to run in an optimized way with Oracle, as well as some other services that we bring to bear.
We recently bought a company called Knightsbridge, which was known as the go-to company for anybody who was doing data warehousing or BI and who ran into problems that nobody else could solve. Everybody knew that if you went to Knightsbridge, there were people there who could solve those problems. So it’s great to have them at the center of our global BI services organization. This company has taken their methodology and their expertise and has transferred it to folks around the world.
The other great thing about the acquisition of Knightsbridge is that they have real deep expertise in their various vertical markets -- health and life sciences, communications, financial services, retail, and manufacturing. Because of that, the Oracle-HP relationship is strengthening.
We are more than a systems provider and more than a services provider. We are delivering real solutions to our customers. We can come alongside of anybody, talk to them at the level of the business, and be able to build data warehousing and BI solutions that are mapped to the business, not just technology.
Gardner: I just got back from listening to Thomas Kurian at Oracle describe their full portfolio, and they’ve really put together quite a full lifecycle approach around the gathering, cleansing, and organizing of data, integrating it from disparate sources, managing the scale of huge loads, making this closer to a real-time value. They're also exporting middleware for application integration, creating the BI analytics, and then delivering that back out to those business applications.
It’s quite an impressive portfolio. They've been putting it together for quite some time, and they’re also quite proud of the metrics around the performance, and getting closer to that real-time nirvana. Tell us a little bit about what Oracle has done from the lifecycle perspective, and what you think are the important aspects of the services side of making organizations readily able to exploit those technologies.
Santaferraro: What you described is very much a product lifecycle in the data warehouse and BI space. Along with that, you can go in two directions. Along with the product lifecycle, there is actually a system lifecycle as well. Anytime anybody says to me that they can make data warehousing simple, I react, because the truth is that it’s very complex.
The processes you just described are extremely difficult for any company to work with and navigate through. Add to that the whole infrastructure piece of it. The more you move towards “operationalizing” BI, suddenly the more important the infrastructure becomes.
A lot of times we get calls from customers who are trying to deploy data warehousing solutions. They'll be in test and development, the system is supposed to perform, and they've got users out there who are expecting to click on a button and get all of the information back within a matter of seconds, and they can’t figure out how to make it work.
So they call the HP storage folks and they say, "Hey, we’ve got a storage problem. What’s going on here?" And, the storage folks say, "Well, wait a minute, it's not storage. That sounds like the database." So, they call Oracle, and Oracle says, “Well, that’s not us. It’s not the database. It must be a server problem.” So the customer has to go back to the server guy. We have people that will lose weeks of time in deploying their systems, because the entire lifecycle is extremely complex.
What we really do is look at how we can come alongside Oracle in our labs and figure out how to build those systems with Oracle pre-installed, pre-configured, and pre-tested, so that what the customer is getting is ready to go out of the box. It takes the guesswork out of all of this implementation and development that they’ve got to do.
I had one customer who lost a week in test and dev, then went into production and made the same exact little mistake. They forgot to turn on asynchronous I/O on their storage system. It’s just a basic little problem, but it cost them another week before they were up and running in production.
So, we’ve got solutions like HP BladeSystem for Oracle Optimized Warehouse. We have about 50 reference configurations that help take the guesswork out of deploying these.
Gardner: This is really more than just one hand washing the other. This is three hands washing each other. We have the systems integration and specialized software, which is created through products, integration, and technology innovation, and then the opportunity for that third hand of services to come in with methodologies and best practices, for preventing those gotchas.
Santaferraro: Exactly. And then, on the services side, here are people who have walked this path before. They’ve done it before. My recommendation to companies who are out there trying to do BI and data warehousing and are hitting difficulties is, “Why not go find somebody who has done it before?”
You really don’t have to do it alone. There are people out there who have walked this path. They’ve done it. They know the gotchas. They have accelerators. They have ways of making it all come together faster. And all of that translates into more business value. If I don’t have to spend as much time in deployment, as much time in all of the testing and trying to figure out what is wrong, then I can be investing my time and my effort in developing real business innovation and real business value.
Gardner: And, of course, in the field there are many different companies that are at different places on the path toward some of these goals. For those that are deeply into BI and recognize the value of getting this lifecycle, elevating the data, getting that good quality data out, and then being able to work with it, what’s the next step?
I’m hearing some buzzwords nowadays about operational BI and even BI modernization. Tell me a little bit about what these mean. Are these in fact the next chapters in where companies will be taking this capability?
Santaferraro: Yes, these are definitely the next chapters, and you're seeing right now probably about five percent of companies out there -- the ones who are on the leading or bleeding edge -- already doing operational BI and BI modernization.
Operational BI has to do with this idea that I have all of this data in a single place, it’s accessible, and it’s fairly well cleaned. I don’t think anybody has perfectly clean data -- that doesn’t exist -- but once it’s there, what do I do with it?
We're finding that customers want to do two things. One, they want to get that information to everyone across the organization, as well as customers and partners, and they want it to be actionable. So how do I get actionable information in the hands of everyone across my organization who needs it?
The second thing people want to do with operational BI is actually take the analytics that are driving their systems and embed them in the business processes or in the business applications. When a loan comes in to be underwritten, you want to have the right rules that don't put you in a position as a bank where you end up with a bunch of loans that you can't sell in the secondary market, or that are going into default. Everybody is aware of that problem, right?
How do you take the analytics and discovery that you’ve made and put them right in the applications, so that the decision is automatically made by the application, or so that, as somebody is using the business application, they have the information to make the decision right there at their disposal?
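As a purely hypothetical illustration of that embedding idea -- the field names, thresholds, and rules below are invented for the sketch, not drawn from any HP or Oracle product -- a loan-origination application might apply warehouse-derived rules at the moment an application arrives:

```python
# Hypothetical sketch: analytics-derived rules embedded directly in a
# loan-origination application, so the decision happens at the point of
# entry. Thresholds and field names are illustrative only.

def underwriting_decision(application: dict) -> str:
    """Apply model-derived rules to a loan application as it arrives."""
    ltv = application["loan_amount"] / application["property_value"]   # loan-to-value
    dti = application["monthly_debt"] / application["monthly_income"]  # debt-to-income

    # Rules like these would come from offline analysis in the warehouse,
    # then be pushed out into the operational application.
    if application["credit_score"] >= 700 and ltv <= 0.80 and dti <= 0.36:
        return "approve"
    if application["credit_score"] < 620 or ltv > 0.95:
        return "decline"
    return "refer"  # route borderline cases to a human underwriter

decision = underwriting_decision({
    "loan_amount": 200_000, "property_value": 280_000,
    "monthly_debt": 1_500, "monthly_income": 6_000,
    "credit_score": 720,
})
print(decision)  # approve
```

The point of the sketch is only the placement: the analytics are evaluated inside the business application, not in a back-room report someone reads later.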
Gardner: And is that what you call operational BI?
Santaferraro: Yes.
Gardner: Now, this also raises in my mind a question about the capabilities that a service-oriented architecture (SOA) offers -- governance, bringing services like BI as a service into play with applications, but at the right point in time. So it's exercising governance policy, learning from your mistakes, and building on them. How do what you’re describing as operational BI and SOA fit together?
Santaferraro: It’s a great question, because when I hear people talking about SOA, I primarily hear them talking about business services. How do I take these mammoth applications that I’ve built, reduce them into reusable business services, and use them effectively across the organization, instead of replicating them all over? The real opportunity comes when you have these business services in operation and you begin to bring in information services as well. Take customer profitability, for example. That's not really a business service. It’s an information service.
A lot of analysis has to go into the mix for companies to answer the question, "Who are my most profitable customers?" If you can figure that out, and give every customer a rating, then that information service becomes a service within a SOA that you can actually use and distribute in a very useful way all across the organization. You can send it to the call center, send it to the sales force, send it to the Web, and send it to the ATM transactions that are happening. So there's a whole opportunity of information services as a part of SOA that hasn't even begun to be tapped.
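To make the customer-profitability example concrete, such an information service could be a single shared lookup that every channel calls the same way, instead of each application replicating the analysis. This is an illustrative sketch under that assumption; all names, scores, and tiers are invented:

```python
# Illustrative sketch of an information service within a SOA: one shared
# customer-profitability rating that the call center, the Web front end,
# and the ATM switch all call, rather than each embedding its own logic.
# Names and data are invented for the example.

# In practice this table would be refreshed from analytics run against
# the data warehouse.
_PROFITABILITY_SCORES = {
    "cust-001": 92,   # high-margin customer
    "cust-002": 47,
    "cust-003": 15,   # currently unprofitable
}

def customer_profitability(customer_id: str) -> dict:
    """Return a rating any consuming channel can act on."""
    score = _PROFITABILITY_SCORES.get(customer_id, 50)  # default: neutral
    tier = "gold" if score >= 80 else "standard" if score >= 30 else "at-risk"
    return {"customer_id": customer_id, "score": score, "tier": tier}

print(customer_profitability("cust-001")["tier"])  # gold
```

Because every channel consumes the one service, a change in how profitability is computed lands everywhere at once -- which is the reuse argument behind treating it as an information service.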
Gardner: It’s sort of the intelligent implementation of BI as a service?
Santaferraro: Absolutely.
Gardner: How does that differ from BI modernization?
Santaferraro: Modernization is built around this whole concept that folks started doing data warehousing 15 to 20 years ago. It’s a fairly old technology, yet it’s still very useful. It’s still something that companies need to do, but a lot of new technology has come in, along with new kinds of data. Data warehousing had great value. It put all the information in a single place. It made information accessible. You could now do analysis.
Gardner: But it was largely structured data.
Santaferraro: Exactly. Now we have other kinds of data coming. What about email? What about document management systems, and all the documents that are being digitized? What about new types of data like RFID? What about GPS data? There are all these new types of data, and we're discovering now that, while the data warehouse delivers great value for BI, not everything has to go into the data warehouse. In fact, we’ve discovered with a lot of our customers that as soon as the data warehouse gets to a terabyte, about 70 percent of the data in that data warehouse never even gets touched or used.
So companies are spending enormous amounts of money to build these massive data warehouses, and a lot of that is not being used. Modernization is about figuring out what data needs to go into the data warehouse and what needs to be delivered through the enterprise service bus (ESB). Are there certain things where you can just embed analytics out at the application layer and do the analytics out there? Are there other types of data that should be just cataloged at the user level?
Gardner: Metadata, for example?
Santaferraro: Yes, and metadata becomes the rich set of definitions around that content that actually brings it all together for the sake of the user.
Gardner: Regardless of where it resides?
Santaferraro: Exactly, and that becomes active metadata by the way. It’s no longer just this metadata that sits below for the data folks to understand what’s there. It’s active metadata that the users are using to understand the information that they're looking at.
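One way to picture "active" metadata is a catalog that carries the business definition alongside each data asset, regardless of where the data physically lives, so the definition travels to the user with the number. A minimal sketch of that idea, with all names and entries invented:

```python
# Minimal sketch of "active" metadata: business definitions travel with
# the data to the user, instead of sitting in a back-room dictionary for
# the data folks. All names and entries are invented for illustration.

CATALOG = {
    "net_revenue": {
        "definition": "Gross revenue minus returns and discounts",
        "source": "warehouse",          # could equally arrive via an ESB
        "unit": "USD",
    },
    "rfid_dwell_time": {
        "definition": "Seconds a tagged item stays at one read point",
        "source": "operational_store",  # never loaded into the warehouse
        "unit": "seconds",
    },
}

def annotate(measure: str, value) -> str:
    """Attach the business definition to a value shown to a user."""
    meta = CATALOG[measure]
    return f"{value} {meta['unit']} -- {meta['definition']} (from {meta['source']})"

print(annotate("net_revenue", 1_250_000))
```

The catalog deliberately spans sources -- warehouse data and data that stays outside it -- which is the "regardless of where it resides" point in the exchange above.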
Gardner: I suppose that, over time, that’s going to also include events?
Santaferraro: Absolutely -- events, which tie right into the new complex event processing (CEP) systems. One of the opportunities that I’ve not seen tapped by any software company is this whole new world of information delivery.
So, if you’re operationalizing BI, if you’ve got a modernized BI infrastructure with data provisioning in place, and it’s not just the data warehouse -- you’re basically trying to get information out to all these users across the enterprise and embed it in business processes. There needs to be the design of a brand new information-delivery system that can actually handle all of these kinds of data to the desktop, to the application, to the hand-held device, or wherever it needs to go.
Gardner: Without belaboring this point, what sort of technologies are you looking at? Is this syndication, publish-and-subscribe, terminal services? What do you use to get that out there?
Santaferraro: I would say, yes. Because, as I said, I haven’t seen anybody that’s done it yet.
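Gardner's list hints at one plausible shape for such a delivery layer: a publish-and-subscribe dispatcher in which each endpoint -- desktop, application, hand-held -- registers for the kinds of information it wants. This is purely a speculative sketch (as Santaferraro notes, no vendor had built it at the time), and every name in it is invented:

```python
# Speculative sketch of a publish-and-subscribe information-delivery
# layer: producers publish on a topic, and each subscribed endpoint
# (desktop, application, hand-held) receives only the topics it
# registered for. All names are invented for the example.

from collections import defaultdict

class InfoDeliveryBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> delivery callbacks

    def subscribe(self, topic: str, deliver) -> None:
        """Register a delivery callback (e.g. a push to a hand-held)."""
        self._subscribers[topic].append(deliver)

    def publish(self, topic: str, payload: dict) -> int:
        """Fan one piece of intelligence out to every registered endpoint."""
        for deliver in self._subscribers[topic]:
            deliver(payload)
        return len(self._subscribers[topic])

bus = InfoDeliveryBus()
received = []
bus.subscribe("customer-profitability", received.append)  # desktop dashboard
bus.subscribe("customer-profitability", received.append)  # mobile device
delivered = bus.publish("customer-profitability",
                        {"cust": "cust-001", "tier": "gold"})
print(delivered)  # 2
```

The design choice worth noting is that producers never enumerate consumers; new delivery endpoints are added by subscribing, which is what would let one system reach "wherever it needs to go."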
Gardner: Good, a big opportunity there. Okay. We've talked about this modernization of BI. This is happening in the context of other trends, of course, for virtualizing our data centers, and a lot has been done to virtualize storage and data over time.
We're going to be bringing in more kinds of content. We might even be getting content and services off of clouds, other people's public services or perhaps a cooperative private federation among business partners, all of which has to be managed and accurately projected back into the application services and processes that people use. It sounds very interesting, and is a much easier sale to the C-class, the corner office in the organization, because this really helps them in the way they do business.
What can companies do in terms of exploiting these technologies, getting those business outcomes, and, I suppose most importantly, how do they get started? As you say, this is not trivial. It’s complex and needs to be done properly.
Santaferraro: Most companies have already started on BI and data warehousing. What I hear a lot of customers say is that they either are not getting the value out of the investment they are putting into it, or they don’t know whether they are. So I think it really makes sense to pause where you're at and bring in some experts to do an assessment.
We do a lot of work with customers. We look at the vision, the strategy, and the planning behind data warehousing and BI, and because of our depth of experience, we can come alongside our customers, help them figure out what’s working and what’s not, put a value on where to really invest moving forward, and help drive that forward in an intelligent way. Why not do BI with some intelligence behind it?
That’s one thing. The second thing is that, with operational BI on the horizon, we’ve got a lot of folks within our organization who understand the potential of what could be done with BI in a bank. What if you could have customer profitability, customer segmentation services, and offer optimization at every point of sale? So, for the teller, for the ATM service, for the call center -- wherever somebody is interacting with a bank -- all of that information is right there with them.
What we find is that people have been so caught up in the world of reporting, basic analytics, and the online analytical processing that takes place in the back room. We think that it also makes sense to move to this next level. Bring in some folks who understand operational BI, and let’s dream together: if you could actually have these capabilities, what could you do with your company? How could you transform your relationships with your customers and your suppliers?
It's basic vision, strategy, and planning, too. Let’s get together, dream about operational BI, and figure out what your company could become. We actually believe that in the next five to seven years there is going to be a major restructuring of leaders in every single industry. The ones who come out on top are going to be those companies that figure out how to use BI to transform themselves into competitive leaders.
We want to be there with our customers to make that happen for them.
Gardner: And this is not just for them to actually find new markets, but to uncover risks that they wouldn’t have been able to uncover until it was too late. And we’ve seen examples of that -- and perhaps to focus on what the right businesses are to be in and not to be in? So it’s not just how to make things better, it’s also risk mitigation on what to avoid?
Santaferraro: Absolutely.
Gardner: Very good. We’ve been talking about BI and some of the next chapters in BI, particularly in a context of a longstanding partnership between Oracle and HP. We’ve been joined by John Santaferraro, director of marketing for HP’s BI portfolio. Thanks very much, John.
Santaferraro: Thanks a lot, Dana.
Gardner: Our conversation comes to you today through a sponsored HP Live! Podcast from the Oracle OpenWorld conference in San Francisco. Look for other podcasts from this HP Live! event series at hp.com, as well as via the BriefingsDirect Network.
I'd like to thank our producers on today's show, Fred Bals and Kate Whalen. I'm Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time for more in-depth podcasts on enterprise IT topics and solutions. Bye for now.
Transcript of BriefingsDirect podcast recorded at the Oracle OpenWorld Conference in San Francisco. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Thursday, October 02, 2008
Interview: Maria Allen on How HP and EDS Uniquely Combine to Assist Financial Markets Amid Turmoil
Transcript of BriefingsDirect podcast recorded at Oracle OpenWorld conference in San Francisco the week of Sept. 22, 2008.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to a BriefingsDirect podcast recorded at the Oracle OpenWorld conference in San Francisco. We're here the week of Sept. 22, 2008. This HP Live! podcast is sponsored by Hewlett-Packard (HP) and distributed via the BriefingsDirect Network.
Today, we welcome Maria Allen, vice president and leader of the Global Financial Services and Products Group at EDS, an HP company.
We're going to be discussing the financial services sector at a very tumultuous time in its history, look at how HP and EDS together are bringing services to that market, and get a better understanding of how technology and transformation services can help companies in the financial sector at a crucial time. Welcome to the show, Maria.
Maria Allen: Thank you very much. Glad to be here.
Gardner: The last few weeks and, I suppose, the last year and a half have been very eventful for Wall Street, the City of London and other major financial centers around the world.
The last two weeks have demonstrated some unprecedented volatility and, in some respects, a level of uncertainty not seen in 70 or more years. This is a time when financial companies, banks, investment banks, and insurance companies are having to reinvent themselves on the fly, and government intervention is taking place at unprecedented levels.
Tell us what EDS and your financial services group do and set the stage about the history in the financial services sector. What has been put in place that allows some companies to weather the storm and react and be agile in this environment better than others?
Allen: Sure. EDS has been focusing on the financial industry for over 40 years. So, we have had quite a bit of exposure and experience. We have actually gone through many of the transformational activities within the industry.
In fact, back in the late 1980s, when the Resolution Trust Corporation (RTC) took over a lot of the savings and loan institutions, EDS was actually a partner of the RTC and helped integrate and clean a lot of the portfolios that the RTC had taken over. What we try to do is bring our experience into the financial institutions to enable them to integrate, to better manage their business, to have better control over their costs, and to prepare them for better times.
Many of the institutions back in the 1980s, as they are today, were very focused on reducing their cost, so that they would have additional funds to invest to get them to a better financial position. We have used our experience in the technology sector to reduce cost, integrate their systems, use our outsourcing services to better manage the business, and, again, prepare them for better times.
Gardner: Now, of course, these organizations aren't just dealing with today's worries. There are longer-term trends afoot. They're dealing with such issues as Sarbanes-Oxley, Basel II, and the payment card industry (PCI) data standards. Many of these involve important regulatory, security, and risk management issues, and we are probably going to see some more regulatory issues coming down the pike.
How do you work with these companies to put them in a position of dealing with tactical, short-term, and crisis-level issues, and also put in place what they need to adhere to for these longer-term issues around risk and compliance?
Allen: Because of our experience and the fact that we have worked with many financial institutions and banks around the world, we are constantly making sure that we have the right information and the right insight in terms of the regulatory issues that the banks are experiencing. We have worked with many of the banks to ensure that not only their datacenters and their infrastructure, but also their services, have the right key risk indicators that enable them to be compliant with the various regulations.
You mentioned Sarbanes-Oxley, Basel II, all the privacy acts, and MasterCard and Visa requirements to be PCI compliant. EDS is investing quite a bit to ensure that we are not only complying ourselves, but also that our clients meet the compliance requirements of the issuing companies.
There has been a lot of focus in the privacy and security areas. Data management is one of the areas that we have been focused on, but our experience in running the systems for many of the banks throughout the last 40 years has better positioned us to address the needs and the requirements of the regulators and the banks together.
Gardner: Of course, the role of technology has never been more important. Many business sectors look to the financial services arena for some guidance. In many cases, leading adopters are found in the financial sector. Also, they tend to keep their technology. So, there are legacy, integration, and modernization requirements, perhaps larger in number than in any other sector.
Tell us a little bit about how technology trends -- service-oriented architecture (SOA), business intelligence (BI), complex event processing (CEP), risk management, and process management -- come together to help organizations deal with an unprecedented need for visibility and predictability during rough times.
Allen: Using our experience in financial services, we recognize the importance of having data transparency. That is absolutely the key to addressing the requirements that banks have around privacy and ensuring that they have the right key risk indicators to respond to the regulatory requirements.
We've done a lot of work in the area of data management, the integration aspects, and legacy modernization. We have quite a bit of focus in that area to help the financial institutions have better transparency across their silos.
There is still a lot of work to be done. So there is a big opportunity for EDS and HP together to really enable the transformation that the banks have really been focused on. It's difficult for them to stay on track, when they have all these other issues around regulatory compliance and the market turmoil in subprime mortgages and the credit pressures.
There is an opportunity for EDS and HP together to capitalize on the activities in the marketplace, and position ourselves as key players with the financial institutions and the government agencies. That's one of the key areas that we need to be focused on -- looking at government agencies -- because they are going to need a lot of help. The FDIC, with the failing banks it is taking over, needs data management, and there is a huge opportunity for us to work together in that space.
Gardner: Now, EDS is in many of these government organizations, as well as these financial institutions, both public and privately held ones. This does put you in a unique position, when it comes to government taking over assets, but then having the managers within these organizations manage those assets for the government. Tell us, at this early stage, how you understand this could work and the unique role that EDS and HP would play in it.
Allen: Well, that's going to be interesting to see, because there is actually a lot of discussion around that -- for example, Freddie Mac and Fannie Mae, and what the federal government is going to do to integrate these two agencies. One of those agencies is a client of EDS. So, we're very well positioned to help the government not only do the integration, but also clean the portfolio to better position the agency to transform itself, because we see transformation going on at the same time.
Five or maybe ten years from now, the financial markets may be very different. It may look very different in the U.S. One of the key areas is going to be the real estate lending, residential lending business. We can capitalize from the relationships that we have, not only with the agencies, but also the banks and other financial institutions that are looking to enhance and position themselves to be a survivor during this transformation in the market.
Gardner: For the edification of our audience, HP purchased EDS recently. The completion went through, and now EDS is a wholly owned company within the HP family of companies. We just wanted to point that out. Given the opportunity in the financial sector, and given that HP has had a long services heritage as well, tell us how HP and EDS together can offer something different than they could have individually just a few months ago?
Allen: It's a big opportunity for EDS to capitalize on the relationships that HP has across the global financial industry marketplace. HP brings a set of customers that is potentially much broader than what EDS has. There are a few very large financial institutions where we both have relationships, but there is a lot of opportunity for us to work together and enable that transformation that's going on in the marketplace for financial services.
I can't wait to get started working with the HP Financial Industry Group. We have some activities scheduled in the next couple of weeks to assess and determine how we address some of the activities and opportunities that we see in the market. We are already working together on a very large opportunity in the statement print area. We see a lot more of those types of large opportunities coming to both of us.
Gardner: In addition to how the two companies come together in terms of their services, offerings, and value, there are also some adjustments in terms of channel and sales. Is there a philosophy of how to approach this market that may differ from some of your competitors, perhaps becoming more of a partner with many of the global systems integrators? How does that shake out in terms of how you actually go to market now that you are together?
Allen: We have an opportunity to definitely integrate some of our business in our channel strategy. However, there is also an opportunity to keep a very open mind as to how we go to market, because there are some key agility alliance partners we have who have enabled the growth in our business.
Oracle is one of them. We work very closely with Oracle in the financial industry, because of their breadth of solutions in our industry. HP also has a very strong relationship with Oracle. So, there is a lot of opportunity for us to explore different channel partnerships and different ways to address certain markets.
Gardner: I wonder if you have any sense -- and again, it's a little early -- of how IT spending in the financial services sector will pan out. There are, on one hand, the services and opportunities that you have discussed, but with consolidation, perhaps there is also some slackening in growth in some other areas. Do you have a sense at this point, given the turmoil in the financial services market, of how market growth might be impacted across a variety of the major services and product segments?
Allen: IT spending is an interesting subject that financial institutions always look at. The average spend on IT in the financial industry is about seven percent of revenue.
I have a very different approach to looking at what a financial institution spends across the board. I look at non-interest expense, because I think it's important to look at the total cost of doing business. If you have a financial institution that's really focused on improving operations and services, integrating the silos that it has today, and bringing some automation through SOA, then that means its overall cost is going to be impacted positively.
Their overall non-interest expense hopefully is going to shrink, as they work with companies like EDS and HP together. I see IT cost growing, but for a total benefit, if you will. So, there is an opportunity for us to work together and see how we can impact the total spend within a financial institution.
Gardner: The pie grows, but perhaps as a percentage of revenues for individual companies it decreases.
Allen: Exactly.
Gardner: I wonder if you can share with us any case studies. If you can mention a company, that's great. If not, perhaps you can describe the type of company and project and relationship and give some examples of how EDS Financial Services has created this benefit of employing technology in such a way as that you can get more bang for the buck.
Allen: I can't mention the name of the bank, but it is a European bank, a very large mortgage lender in the U.K. EDS started working with this bank about five years ago. They had the desire to reduce their total cost of mortgage processing. So, EDS took over their operations, both their systems and their back office servicing operations to bring automation and enhance the way they approach the market.
It was very, very successful. We integrated their systems, took over their back office, reduced their total cost of mortgage processing, and were very successful within three years of taking that business over. So, we have had a lot of experience. The back-office processing does have a huge impact in a financial institution's total cost of doing business. Our experience can be applied to many different institutions and many different business operations across the financial institution.
Gardner: Well, great. We've been talking about the financial services sector, and we've been trying to understand in these tumultuous times the benefit of increasing value and lowering cost, but also increasing the need for transformation, integration, and agility, particularly as we are seeing the reformation of companies under different types of ownership, unprecedented types of ownership. And, we've been talking about how EDS and HP have come together as companies, and are going to be going out to this market with a variety of services and support.
We've been discussing this with Maria Allen. She is a vice president at EDS and leads the Global Financial Services and Products Group. We certainly appreciate your time, and your interesting comments on these subjects.
Allen: Thank you, very much.
Gardner: Our conversation today comes to you through a sponsored Hewlett-Packard Live! podcast from the Oracle Open World Conference in San Francisco. Look for other podcast from this HP Live! event series at www.hp.com, as well as via the BriefingsDirect Network.
I like to thank our producers on today's show, Fred Bals and Kate Whalen. I'm Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time for more in-depth podcasts on enterprise IT topics. Bye for now.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.
Transcript of BriefingsDirect podcast recorded at the Oracle OpenWorld Conference in San Francisco. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to a BriefingsDirect podcast recorded at the Oracle OpenWorld conference in San Francisco. We're here the week of Sept. 22, 2008. This HP Live! podcast is sponsored by Hewlett-Packard (HP) and distributed via the BriefingsDirect Network.
Today, we welcome Maria Allen, vice president and leader of the Global Financial Services and Products Group at EDS, an HP company.
We're going to be discussing the financial services sector at a very tumultuous time in its history, look at how HP and EDS together are bringing services to that market, and get a better understanding of how technology and transformation services can help companies in the financial sector at a crucial time. Welcome to the show, Maria.
Maria Allen: Thank you very much. Glad to be here.
Gardner: The last few weeks and, I suppose, the last year and a half have been very eventful for Wall Street, the City of London and other major financial centers around the world.
The last two weeks have demonstrated some unprecedented volatility and, in some respects, a level of uncertainty not seen in 70 or more years. This is a time when financial companies, banks, investment banks, and insurance companies are finding themselves reorganized on the fly, and government intervention is taking place at unprecedented levels.
Tell us what EDS and your financial services group do and set the stage about the history in the financial services sector. What has been put in place that allows some companies to weather the storm and react and be agile in this environment better than others?
Allen: Sure. EDS has been focusing on the financial industry for over 40 years. So, we have had quite a bit of exposure and experience. We have actually gone through many of the transformational activities within the industry.
In fact, back in the late 1980s when the Resolution Trust Corporation (RTC) took over a lot of the savings and loans institutions, EDS was actually a partner of the RTC and helped integrate and clean a lot of the portfolios that the RTC had taken over. What we try to do is bring our experience into the financial institutions to enable them to integrate to better manage their business, to have a better control over their cost, and prepare them for better times.
Many of the institutions back in the 1980s, as they are today, were very focused on reducing their cost, so that they would have additional funds to invest to get them to a better financial position. We have used our experience in the technology sector to reduce cost, integrate their systems, use our outsourcing services to better manage the business, and, again, prepare them for better times.
Gardner: Now, of course, these organizations aren't just dealing with today's worries. There are longer-term trends afoot. They're dealing with such issues as Sarbanes-Oxley, Basel II, and the payment card industry (PCI) data standards. Many of these involve important regulatory, security, and risk management issues, and we are probably going to see some more regulatory issues coming down the pike.
How do you work with these companies to put them in a position of dealing with tactical, short-term, and crisis-level issues, and also put in place what they need to adhere to for these longer-term issues around risk and compliance?
Allen: Because of our experience, and the fact that we have worked with many financial institutions and banks around the world, we are constantly making sure that we have the right information and the right insight in terms of the regulatory issues that the banks are experiencing. We have worked with many of the banks to ensure that not only their datacenters and their infrastructure, but also their services, have the right key risk indicators that enable them to be compliant with the various regulations.
You mentioned Sarbanes-Oxley, Basel II, all the privacy acts, and MasterCard and Visa requirements to be PCI compliant. EDS is investing quite a bit to ensure that we are not only complying ourselves, but also that our clients meet the compliance requirements of the issuing companies.
There has been a lot of focus in the privacy and security areas. Data management is one of the areas that we have been focused on, but our experience in running the systems for many of the banks throughout the last 40 years has better positioned us to address the needs and the requirements of the regulators and the banks together.
Gardner: Of course, the role of technology has never been more important. Many business sectors look to the financial services arena for some guidance. In many cases, leading adopters are found in the financial sector. Also, they tend to keep their technology. So, there are legacy, integration, and modernization requirements, perhaps larger in number than in any other sector.
Tell us a little bit about how technology trends -- service-oriented architecture (SOA), business intelligence (BI), complex event processing (CEP), risk management, and process management -- come together to help organizations deal with an unprecedented need for visibility and predictability during rough times.
Allen: Using our experience in financial services, we recognize the importance of having data transparency. That is absolutely the key to addressing the requirements that banks have around privacy and ensuring that they have the right key risk indicators to respond to the regulatory requirements.
We've done a lot of work in the area of data management, the integration aspects, and legacy modernization. We have quite a bit of focus in that area to help the financial institutions have better transparency across their silos.
There is still a lot of work to be done. So there is a big opportunity for EDS and HP together to really enable the transformation that the banks have really been focused on. It's difficult for them to stay on track, when they have all these other issues around regulatory compliance and the market turmoil in subprime mortgages and the credit pressures.
There is an opportunity for EDS and HP together to capitalize on the activities in the marketplace, and position ourselves as key players with the financial institutions and the government agencies. That's one of the key areas that we need to be focused on, looking at government agencies, because they are going to need a lot of help. The FDIC, with the banks that are going under that they are taking over, needs data management, and there is a huge opportunity for us to work together around that space.
Gardner: Now, EDS is in many of these government organizations, as well as these financial institutions, both public and privately held ones. This does put you in a unique position, when it comes to government taking over assets, but then having the managers within these organizations manage those assets for the government. Tell us, at this early stage, how you understand this could work, and the unique role that EDS and HP would play in it.
Allen: Well, that's going to be interesting to see, because there is actually a lot of discussion around that -- for example, Freddie Mac and Fannie Mae, and what the federal government is going to do to integrate those two agencies. One of those agencies is a client of EDS. So, we're very well positioned to help the government not only do the integration, but also clean the portfolio to better position the agency to transform itself, because we see transformation going on at the same time.
Five or maybe ten years from now, the financial markets in the U.S. may look very different. One of the key areas is going to be real estate lending, the residential lending business. We can capitalize on the relationships that we have, not only with the agencies, but also with the banks and other financial institutions that are looking to enhance and position themselves to be survivors during this transformation in the market.
Gardner: For the edification of our audience, HP purchased EDS recently. The completion went through, and now EDS is a wholly owned company within the HP family of companies. We just wanted to point that out. Given the opportunity in the financial sector, and given that HP has had a long services heritage as well, tell us how HP and EDS together can offer something different than they could have individually just a few months ago?
Allen: It's a big opportunity for EDS to capitalize on the relationships that HP has across the global financial industry marketplace. HP brings a set of customers that is potentially much broader than what EDS has. There are a few very large financial institutions where we both have relationships, but there is a lot of opportunity for us to work together and enable that transformation that's going on in the marketplace for financial services.
I can't wait to get started working with the HP Financial Industry Group. We have some activities scheduled in the next couple of weeks to assess and determine how we address some of the activities and opportunities that we see in the market. We are already working together on a very large opportunity in the statement-print area. We see a lot more of those types of large opportunities coming to both of us.
Gardner: In addition to how the two companies come together in terms of their services, offerings, and value, there are also some adjustments in terms of channel and sales. Is there a philosophy of how to approach this market that may differ from some of your competitors, perhaps becoming more of a partner with many of the global systems integrators? How does that shake out in terms of how you actually go to market now that you are together?
Allen: We have an opportunity to definitely integrate some of our business in our channel strategy. However, there is also an opportunity to keep a very open mind as to how we go about the market, because there are some key agility alliance partners that we have who have enabled the growth in our business.
Oracle is one of them. We work very closely with Oracle in the financial industry, because of their breadth of solutions in our industry. HP also has a very strong relationship with Oracle. So, there is a lot of opportunity for us to explore different channel partnerships and different ways to address certain markets.
Gardner: I wonder if you have any sense -- and again, it's a little early -- of how IT spending in the financial services sector will pan out. There are, on one hand, the services and opportunities that you have discussed, but with consolidation, perhaps there is also some slackening in growth in some other areas. Do you have a sense at this point, given the turmoil in the financial services market, of how market growth might be impacted across a variety of the major services and product segments?
Allen: IT spending is an interesting subject that financial institutions always look at. The average spend on IT in the financial industry is about seven percent of revenue.
I have a very different approach to looking at what a financial institution spends across the board. I look at non-interest expense, because I think it's important to look at the total cost of doing business. If you have a financial institution that's really focused on improving its operations and services, integrating the silos it has today, and bringing some automation through SOA, then its overall cost is going to be impacted positively.
Their overall non-interest expense hopefully is going to shrink, as they work with companies like EDS and HP together. I see IT cost growing, but for a total benefit, if you will. So, there is an opportunity for us to work together and see how we can impact the total spend within a financial institution.
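Allen's two yardsticks can be shown with a quick worked example: IT spend measured against revenue versus against total non-interest expense. This is a hypothetical illustration only; all figures below are invented, apart from the seven-percent-of-revenue industry average cited above.

```python
# Hypothetical illustration of the two ratios discussed above:
# IT spend as a share of revenue vs. as a share of total
# non-interest expense. All dollar figures are invented.

def spend_ratios(revenue, non_interest_expense, it_spend):
    """Return IT spend as a fraction of revenue and of non-interest expense."""
    return it_spend / revenue, it_spend / non_interest_expense

# An invented bank: $10B revenue, $6B non-interest expense, $700M IT spend
# (roughly the seven-percent-of-revenue average mentioned above).
vs_revenue, vs_nie = spend_ratios(10_000_000_000, 6_000_000_000, 700_000_000)
print(f"IT vs revenue: {vs_revenue:.1%}")               # 7.0%
print(f"IT vs non-interest expense: {vs_nie:.1%}")      # 11.7%
```

The same IT dollar looks very different against the two denominators, which is why a shrinking non-interest expense can justify IT spend that is growing in absolute terms.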
Gardner: The pie grows, but perhaps as a percentage of revenues for individual companies it decreases.
Allen: Exactly.
Gardner: I wonder if you can share with us any case studies. If you can mention a company, that's great. If not, perhaps you can describe the type of company, project, and relationship, and give some examples of how EDS Financial Services has created this benefit of employing technology in such a way that you get more bang for the buck.
Allen: I can't mention the name of the bank, but it is a European bank, a very large mortgage lender in the U.K. EDS started working with this bank about five years ago. They had the desire to reduce their total cost of mortgage processing. So, EDS took over their operations, both their systems and their back office servicing operations to bring automation and enhance the way they approach the market.
It was very, very successful. We integrated their systems, took over their back office, reduced their total cost of mortgage processing, and were very successful within three years of taking that business over. So, we have had a lot of experience. The back-office processing does have a huge impact in a financial institution's total cost of doing business. Our experience can be applied to many different institutions and many different business operations across the financial institution.
Gardner: Well, great. We've been talking about the financial services sector, and we've been trying to understand in these tumultuous times the benefit of increasing value and lowering cost, but also increasing the need for transformation, integration, and agility, particularly as we are seeing the reformation of companies under different types of ownership, unprecedented types of ownership. And, we've been talking about how EDS and HP have come together as companies, and are going to be going out to this market with a variety of services and support.
We've been discussing this with Maria Allen. She is a vice president at EDS and leads the Global Financial Services and Products Group. We certainly appreciate your time, and your interesting comments on these subjects.
Allen: Thank you, very much.
Gardner: Our conversation today comes to you through a sponsored Hewlett-Packard Live! podcast from the Oracle OpenWorld conference in San Francisco. Look for other podcasts from this HP Live! event series at www.hp.com, as well as via the BriefingsDirect Network.
I'd like to thank our producers on today's show, Fred Bals and Kate Whalen. I'm Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time for more in-depth podcasts on enterprise IT topics. Bye for now.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.
Transcript of BriefingsDirect podcast recorded at the Oracle OpenWorld Conference in San Francisco. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Tuesday, September 30, 2008
Improved Insights and Analysis From Systems Logs Reduce Complexity Risks From Virtualization
Transcript of BriefingsDirect podcast on the infrastructure management and security challenges of virtualization.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about virtualization, and how to better improve management of virtualization, to gain better security using virtualization techniques, and also to find methods for compliance and regulation -- but without the pitfalls of complexity and mismanagement.
We're going to be talking about virtualization best practices with several folks who are dealing with this at several different levels. We're going to be hearing from VMware, Unisys, and LogLogic.
Let me introduce our panel today. First, we're joined by Charu Chaubal, senior architect for technical marketing, at VMware. Welcome, Charu.
Charu Chaubal: Thank you.
Gardner: We're also joined by Chris Hoff, chief security architect at Unisys. Hi, Chris.
Chris Hoff: Hi, how are you?
Gardner: Great. Also, Dr. Anton Chuvakin, chief logging evangelist and a security expert at LogLogic. Welcome to the show.
Dr. Anton Chuvakin: Hello. Thank you.
Gardner: Virtualization has certainly taken off, and this is nothing new to VMware. Organizations like Unisys are now doing quite a bit to help companies utilize, expand, and enjoy the benefits of virtualization. But virtualization needs to be done the correct way, avoiding the pitfalls. If you do it too tactically, without allowing it to be part of an IT lifecycle and without management, then the fruits and benefits of virtualization can be largely lost.
Before we get into what virtualization can do, what to avoid, and how to better approach it, I'd like to just take a moment and try to determine why virtualization is really hot and taking off in the market now.
Let's start with Chris Hoff at Unisys. Some of these technologies have been around for many years. What is it about this point in time that is really making virtualization so hot?
Hoff: It's the confluence of quite a few things, and we see this sort of event happen in information technology (IT) quite often. You have the practically perfect storm of economics, technology, culture, and business coming together at one really interesting point in time.
The first thing that comes to mind is when people think about the benefits. The reasons people are virtualizing are cost, cost savings and then cost avoidance, which is usually seconded by agility and flexibility. It’s also about being able to, as an IT organization, service your constituent customers in a manner that is more in line with the way business functions, which is, in many cases, quite a fast pace -- with the need to be flexible.
These things are contributing a lot to the uptake, not to mention the advent of a lot of new technology in both hardware and software, which is starting to enable some of this to be more realistic in a business environment.
Gardner: Now over to VMware. Charu, tell us how deep and wide virtualization has become. It seems like people are using it in more and more ways, and in more and more places.
Chaubal: That's right. When the x86 virtualization first started out, maybe 10 years ago in a big way, it was largely being used in test and development types of environments. Over the last five years, it's definitely started to enter the production arena as well. We see more and more customers running even mission-critical applications on virtualization technologies.
Furthermore, we also see it across the board in terms of customer size, where everyone from the smallest customers to the very largest enterprises is expanding further and further with their virtual environments.
Gardner: Let's go to LogLogic. Tell me, Anton, what sort of security and preventative measures are you helping your customers with, in terms of gaining visibility and analytics about what's going on among these many moving parts? Many of these deployments are now in an automated mode, more so than before they were virtualized. What are some of the issues that you are helping people deal with?
Chuvakin: You were exactly right about the visibility into the environments. As people deploy different types of IT infrastructure, first physical and now virtual, there is always a challenge of figuring out what happens with those machines and at those machines -- who is trying to connect to them, or even attack them -- and doing all of this at the same time, around the clock.
Adding virtualization to the technology that people use in such a massive way as it's occurring now brings up the challenges of how do we know what happens in those environments. Is there anybody trying to abuse them, just use them, or use them inappropriately? Is there a lack of auditability and control in those environments? Logs are definitely one of the ways, or I would say a primary way, of gaining that visibility for most IT compliance, and virtualization is no exception.
As a result, as people deploy VMware and applications on top of virtual platforms, the challenge is knowing what actually happens on those platforms, what happens in those virtual machines (VMs), and what happens with the applications. Logging and LogLogic play a very critical role in not only collecting those bits and pieces, but also creating a big picture or a view of that activity across the organization.
Virtualization definitely solves some of the problems, but at the same time it brings in new things, which people really aren't used to dealing with. For example, it used to be that if you monitored a server, you knew where the server was, you knew how to monitor it, and you knew what applications ran there.
In virtual environments, that is still true, but at the same time there is another layer: the server can go somewhere else, and you have to monitor where it was moved and where it is now -- basically, perform monitoring as servers come up and down, disappear, get moved, and that type of thing.
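The point about servers that move or disappear can be sketched as a small log-driven inventory: replay hypervisor lifecycle events to maintain a live view of where each VM currently runs. This is an illustrative sketch only, not LogLogic's or VMware's actual logic; the event format and names are invented.

```python
# Illustrative sketch of log-driven VM tracking: replay migration and
# power events from logs to answer "where is this server right now?"
# The event shape and all names are invented for illustration.

from dataclasses import dataclass

@dataclass
class VmEvent:
    timestamp: str
    vm: str
    action: str   # "power_on", "power_off", or "migrate"
    host: str     # destination host for power_on / migrate

def apply_events(events):
    """Replay events in order; return {vm_name: current_host} for running VMs."""
    location = {}
    for e in events:
        if e.action in ("power_on", "migrate"):
            location[e.vm] = e.host
        elif e.action == "power_off":
            location.pop(e.vm, None)   # VM is gone until it powers on again
    return location

events = [
    VmEvent("2008-09-30T09:00", "db01",  "power_on",  "esx-a"),
    VmEvent("2008-09-30T11:30", "db01",  "migrate",   "esx-b"),
    VmEvent("2008-09-30T12:00", "web01", "power_on",  "esx-a"),
    VmEvent("2008-09-30T13:00", "web01", "power_off", ""),
]
print(apply_events(events))  # {'db01': 'esx-b'}
```

The monitoring system then follows the VM rather than a fixed physical address, which is exactly the extra layer described above.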
Gardner: Now, Chris at Unisys, when you're dealing with customers, based on what we've heard about this expansion of virtualization, you're dealing with it on an applications level, and also on the infrastructure and server level.
What’s more, some folks are now getting into desktop virtualization infrastructure and delivering whole desktop interfaces out to end-user devices. This impacts not just a server. We're talking about network devices and storage devices. This is a bit more than a tactical issue. It really starts getting strategic pretty quickly.
Hoff: That's absolutely correct. If you really look at virtualization as an enabling technology or platform, and you look out over the next three years of large companies' strategic plans, you'll notice that there is a large trend toward what you might call "real-time infrastructure."
The notion here is about how you take this enabling technology and the benefits of virtualization, and leverage them to provide automation and re-purposing. You have to deal with elements and issues that relate to charge-back for assets, as IT becomes more of a utility service.
If we look further out from there, we look at the governance issues of what it means to not really focus on hardware anymore, or even applications, but on services and service levels. It gets a lot more strategic as this plays out all along the continuum.
While we focus virtualization on the notion of infrastructure and technology, what's really starting to happen now -- and what's important with the customers that we deal with -- is being able to unite both business process and business strategy, along with the infrastructure and the architecture that support it.
So we're a little excited and frothed up as it relates to all the benefits of virtualization today, and the bigger picture is even more exciting and interesting. That's going to fundamentally continue to cause us to change what we do and how we do it, as we move forward. Visibility is very important, but understanding the organizational and operational impacts that real-time infrastructure and virtualization bring is really going to be an interesting challenge for folks to get their hands around.
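The charge-back idea mentioned above -- billing IT as a utility service -- can be reduced to a very small metering sketch. This is a hypothetical illustration; the rate, department names, and usage records are all invented.

```python
# Hypothetical sketch of utility-style charge-back: meter each department's
# VM usage and turn it into a bill. Rate and records are invented.

from collections import defaultdict

RATE_CENTS_PER_VM_HOUR = 12  # invented rate: 12 cents per VM-hour

def chargeback(usage_records):
    """Sum metered VM-hours per department into a charge, in cents."""
    totals = defaultdict(int)
    for dept, vm_hours in usage_records:
        totals[dept] += vm_hours * RATE_CENTS_PER_VM_HOUR
    return dict(totals)

usage = [("trading", 720), ("trading", 240), ("hr", 100)]
print(chargeback(usage))  # {'trading': 11520, 'hr': 1200}
```

Real implementations meter from hypervisor accounting data rather than a hand-built list, but the shape of the calculation is the same.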
Gardner: Now, Charu at VMware, you obviously are building out what you consider the premier platform and approach to virtualization technically. You've heard, obviously, the opportunity for professional services and methodologies for approaching this, and you have third parties like LogLogic that are trying to provide better visibility across many different systems and devices.
How are you using this information in terms of what you bring to the management table for folks who are moving from, say, tactical to more strategic use of virtualization?
Chaubal: A lot of customers are expanding their virtualization so much now, to the point where they're hitting some interesting challenges that they maybe wouldn't have hit before. One great example is around compliance, such as Payment Card Industry Data Security Standards (PCI) compliance. There are a lot of questions right now around virtualizing those systems that process credit card holder data.
They're asking, "If I do this, am I going to be compliant with PCI? Is this something that's a realistic possibility? If it is, how do I go about demonstrating this to an auditor?"
This is where partners like LogLogic come into play, because they have the tools that can help achieve this. We believe that VMware provides a compliance-ready type of platform, so it is something you can achieve compliance with. But, in order to demonstrate and maintain that compliance, it's useful to have these tools from partners that can help you do that.
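One concrete piece of demonstrating compliance to an auditor is showing that every in-scope system is actually shipping logs. The minimal coverage check below is a hypothetical sketch, not LogLogic's product logic; the host names, data shapes, and 24-hour window are assumptions.

```python
# Minimal sketch of a log-coverage check for an audit: every in-scope
# host must have produced at least one log event within the window.
# Host names and data shapes are invented; a real check would query
# the log-management platform instead of a hand-built dict.

from datetime import datetime, timedelta

def coverage_gaps(in_scope_hosts, last_event_times, now,
                  max_silence=timedelta(hours=24)):
    """Return hosts silent for longer than max_silence (or never seen)."""
    gaps = []
    for host in in_scope_hosts:
        last = last_event_times.get(host)
        if last is None or now - last > max_silence:
            gaps.append(host)
    return sorted(gaps)

now = datetime(2008, 9, 30, 12, 0)
last_seen = {
    "cardholder-db": datetime(2008, 9, 30, 11, 0),  # logged an hour ago
    "payment-app":   datetime(2008, 9, 28, 9, 0),   # silent for two days
}
print(coverage_gaps(["cardholder-db", "payment-app", "hsm-gw"],
                    last_seen, now))
# ['hsm-gw', 'payment-app']
```

An empty result is the kind of evidence an auditor can act on; a non-empty one tells you exactly which systems to fix before the assessment.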
Gardner: Now, Anton at LogLogic, you're able to examine a number of different systems, gather information, correlate that information, do analytics, and provide a picture of what should be happening. Or, when something is not happening, you can look for the reasons why and look for aberrant or unusual behavior. So let's address security a little bit.
What are some of the challenges in terms of security when you move from a physical environment for compute power and resources to a virtualized environment? Then second, what about the mixture? It is obviously going to be both physical and virtualized instances of infrastructure and applications. Tell us about the security implications.
Chuvakin: I'll follow the same logic I used for our recent webcast about virtualization security. In that webcast, I presented a full view of things that are the same and things that are different in virtualized environments. I'll use the same structure, because some people who get too frothy about virtualization, as Chris put it, just stick to "virtualization changes everything." That is sometimes used as an excuse not to do things that you should continue doing in a virtualized environment.
Let's start with the things that are the same. When you migrate from a physical to a virtual infrastructure, you certainly still have servers, applications running on those servers, and people managing those servers. That leaves you with the need to keep the same auditing and the same security technologies that you use. You shouldn't stop. You shouldn't throw away your firewalls. You shouldn't throw away your log analysis tool, because you still have servers and applications.
They might be easier to monitor in virtual environments. It might sometimes be harder, but you shouldn't change things that are working for you in the physical environment, because virtualization does change a few things. At the same time, the fact that you have applications, servers, and they serve you for business purposes, shouldn't stop you from doing useful things you're doing now.
Now, an additional layer on top of what you already have adds the new things that come with virtualization. The fact that a server might be there one day, but be gone tomorrow -- or not be there one day, and then be built up, used for a while, and removed -- definitely brings new challenges to security monitoring and security auditing, in figuring out who did what where.
The definition of "who" didn't change. It's still a user, but "what" and "where" definitely did change. If something was done on a certain server, in a virtual environment it might not be a server -- it might be a virtual image, which adds additional complexities.
There are also new things that just don't have any counterpart in the physical environment -- for example, a rogue VM, a VM that is built by somebody who is not authorized to run VMs. It might be an end user who actually has his own little mini infrastructure. It brings up all sorts of forensic challenges that you now have to solve. You don't just investigate a machine. You investigate a machine with a virtual platform, with another server on top, or another desktop on top.
This is my view of the things that are the same, which you should continue doing, and the things that are new, where you should start learning how to audit and analyze activity in the virtual environments, as well as how to do forensics, if what you have is a machine with a potential rogue VM.
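The rogue-VM example above can be sketched as a simple reconciliation: compare the VMs actually observed in hypervisor or host logs against an authorized inventory, and flag anything unexpected. This is an illustrative sketch with invented names, not a real detection product.

```python
# Sketch of rogue-VM detection by reconciling VMs observed in logs
# against an authorized inventory. All names are invented.

def find_rogues(authorized, observed):
    """Return VMs seen in logs but absent from the authorized inventory."""
    return sorted(set(observed) - set(authorized))

authorized = {"web01", "db01", "build01"}
observed = ["web01", "db01", "build01", "jsmith-desktop-vm"]
print(find_rogues(authorized, observed))  # ['jsmith-desktop-vm']
```

The hard part in practice is not the set difference but keeping the authorized inventory current, which is itself the auditability gap Anton describes.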
Gardner: How about you, Chris at Unisys, how do you view implications for security and risk mitigation when it comes to moving increasingly into virtualized environments?
Hoff: I have to take a pretty pragmatic approach. The reality is that there are three conversations and three separate questions that need to be addressed, when you're talking about security in virtualized environments.
Unfortunately, what usually happens is that all three of them are combined into one giant question, which tends to lead to more confusion. So I like to separate the virtualization and security questions into three parts.
One of them is securing virtualization, and understanding what the impacts are on your architecture, your infrastructure, and your business process and models, when you introduce this new virtualization layer. That's really about securing the underlying virtualization platforms and understanding what happens and what changes when you introduce that, assuming that you have a decent understanding of what that means, and how that will ultimately flow down operationally.
The second point or question to address is one of virtualizing security, which is actually the operational element of, "What does it mean, and how do I go about taking what I might do in the physical world, and replicate that and/or even improve it in the virtual world?"
That's an interesting question, assuming that you have a good understanding of architecture and things that matter most to you, and how you might protect them, or how you might not be doing that. You may find several gaps today in your ability to actually do what you do in the physical world.
The third element is security through virtualization, which is okay, assuming that I have a good architectural blueprint and that I understand the impacts, the models, who and what changes operationally, how I have to go about securing things, and what benefits I get out of virtualization.
How do I actually improve my security posture by using these platforms and this technology? If you look at that, if you look at it in that way, you really are able to start dealing with the issues associated with each category. You could probably guess that if you mixed all three of them up, you could go down one path, and very easily be distracted by another.
When we break out the conversations with customers like that, it always comes back to a very basic premise that we seem to have forgotten in our industry. Despite all the technology, despite all the tools, and all of the things that go blinky-blink at night, the reality is that this comes down to being able to appropriately manage risk. That starts with understanding the things that matter to you most and using risk assessment frameworks and processes.
In a gross analogy, when you go to a grocery store and you take time to pack your frozen goods in one bag, and your canned goods and your soft goods in other bags, you use this compartmentalization, understanding what the impact is of all of the wonderful mobility, balanced with compliance and security needs.
If you got home, and you've got canned goods in with your fruit, the reality is that you've not done a good job of compartmentalizing and understanding what the impact of one good might have on the other.
The same thing applies in the virtual world. If you don't take the time to go back to the basics and understand the impact of the infrastructure and the changes, you're going to be in a world of hurt later, even if you get the cost benefits and all the wonderful agility and mobility.
We really approach it pragmatically in a rational manner, such that people understand both the pluses, the pros and the cons of virtualization in their environments.
Gardner: We've determined that virtualization is quite hot. It's ramping up quickly. A number of studies have shown a 50-70 percent increase in the use of virtualization in the last few years. Projections continue for very fast-paced growth.
We also see a number of organizations using multiple vendors, when it comes to virtualization. We've also discussed how security and complexity apply to this, and that you need a comprehensive or contextual view of what's going on with your systems -- particularly if you have a mixture of physical and virtual.
Let's look at some examples of how this has been mitigated, how the risk has actually been decreased, and how the fruits, if you will, of virtualization are enjoyed without the pitfalls.
Let's first go to Charu at VMware. Can you offer some examples of how people have used virtualization, done it the right way, avoided some of these pitfalls, and gained the visibility and analytics that helped mature their approach to virtualization?
Chaubal: One thing we've done at VMware over the last year and a half is try to provide as much prescriptive guidance as we can. So a lot of securing of virtualization comes down to making sure you actually deploy it [properly].
So, one thing that we've done is created hardening guides that really aim to show customers how this can be done. That's proved to be very popular among our customers.
Not to get into too much detail, but one of the main issues is the fact that you have a virtualization layer that typically has a management interface in it. Then, you have the interface that goes into your virtual machines. People need to understand that this management layer needs to be completely separated from the actual production network.
That principle is manifested in different recommendations and scenarios when you plan a deployment and configure it. That's just one example where customers have been able to make use of our prescriptive guidance. They then architect something that is actually much more secure than what they might have built based on preconceived notions. I think that's one area where we are seeing success.
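The management-network separation Chaubal describes can be checked mechanically. A minimal sketch, using only Python's standard ipaddress module; the subnets are made up, and the real values would come from your own network plan:

```python
# Sketch: verify that the hypervisor management network does not
# overlap any production (VM) network. Subnets here are invented.
import ipaddress

def overlaps(net_a: str, net_b: str) -> bool:
    """True if the two CIDR ranges share any addresses."""
    return ipaddress.ip_network(net_a).overlaps(ipaddress.ip_network(net_b))

management_net = "10.10.0.0/24"
production_nets = ["192.168.10.0/24", "192.168.20.0/24"]

violations = [n for n in production_nets if overlaps(management_net, n)]
print("separated" if not violations else f"overlap with {violations}")
```

A check like this would run as part of deployment planning, before any host is put into production.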
Gardner: Let's go to LogLogic. Anton, give us some examples, actual companies or at least use-case scenarios, where the use of LogLogic, or the methodologies that it supports, has been brought to bear on virtualization -- to lower cost, increase performance, gain higher utilization, and so forth -- but without some of these risks.
Chuvakin: I'll give an example of a retail company that was using LogLogic for compliance, as well as for operational usage, such as troubleshooting their servers. This company, in a separate project, was implementing virtualization to convert some of their infrastructure to virtual machines.
At some point, the two projects collided. The company had log management in place mainly to track operations and satisfy PCI requirements, and it realized that it now had to collect logs not just from the physical infrastructure, but also from the virtual side that was being built.
What happened was that the logs from the virtual infrastructure were also streamed into LogLogic. LogLogic has the ability to collect any type of log, and in this case we used that capability to collect logs that were at the time not even natively supported or analyzed by LogLogic.
The customer understood that they had to collect the logs from the virtual platforms, and that LogLogic had the ability to collect any type of log. They started with a log collection effort, so that they could always go back and say, "We've got this data somewhere, and you can go and investigate it."
We also built up a package of content to analyze the logs as the collection effort got under way, so the logs would be ready for users. At LogLogic, we built and set up reports and searches to help them go through the data. So it really went in parallel: building up analytic content to make sense of the data, while the customer already had a collection effort that included logs from the virtual platform.
In this case, it was actually a great success story, because we used the part of the LogLogic infrastructure that doesn't rely on any preconceived notions of what the logs are. Then, they built on top of that to help pinpoint issues with their VMs, to see who accesses the platforms, what applications people use to manage the environment, and, basically, to track all sorts of interesting events in their virtual infrastructure.
I have to admit that this hasn't really been tested in a PCI audit yet, but I'm pretty confident that their PCI auditors will accept what they did for the virtual environment, and that they will satisfy the requirements of PCI, which calls for logging and monitoring, as well as the other requirements in the compliance mandate.
At the same time, while they are building it for that use, their analysts are already trying to do searches and look for certain things that might be out of order in their VM environment. An operational use case spontaneously emerged, and now they have not only their own ideas about what to look for, but also our content to do it.
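The collect-first, analyze-later pattern in this story can be sketched in a few lines: raw lines are archived with no format knowledge, and parsers are layered on afterward. The log format and field names below are invented for illustration, not LogLogic's actual formats:

```python
# Sketch of "collect first, analyze later": raw log lines are stored
# untouched, and parsers are added over the archive afterward.
import re

raw_store = []  # stands in for the log appliance's raw archive

def collect(line: str) -> None:
    """Always succeeds: raw collection needs no format knowledge."""
    raw_store.append(line)

VM_EVENT = re.compile(r"vm=(?P<vm>\S+)\s+action=(?P<action>\S+)")

def analyze(store):
    """A parser written later, run over already-collected raw data."""
    events = []
    for line in store:
        m = VM_EVENT.search(line)
        if m:
            events.append((m.group("vm"), m.group("action")))
    return events

collect("2008-09-22T10:00:01 host=esx1 vm=db-01 action=poweron")
collect("some log line in a format we cannot parse yet")
collect("2008-09-22T10:05:42 host=esx1 vm=db-01 action=migrate")

print(analyze(raw_store))  # [('db-01', 'poweron'), ('db-01', 'migrate')]
```

The unparseable middle line is the point: it is retained in the archive, so a parser added next quarter can still make sense of it.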
Gardner: You bring up a point here that we shouldn't overlook. This isn't something that you just build and walk away from. It requires ongoing refinement and tuning. The dynamic nature of virtualization, while perhaps automated in terms of allocating resources, is an overall process that needs to be managed in order for these business outcomes to be enjoyed.
Let's go back to Chris at Unisys. Tell us about the ongoing nature of virtualization. How do you keep on top of it? How do you keep it performing well, and perhaps even eke out more optimized utilization benefits?
Hoff: There's not a whole lot of difference from how you might apply the same rigor to non-virtualized infrastructure. It's not a monolithic, one-time event but, as I alluded to in a previous answer, the next extension should be evolution along the continuum. That notion of real-time infrastructure really does take in the concept of automating a lot of tasks.
Today, we are quite operationally inefficient in doing that, both in terms of practice and infrastructure utilization -- really making sure that our infrastructure, the compute and storage and everything that goes into it, becomes much more efficient in power, cost, utility, and flexibility.
When you unite all of those capabilities, what it's going to mean going forward is a much richer methodology and model for taking business process and instantiating it as an expression of policy within your infrastructure. So, you can say the things that are most important to your business are these processes and these services.
What you need to be able to do -- and ultimately what it means for automation and the efficiency problems -- is that the infrastructure needs to self-govern, self-provision, and re-provision. You need to be able to allocate cost back to your constituents, and it gets closer and closer to becoming a loose, but federated, group of services that can essentially play and interact in real time to service the needs of the business.
All the benefits that we get out of virtualization today are just the beginning, a springboard for what we are going to see in terms of automation, which is great. But we run into the same problem set as we pogo along this continuum, which is trying really hard to unite this notion of governance and making sure that just because you can, doesn't mean you should. In certain instances, the business processes and policies might prescribe that you not do some things that would otherwise be harmful from your perspective.
It's that delicate balance of security versus operational agility that we need to get much better at, and much more intelligent about, as we use our virtualization as an enabler. That's going to bring some really interesting and challenging things to the forefront in the way in which IT operates -- benefits and then differences.
Gardner: In the way that you were describing this continuum, it almost sounds like you were alluding to cloud computing, as it's being defined more and more -- and perhaps the “private cloud,” where people would be managing their internal enterprise IT resources from a cloud perspective. Am I overstating it?
Hoff: No, I don't think you're overstating it. I think that's a reasonable assertion and assumption based on what I am saying. The difficulty in using the "cloud" word is that it means a lot of things to lots of people. I think you brought up three definitions in your one sentence.
But the notion of being able to essentially utilize our resources pretty much anywhere, regardless of who owns the infrastructure, is something that's enticing and brings up a host of wonderful issues that make security people like me itchy.
If you read Nicholas Carr's book The Big Switch, and you think about utility or grid computing or whatever you want to call it -- the notion of being able to better utilize my resources, balance that with security, and be very agile -- it's fun times ahead. You are absolutely right. I was alluding to the C-word, yes.
Gardner: Okay. Charu at VMware, given that organizations are at different rates of adoption around virtualization -- some are just starting to test the waters -- but the end goal for some of these adopters could be this cloud-compute value, this fabric of IT value.
How are people getting started, and how should they get started in a way that sets them up for this longer-term payoff?
Chaubal: That's a very broad question, but it's an important one. You can go in and use virtualization to consolidate physical servers onto a smaller number of physical servers, and you get savings that way. But if that's the only approach you take, you might end up at a dead-end, or you might get off on a tangent somewhere.
What we find is that there is really a maturity curve when it comes to virtualization adoption, and one of the most important axes along that curve is, in a broad sense, your operational maturity.
When you are starting out, sure, go ahead and consolidate servers. That's a good way to get some quick wins, but you're rapidly going to come to a point where you need to start imposing an operational discipline and policies and procedures that perhaps you didn't have before.
Perhaps you had them, but they weren't all that rigidly adhered to or weren't really followed all the time. The most important thing is that you start thinking about this operational maturity, and then go to things like being able to standardize upon processes and standardize upon the way things are configured.
Whatever process you follow, make sure it goes through the right steps in terms of getting approved. There is a whole methodology around that, and it's one of the things we spend a lot of time on with our customers.
We have a graph where we look at how many servers are virtualized over time, and we like to see a steady, upward 45-degree angle to that curve. If somebody virtualizes too many too soon, you see that curve shoot up sharply. Then you get ahead of yourself, because you virtualized so much so quickly, and all the other issues that Chris alluded to come into play and might bog you down.
On the other hand, you could suffer the other extreme where you virtualize so slowly, that the curve is very shallow, and you end up leaving savings and benefits on the table, because you are just picking them up so slowly.
Gardner: Missed opportunities, right?
Chaubal: Right, exactly. The most important thing, when you are starting out, is to keep that in mind that you are not just installing a piece of software that will optimize what you have already. It's really a fundamental transformation in how you do things.
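Chaubal's 45-degree curve amounts to watching the period-over-period growth rate of virtualized servers. A toy sketch of that pace check, with arbitrary thresholds standing in for "too fast" and "too slow" (the real bounds would depend on your operational maturity):

```python
# Sketch: classify the pace of a virtualization rollout from a
# series of cumulative monthly counts. Thresholds are illustrative.

def rollout_pace(counts, too_fast=50, too_slow=5):
    """counts: cumulative servers virtualized, one entry per month."""
    deltas = [b - a for a, b in zip(counts, counts[1:])]
    avg = sum(deltas) / len(deltas)  # average servers added per month
    if avg > too_fast:
        return "too fast: operational discipline may lag behind"
    if avg < too_slow:
        return "too slow: savings left on the table"
    return "steady: roughly the 45-degree curve"

print(rollout_pace([0, 20, 45, 70]))     # steady growth
print(rollout_pace([0, 150, 300, 450]))  # sharp spike
```

The two extremes in the output correspond to the two failure modes described above: bogging down from virtualizing too quickly, and leaving benefits unclaimed by moving too slowly.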
Gardner: Okay, let's take the last question to Anton at LogLogic. How do you recommend people get started, particularly in striking this balance between not wanting to miss opportunities -- wanting to ramp up quickly and enjoy the benefits that virtualization provides -- and doing it in such a way that they get that visibility and analytics, and can set themselves up to be risk resistant, but also strategic in their outlook?
Chuvakin: I'll use the case that I just presented to illustrate the way to do it. As has happened with other technologies before virtualization, people will sometimes deploy it in a manner that really makes auditing and monitoring pretty hard. Then they have to go back and figure out what the technology is doing in terms of transparency and visibility.
I suggest that, as people deploy VMware and other virtualization platforms, they instantly connect those to their log-management tools, and that log collection starts day one.
Admittedly, most of those organizations won't know what to do with those logs, but having them is an important first step. Even if you don't know how to analyze the logs, what they mean, or what they're trying to tell you, you still have that repository to fall back on.
If you have to investigate an issue, an incident, or an operational issue in an environment, you still have an ability to go back and say, "Oh, something of that sort already happened to me once. Let's see what else occurred at the same time."
Even if you don't have the skills to delve into the full scope of how to analyze all the signals that virtual infrastructure is sending us, I would focus first on collecting the data and having it available for analysis. When you do that, your further steps, when you make sense of the data, will be much easier, much more transparent, and much more doable overall.
You will have to learn what the signals are, what information is being emitted by your virtual infrastructure, and then make conclusions on that. But, to even analyze the information, to make conclusions, and to figure out what's going on, you have to have the original data.
It's easier to collect the data early, because it's really not a big deal. You just send those logs to LogLogic or another log management system, and they are capable of doing that right away. Admittedly, you have to pick a system, such as LogLogic, that can support your virtualization infrastructure. Then you can build up your analysis, your understanding, and your true visibility -- the next layer of intelligence -- as you go. Don't wait until you can do the analysis; start collecting day one.
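The fall-back value of a raw log archive that Chuvakin describes (going back to see what else occurred at the same time as a known incident) is essentially a time-window query. A minimal sketch with invented timestamps and entries:

```python
# Sketch: given a raw log archive, pull every event within a time
# window around a suspicious moment. Entries here are invented.
from datetime import datetime, timedelta

logs = [
    (datetime(2008, 9, 22, 10, 0), "vm db-01 migrated to esx2"),
    (datetime(2008, 9, 22, 10, 2), "admin login from 10.0.0.5"),
    (datetime(2008, 9, 22, 14, 30), "vm test-01 powered off"),
]

def around(logs, when, minutes=5):
    """All entries within +/- `minutes` of the moment of interest."""
    window = timedelta(minutes=minutes)
    return [msg for ts, msg in logs if abs(ts - when) <= window]

incident = datetime(2008, 9, 22, 10, 1)
print(around(logs, incident))  # the 10:00 and 10:02 entries, not 14:30
```

Note that this query asks nothing about log format or meaning; it works on day-one collected data, which is exactly why collecting before you can analyze still pays off.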
Gardner: Right, visibility early and often. I appreciate your input. We have been talking about virtualization -- how to do it right, how to enjoy lower risk, and how to understand the security implications, while at the same time moving as aggressively as you can, because there are significant economic benefits.
Helping us understand virtualization in this context, we have been joined by Charu Chaubal, senior architect in technical marketing at VMware. Thank you, sir.
Chaubal: Thank you.
Gardner: Also Chris Hoff, chief security architect at Unisys. I really appreciate your input, Chris.
Hoff: Thanks, very much.
Gardner: And also, Dr. Anton Chuvakin, chief logging evangelist and also a security expert at LogLogic. Thank you, sir.
Chuvakin: Thank you so much for inviting me.
Gardner: I would like to thank our sponsor for this podcast, LogLogic. This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a BriefingsDirect podcast. Thanks, and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Transcript of BriefingsDirect podcast on the management and security challenges of virtualization. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about virtualization, and how to better improve management of virtualization, to gain better security using virtualization techniques, and also to find methods for compliance and regulation -- but without the pitfalls of complexity and mismanagement.
We're going to be talking about virtualization best practices with several folks who are dealing with this at several different levels. We're going to be hearing from VMware, Unisys, and LogLogic.
Let me introduce our panel today. First, we're joined by Charu Chaubal, senior architect for technical marketing, at VMware. Welcome, Charu.
Charu Chaubal: Thank you.
Gardner: We're also joined by Chris Hoff, chief security architect at Unisys. Hi, Chris.
Chris Hoff: Hi, how are you?
Gardner: Great. Also, Dr. Anton Chuvakin, chief logging evangelist and a security expert at LogLogic. Welcome to the show.
Dr. Anton Chuvakin: Hello. Thank you.
Gardner: Virtualization has certainly taken off, and this is nothing new to VMware. Organizations like Unisys are now doing quite a bit to help companies utilize, expand, and enjoy the benefits of virtualization. But virtualization needs to be done the correct way, avoiding the pitfalls. If you do it too tactically, without making it part of an IT lifecycle and without management, then the fruits and benefits of virtualization can be largely lost.
Before we get into what virtualization can do, what to avoid, and how to better approach it, I'd like to just take a moment and try to determine why virtualization is really hot and taking off in the market now.
Let's start with Chris Hoff at Unisys. Some of these technologies have been around for many years. What is it about this point in time that is really making virtualization so hot?
Hoff: It's the confluence of quite a few things, and we see this sort of event happen in information technology (IT) quite often. You have the practically perfect storm of economics, technology, culture, and business coming together at one really interesting point in time.
The first thing that comes to mind is when people think about the benefits. The reasons people are virtualizing are cost, cost savings and then cost avoidance, which is usually seconded by agility and flexibility. It’s also about being able to, as an IT organization, service your constituent customers in a manner that is more in line with the way business functions, which is, in many cases, quite a fast pace -- with the need to be flexible.
These things are contributing a lot to the uptake, not to mention the advent of a lot of new technology in both hardware and software, which is starting to enable some of this to be more realistic in a business environment.
Gardner: Now over to VMware. Charu, tell us how deep and wide virtualization has spread. It seems that people are using it in more and more ways, and in more and more places.
Chaubal: That's right. When x86 virtualization first started out in a big way, maybe 10 years ago, it was largely used in test and development environments. Over the last five years, it has definitely entered the production arena as well. We see more and more customers running even mission-critical applications on virtualization technologies.
Furthermore, we see it across the board in terms of customer size: everyone from the smallest customers to the very largest enterprises is expanding further and further with their virtual environments.
Gardner: Let's go to LogLogic. Tell me, Anton, what sort of security and preventative measures are you helping your customers with, in terms of gaining visibility and analytics about what's going on among these many moving parts? Many of these deployments are now in an automated mode, more so than before they were virtualized. What are some of the issues you are helping people deal with?
Chuvakin: You were exactly right about visibility into the environments. As people deploy different types of IT infrastructure, first physical and now virtual, there is always the challenge of figuring out what happens with those machines -- who is trying to connect to them, or even attack them -- and doing all of this at the same time, around the clock.
Adding virtualization to the technology people use, in as massive a way as is occurring now, brings up the challenge of how we know what happens in those environments. Is anybody trying to abuse them, just use them, or use them inappropriately? Is there a lack of auditability and control in those environments? Logs are definitely one of the ways -- I would say a primary way -- of gaining that visibility for most IT compliance needs, and virtualization is no exception.
As a result, as people deploy VMware and applications on virtual platforms, the challenge is knowing what actually happens on those platforms, in those virtual machines (VMs), and with the applications. Logging and LogLogic play a very critical role not only in collecting those bits and pieces, but in creating a big picture, a view of that activity across the organization.
Virtualization definitely solves some problems, but at the same time it brings up new things that people really aren't used to dealing with. For example, it used to be that if you monitored a server, you knew where the server was, how to monitor it, and what applications ran there.
In virtual environments, that is still true, but there's another layer: the server may move somewhere else, and you have to monitor where it was moved and where it is now -- basically, to keep monitoring as servers come up and down, disappear, and get moved.
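Following a server as it moves, as Chuvakin describes, comes down to replaying VM lifecycle events from the logs into a current-location map. A minimal sketch; the event names and host names are hypothetical, not any vendor's event schema:

```python
# Sketch: maintain a current-location map for VMs from a stream of
# lifecycle events, so monitoring can follow a server as it moves.

def track(events):
    """Replay create/migrate/remove events into {vm: host or None}."""
    location = {}
    for vm, action, host in events:
        if action in ("create", "migrate"):
            location[vm] = host
        elif action == "remove":
            location[vm] = None  # gone now, but its history is retained
    return location

events = [
    ("db-01", "create", "esx1"),
    ("db-01", "migrate", "esx2"),
    ("tmp-07", "create", "esx1"),
    ("tmp-07", "remove", None),
]
print(track(events))  # {'db-01': 'esx2', 'tmp-07': None}
```

The removed VM keeps an entry rather than vanishing: a machine that "might be there one day, but be gone tomorrow" still has to be answerable for what it did while it existed.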
Gardner: Now, Chris at Unisys, when you're dealing with customers, based on what we've heard about this expansion of virtualization, you're dealing with it on an applications level, and also on the infrastructure and server level.
What’s more, some folks are now getting into desktop virtualization infrastructure and delivering whole desktop interfaces out to end-user devices. This impacts not just a server. We're talking about network devices and storage devices. This is a bit more than a tactical issue. It really starts getting strategic pretty quickly.
Hoff: That's absolutely correct. If you really look at virtualization as an enabling technology or platform, and you look out over the next three years at large companies' strategic plans, you'll notice that there is a large trend toward what you might call "real-time infrastructure."
The notion here is how you take this enabling technology and the benefits of virtualization and leverage them to provide automation and re-purposing. You have to deal with elements and issues that relate to charge-back for assets, as IT becomes more of a utility service.
If we look further out from there, we look at the governance issues of what it means to not really focus on hardware anymore, or even applications -- but on service and service levels. It gets a lot more strategic at times, played out all along the continuum.
While we focus virtualization on the notion of infrastructure and technology, what's really starting to happen now -- and what's important with the customers that we deal with -- is being able to unite both business process and business strategy, along with the infrastructure and the architecture that support it.
So we're a little excited and frothed up as it relates to all the benefits of virtualization today, and the bigger picture is even more exciting and interesting. That's going to fundamentally continue to cause us to change what we do and how we do it, as we move forward. Visibility is very important, but understanding the organizational and operational impacts that real-time infrastructure and virtualization bring, is really going to be an interesting challenge for folks to get their hands around.
Gardner: Now, Charu at VMware, you obviously are building out what you consider the premier platform and approach to virtualization technically. You've heard, obviously, the opportunity for professional services and methodologies for approaching this, and you have third parties like LogLogic that are trying to provide better visibility across many different systems and devices.
How are you using this information in terms of what you bring to the management table for folks who are moving from, say, tactical to more strategic use of virtualization?
Chaubal: A lot of customers are expanding their virtualization so much now, to the point where they're hitting some interesting challenges that they maybe wouldn't have hit before. One great example is around compliance, such as Payment Card Industry Data Security Standards (PCI) compliance. There are a lot of questions right now around virtualizing those systems that process credit card holder data.
They're asking, "If I do this, am I going to be compliant with PCI? Is this something that's a realistic possibility? If it is, how do I go about demonstrating this to an auditor?"
This is where partners like LogLogic come into play, because they have the tools that can help achieve this. We believe that VMware provides a compliance-ready type of platform, so it is something you can achieve compliance with. But, in order to demonstrate and maintain that compliance, it's useful to have these tools from partners that can help you do that.
Gardner: Now, Anton at LogLogic, you're able to examine a number of different systems, gather information, correlate that information, do analytics, and provide a picture of what should be happening. Or, when something is not happening, you can look for the reasons why and look for aberrant or unusual behavior. So let's address security a little bit.
What are some of the challenges in terms of security when you move from a physical environment for compute power and resources to a virtualized environment? Then second, what about the mixture? It is obviously going to be both physical and virtualized instances of infrastructure and applications. Tell us about the security implications.
Chuvakin: I'll follow the same logic I used for our recent webcast about virtualization security, in which I presented a full view of the things that are the same and the things that are different in virtualized environments. I'll use the same structure, because some people who get too frothy about virtualization, as Chris put it, stick to "virtualization changes everything." That is sometimes used as an excuse not to do things that you should continue doing in a virtualized environment.
Let's start with what stays the same. When you migrate from a physical to a virtual infrastructure, you still have servers, applications running on those servers, and people managing those servers. That means you should keep the same monitoring, auditing, and security technologies you use today. You shouldn't stop. You shouldn't throw away your firewalls. You shouldn't throw away your log analysis tools, because you still have servers and applications.
They might be easier to monitor in virtual environments; it might sometimes be harder. But you shouldn't abandon things that are working for you in the physical environment just because virtualization changes a few things. The fact that you have applications and servers, and that they serve business purposes, shouldn't stop you from continuing the useful things you're doing now.
Now, an additional layer on top of what you already have adds the new things that come with virtualization. The fact that this server might be there one day, but be gone tomorrow -- or not be not there one day and be built up and used for a while and then removed -- definitely brings the new challenges to security monitoring, security auditing in figuring out who did what where.
The definition of "who" didn't change. It's still a user, but what and where definitely did change. I mean, if it was done on a certain server, in virtual environment it might not be a server -- it might be a virtual image, which adds additional complexities
There are also new things that just don't have any occurrence in the physical environment -- for example, a rogue VM, a VM that is built by somebody who is not authorized to run VMs. It might be the end user who actually has his own little mini infrastructure. It brings up all sorts of forensic challenges that you have now solved. You don't just investigate a machine. You investigate a machine with a virtual platform, with another server on top, or another desktop on top.
This is my view of things that are the same that you should continue doing and things that are new that you should start learning how to audit and how to analyze the activity in the virtual environments, as well as how to do forensics, if what you have is a machine with potential a rogue VM.
Gardner: How about you, Chris at Unisys, how do you view implications for security and risk mitigation when it comes to moving increasingly into virtualized environments?
Hoff: I have to take a pretty pragmatic approach. The reality is that there are three conversations and three separate questions that need to be addressed, when you're talking about security in virtualized environments.
Unfortunately, what usually happens is that all three of them are combined into one giant question, which tends to lead to more confusion. So I like to separate the virtualization and security questions into three parts.
One of them is securing virtualization, and understanding what the impacts are on your architecture, your infrastructure, and your business process and models, when you introduce this new virtualization layer. That's really about securing the underlying virtualization platforms and understanding what happens and what changes when you introduce that, assuming that you have a decent understanding of what that means, and how that will ultimately flow down operationally.
The second point or question to address is one of virtualizing security, which is actually the operational element of, "What does it mean, and how do I go about taking what I might do in the physical world, and replicate that and/or even improve it in the virtual world?"
That's an interesting question, assuming that you have a good understanding of architecture and things that matter most to you, and how you might protect them, or how you might not be doing that. You may find several gaps today in your ability to actually do what you do in the physical world.
The third element is security through virtualization. Assuming that I have a good architectural blueprint, that I understand the impacts and the models, who and what changes operationally, how I have to go about securing things, and what benefits I get out of virtualization --
How do I actually improve my security posture by using these platforms and this technology? If you look at that, if you look at it in that way, you really are able to start dealing with the issues associated with each category. You could probably guess that if you mixed all three of them up, you could go down one path, and very easily be distracted by another.
When we break out the conversations with customers like that, it always comes back to a very basic premise that we seem to have forgotten in our industry. Despite all the technology, despite all the tools, and all of the things that go blinky-blink at night, the reality is that this comes down to being able to appropriately manage risk. That starts with understanding the things that matter to you most and using risk assessment frameworks and processes.
In a gross analogy, when you go to a grocery store, you take time to pack your frozen goods in one bag, and your canned goods and your soft goods in other bags. You use this compartmentalization, understanding the impact of all the wonderful mobility, balanced with compliance and security needs.
If you got home, and you've got canned goods in with your fruit, the reality is that you've not done a good job of compartmentalizing and understanding what the impact of one good might have on the other.
The same thing applies in the virtual world. If you don't take the time to go back to the basics and understand the impact of the infrastructure and the changes, you're going to be in a world of hurt later, even if you get the cost benefits and all the wonderful agility and mobility.
We really approach it pragmatically, in a rational manner, so that people understand both the pros and the cons of virtualization in their environments.
Gardner: We've determined that virtualization is quite hot. It's ramping up quickly. A number of studies have shown a 50-70 percent increase in the use of virtualization in the last few years. Projections continue for very fast-paced growth.
We also see a number of organizations using multiple vendors, when it comes to virtualization. We've also discussed how security and complexity apply to this, and that you need a comprehensive or contextual view of what's going on with your systems -- particularly if you have a mixture of physical and virtual.
Let's look at some examples of how this has been mitigated, how the risk has actually been decreased, and how the fruits, if you will, of virtualization are enjoyed without the pitfalls.
Let's first go to Charu at VMware. Can you offer some examples of how people have used virtualization, done it the right way, avoided some of these pitfalls, and gained the visibility and analytics that helped with their matured approach to virtualization?
Chaubal: One thing we've done at VMware over the last year and a half is try to provide as much prescriptive guidance as we can. So a lot of securing of virtualization comes down to making sure you actually deploy it [properly].
So, one thing that we've done is created hardening guides that really aim to show customers how this can be done. That's proved to be very popular among our customers.
Not to get into too much detail, but one of the main issues is the fact that you have a virtualization layer that typically has a management interface in it. Then, you have the interface that goes into your virtual machines. People need to understand that this management layer needs to be completely separated from the actual production network.
That principle is manifested in different recommendations and scenarios when you plan a deployment and configure it. That's just one example where customers have been able to make use of our prescriptive guidance. Then they architect something that is actually much more secure than they possibly would have with some preconceived notions they might have had. I think that's one area where we are seeing success.
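The separation principle Chaubal describes -- keeping the virtualization management interface off the production network -- can be audited mechanically. Below is a simplified, hypothetical configuration check; the interface names and VLAN IDs are invented for illustration, and a real audit would pull this data from the platform's management API rather than a hard-coded list.

```python
# Hypothetical audit of management/production network separation:
# flag any management interface that shares a VLAN with production traffic.

interfaces = [
    {"name": "vmnic0", "role": "production", "vlan": 10},
    {"name": "vmnic1", "role": "management", "vlan": 10},  # misconfigured!
    {"name": "vmnic2", "role": "management", "vlan": 99},  # properly isolated
]

def audit_separation(ifaces):
    """Return names of management interfaces sitting on a production VLAN."""
    prod_vlans = {i["vlan"] for i in ifaces if i["role"] == "production"}
    return [i["name"] for i in ifaces
            if i["role"] == "management" and i["vlan"] in prod_vlans]

print(audit_separation(interfaces))  # ['vmnic1']
```

A non-empty result means the management layer is reachable from the production network, which is exactly the deployment mistake the hardening guidance warns against.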
Gardner: Let's go to LogLogic. Anton, give us some examples, actual companies or at least use-case scenarios, where the use of LogLogic, or the methodologies that it supports, have been brought to bear on virtualization -- to lower cost, increase performance, gain higher utilization, and so forth -- but without some of these risks.
Chuvakin: I'll give an example of a retail company that was using LogLogic for compliance, as well as for operational uses, such as troubleshooting their servers. This company, in a separate project, was implementing virtualization to convert some of their infrastructure to virtual machines.
At some point, those two projects collided. The company mainly had log management in place to track operations and to satisfy PCI requirements, and when it met the virtualization project, the company realized it now had to collect logs not just from the physical infrastructure, but also from the virtual side that was being built.
What happened was that the logs from the virtual infrastructure were also streamed into LogLogic. LogLogic has the ability to collect any type of log, and in this case we used that capability to collect the logs, which at the time were not even natively supported or analyzed by LogLogic.
The customers understood that they had to collect the logs from the virtual platforms, and that LogLogic has the ability to collect any type of log. They started first with a log collection effort, so that they could always go back and say, "We've got this data somewhere, and you can go and investigate it."
We also built up a package of content to analyze the logs as they were starting their collection efforts, so that the logs would be ready for users. At LogLogic, we built and set up reports and searches to help them go through the data. So it was really going in parallel: building up analytic content to make sense of the data while the customer already had a collection effort that included logs from the virtual platform.
In this case, it was actually a great success story, because we used the part of the LogLogic infrastructure that doesn't rely on any preconceived notions of what the logs are. Then they built on top of that to help pinpoint the issues with their VMs, to see who accesses the platforms, what applications people use to manage the environment, and, basically, to track all sorts of interesting events in their virtual infrastructure.
I have to admit that it hasn't really been tested in a PCI audit yet, but I'm pretty confident that their PCI auditors will accept what they did for the virtual environment, and that they will satisfy the requirements of PCI, which calls for logging and monitoring.
At the same time, while they are building it for that use, their analysts are already trying to do searches and look for certain things that might be out of order in their VM environment. An operational use-case spontaneously emerged, and now they have not only their own ideas for what to look for, but also our content to do that.
Gardner: You bring up a point here that we shouldn't overlook. This isn't something that you just build and walk away from. It requires ongoing refinement and tuning. The dynamic nature of virtualization, while perhaps automated in terms of allocating resources, is an overall process that needs to be managed in order for these business outcomes to be enjoyed.
Let's go back to Chris at Unisys. Tell us about the ongoing nature of virtualization. How do you keep on top of it? How do you keep it performing well, and perhaps even eke out more optimized utilization benefits?
Hoff: There's not a whole lot of difference in terms of how you might apply the same rigor to non-virtualized infrastructure. It's not a monolithic, single-time event; as I alluded to in a previous answer, the next extension should be evolution along the continuum. That notion of real-time infrastructure really takes in a lot of tasks.
Today, we are quite operationally inefficient in doing that, both from the perspective of practice and of infrastructure utilization -- really making sure that our infrastructure, the compute and storage, and everything that goes into our infrastructure become much more efficient in power, cost, utility, and flexibility.
When you unite all of those capabilities, what it's going to mean going forward is a much richer methodology and model for taking business process and instantiating it as an expression of policy within your infrastructure. So, you can say the things that are most important to your business are these processes and these services.
What you need to be able to do, and ultimately what it means for automation and the efficiency problems, is that the infrastructure needs to self-govern, self-provision, and re-provision. You need to be able to allocate cost back to your constituents, and it gets closer and closer to becoming a loose, but federated, group of services. It can essentially play and interact in real time to service the needs of the business.
All the benefits that we get out of virtualization today are just the beginning, a kind of springboard for what we are going to see in terms of automation, which is great. But we run right into the same problem set as we pogo along this continuum, which is trying really hard to unite this notion of governance and making sure that just because you can, doesn't mean you should. In certain instances, the business processes and policies might prescribe that you not do some things that would otherwise be harmful from your perspective.
It's that delicate balance of security versus operational agility that we need to get much better at, and much more intelligent about, as we use our virtualization as an enabler. That's going to bring some really interesting and challenging things to the forefront in the way in which IT operates -- benefits and then differences.
Gardner: In the way that you were describing this continuum, it almost sounds like you were alluding to cloud computing, as it's being defined more and more -- and perhaps the “private cloud,” where people would be managing their internal enterprise IT resources from a cloud perspective. Am I overstating it?
Hoff: No, I don't think you're overstating it. I think that's a reasonable assertion and assumption based on what I am saying. The difficulty in using the "cloud" word is that it means a lot of things to lots of people. I think you brought up three definitions in your one sentence.
But the notion of being able to essentially utilize our resources pretty much anywhere, regardless of who owns the infrastructure, is something that's enticing and brings up a host of wonderful issues that make security people like me itchy.
If you read Nicholas Carr's book The Big Switch, and you think about utility or grid computing or whatever you want to call it -- the notion of being able to better utilize my resources, balance that with security, and be very agile -- it's fun times ahead. You are absolutely right. I was alluding to the C-word, yes.
Gardner: Okay. Charu at VMware, given that organizations are at different rates of adoption around virtualization -- some are just starting to test the waters -- but the end goal for some of these adopters could be this cloud-compute value, this fabric of IT value.
How are people getting started, and how should they get started in a way that sets them up for this longer-term payoff?
Chaubal: That's a very broad question, but I think it's important to realize that you can go in and use virtualization simply to consolidate physical servers onto a smaller number of physical servers, and you get savings that way. But if that's the only approach you take, you might end up at a dead-end, or you might get off on a tangent somewhere.
What we find is that there is really a maturity curve when it comes to virtualization adoption, and one of the most important axes along that curve is, in a broad sense, your operational maturity.
When you are starting out, sure, go ahead and consolidate servers. That's a good way to get some quick wins, but you're rapidly going to come to a point where you need to start imposing operational discipline and policies and procedures that perhaps you didn't have before.
Perhaps you had them, but they weren't all that rigidly adhered to or weren't really followed all the time. The most important thing is that you start thinking about this operational maturity, and then go to things like being able to standardize upon processes and standardize upon the way things are configured.
For any kind of process you follow, make sure it goes through the right steps in terms of getting approved. There is a whole methodology around that, and that's one of the things we spend a lot of time on with our customers.
We have a graph where we look at how many servers are virtualized over time, and we like to see a steady, upward, 45-degree angle to that curve. If somebody virtualizes too many too soon, you see that curve shoot up sharply. Then you can trip yourself up, because you virtualized so much so quickly, and all these other issues that Chris alluded to come into play and might bog you down.
On the other hand, you could suffer the other extreme where you virtualize so slowly, that the curve is very shallow, and you end up leaving savings and benefits on the table, because you are just picking them up so slowly.
Gardner: Missed opportunities, right?
Chaubal: Right, exactly. The most important thing, when you are starting out, is to keep that in mind that you are not just installing a piece of software that will optimize what you have already. It's really a fundamental transformation in how you do things.
Gardner: Okay, let's take the last question to Anton at LogLogic. How do you recommend people get started, particularly in reaching this balance between not wanting to miss opportunities, wanting to be able to ramp up quickly and enjoy the benefits that virtualization provides, but doing it in such a way that they get that visibility and analytics, and can set themselves up to be risk resistant, but also strategic in their outlook?
Chuvakin: I'll use the case that I just presented to illustrate the way to do it. As has happened with many technologies before virtualization, people will sometimes deploy it in a manner that really makes auditing and monitoring pretty hard. Then they have to go back and figure out what the technologies are doing in terms of transparency and visibility.
I suggest that, as people deploy VMware and other virtualization platforms, they instantly connect those to their log-management tools, and that log collection starts day one.
Admittedly, most of those organizations would not know what to do with those logs, but having them as a first step will be important. Even if you don't know how to analyze the logs, what they mean, or what they're trying to tell you, you still have that repository to fall back on.
If you have to investigate an issue, an incident, or an operational issue in an environment, you still have an ability to go back and say, "Oh, something of that sort already happened to me once. Let's see what else occurred at the same time."
Even if you have no skills to delve into the full scope of how to analyze all the signals that virtual infrastructure is sending us, I would focus first on collecting the data and having the data available for analysis. When you do that, your further steps, when you make sense of the data, will be much easier, much more transparent, and much more doable overall.
You will have to learn what the signals are, what information is being emitted by your virtual infrastructure, and then make conclusions on that. But, to even analyze the information, to make conclusions, and to figure out what's going on, you have to have the original data.
It's easier to collect the data early, because it's really not a big deal. You just send those logs to LogLogic or another log management system, which is capable of doing that right away. Now, admittedly, you have to pick a system, such as LogLogic, that can support your virtualization infrastructure, and then you can build up your analysis, your understanding, and your true visibility -- the next layer of intelligence -- as you go. Don't try to do the analysis right away, but start collecting on day one.
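Chuvakin's "collect on day one" advice is cheap to follow because standard syslog forwarding works before any analysis exists. A minimal sketch using Python's standard library is below; the collector hostname is an assumption (LogLogic appliances, like most log managers, accept standard syslog input), and the event text is invented for illustration.

```python
# Minimal sketch: forward virtualization audit events to a central
# log-management system over syslog, starting on day one.
import logging
import logging.handlers

def make_vm_logger(collector_host="logs.example.com", port=514):
    """Build a logger that ships events to a syslog collector via UDP."""
    logger = logging.getLogger("virt-audit")
    logger.setLevel(logging.INFO)
    handler = logging.handlers.SysLogHandler(address=(collector_host, port))
    handler.setFormatter(logging.Formatter("virt-audit: %(message)s"))
    logger.addHandler(handler)
    return logger

# Even before you know how to analyze them, record the raw events:
# logger = make_vm_logger()
# logger.info("VM web-01 powered on by user jsmith")
```

The point is that the repository accumulates from the start; the reports and searches that make sense of it can be layered on later, as in the retail example above.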
Gardner: Right, visibility early and often. I appreciate your input. We have been talking about virtualization -- how to do it right, how to enjoy lower risk, understanding security implications, but at the same time moving as aggressively as you can, because there are significant economic benefits.
Helping us understand virtualization in this context, we have been joined by Charu Chaubal, senior architect in technical marketing at VMware. Thank you, sir.
Chaubal: Thank you.
Gardner: Also Chris Hoff, chief security analyst at Unisys. I really appreciate your input, Chris.
Hoff: Thanks, very much.
Gardner: And also, Dr. Anton Chuvakin, chief logging evangelist and also a security expert at LogLogic. Thank you, sir.
Chuvakin: Thank you so much for inviting me.
Gardner: I would like to thank our sponsor for this podcast, LogLogic. This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a BriefingsDirect podcast. Thanks, and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Transcript of BriefingsDirect podcast on the management and security challenges of virtualization. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Oracle and HP Explain History, Role and Future for New Exadata Server and Database Machine
Transcript of BriefingsDirect podcast recorded at the Oracle OpenWorld Conference in San Francisco the week of Sept. 22, 2008.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to a special BriefingsDirect Podcast recorded at the Oracle OpenWorld Conference in San Francisco. We are here the week of Sept. 22, 2008. This HP Live! Podcast is sponsored by Hewlett-Packard, and distributed through the BriefingsDirect Network.
Today we are going to discuss a large and impactful product announcement at Oracle OpenWorld that took place on Sept. 24: the introduction of appliances in a cooperative relationship between HP and Oracle to create some of the highest-performing databases and data warehouses in history. We are going to be talking about the Oracle Exadata Storage Server and -- when put together in a very impressive configuration -- what becomes the HP Oracle Database Machine.
Here to help us understand how these impressive server configurations and high-speed, extreme-performance databases came together, we are joined by Rich Palmer, the director of technology and strategy for industry standard servers at HP. We are also joined by Willie Hardie, vice president of Oracle database product marketing. Welcome to the show, Willie.
Willie Hardie: Good to be here, Dana.
Gardner: Tell me a little bit about this very momentous announcement. This has been several years in the making, but it’s not just a product announcement. It seems like an architectural shift, and also an alliance and partnership shift in terms of the cooperation between a hardware provider, in this case HP, and Oracle, until now purely a software company.
Hardie: That’s an excellent question. So what we actually announced this week is the Oracle Exadata Storage Server. Now, the Oracle Exadata Storage Server is an intelligent storage device. We’ve basically taken industry standard hardware and storage components from HP, and we’ve combined that with smart intelligence software from Oracle that allows us to offload query processing from the database servers to the storage servers.
So now they can do a lot of the work for us, stripping out just the rows and columns that we require, and pushing less data back up through much wider networks.
Gardner: For those of us who are not computer scientists, but are nonetheless interested in the outcomes, architecturally we are putting the intelligence that we usually have in a database server in very close proximity to the data storage itself, connecting that through a very fat pipe in the form of InfiniBand. And, in essence, parallel processing comes to bear, because of the proximity. Is that correct?
Hardie: Absolutely. What we are able to do for the first time ever is use these storage devices to do the query processing itself. So the more of these storage servers we put into our configuration, the more of the workload they can take on that traditionally is done at the database server.
Gardner: Let’s go to Rich Palmer at HP. Tell us a little bit about the history. How did this come about, and what is it that HP has been doing to improve upon the performance of this long-term database lineage?
Rich Palmer: If you look at HP and Oracle as partners in this industry, we have a long-standing history together. We have more than 50 reference configurations that we do with industry standard hardware and Oracle solutions, which we've been delivering for many years now.
Going back all the way to the introduction of Oracle Real Application Clusters (RAC), and even before RAC was introduced, the history of the two companies really stems from two leadership positions. HP does more servers on Oracle than any other company. Oracle does more data warehouses than any other company. You bring those two forces together, and you get a very strong, formidable entry into this data warehouse appliance market.
Where HP and Oracle really started this discussion stems back a couple of years. It became a trend in the market to bring data and server processing power closer together, and that trend has escalated over the last couple of years -- especially as data has been growing at exponential rates every single year. What we found is that you cannot push that much data over a traditional storage fabric. This new technology allows us to do that.
Gardner: And we are talking about very large data sets, of terabytes and larger, right?
Palmer: Enormous data sets. Let me give you an example that I think we are all very familiar with. We all use cell phones today. Every one of those cell phone calls is a database record somewhere, be it on AT&T's database or T-Mobile's database or whomever's database -- they store that data. Now, when they are storing that data, sometimes they are going to want to move it. If you have a narrow pipe to push that data down, you're bringing back enormous amounts of extraneous data that you don't need; all you need is what you're looking for in the query.
So this process allows us to push just the query information across that pipe. Less data over the pipe, a wider pipe, and your performance goes up dramatically.
Gardner: Okay, so let’s unpack this a little bit. We’ve established that the marketplace is demanding better performance, particularly in the use of large data sets, 1 terabyte and larger up to 10 terabytes, and size often. That requires the movement of very large sets of data, and the inhibitor here was the storage’s physical capacity, and ability to deliver the data.
So you’ve re-architected, and we brought together two companies to work together. This brings the question: Why hasn’t the hardware and software duality gotten closer before this? Why now?
Palmer: This market is constantly evolving to a state where you have to bring software tools to the table, and you have to bring high-performance hardware to the table. The evolution of both of those has hit at the perfect time in the last year.
Oracle has been developing the software code for several years now, and HP has been working on the hardware side of this equation, to bring the two forces together at this time. We are using industry standard technology, so it's not that we are the only hardware guys out there with InfiniBand, and InfiniBand is an evolving technology. But the performance of InfiniBand is at a point now where we can actually leverage it, using Oracle software to offload the storage processing from the database server. Those are the two key components -- it's not just the hardware, and it's not just the software. You have to marry the two together.
So why hasn’t it been done in the past? Well, it has to some degree, there are others who had tried to do this, but they haven’t done both. They haven’t been able to achieve both facets, and that’s really why this is the right product at the right time.
Gardner: Okay, Willie, let's get into the actual product itself. Explain to me what the Oracle Exadata Storage Server actually is. What are we talking about?
Hardie: The Oracle Exadata Storage Server is basically comprised of an industry standard HP DL180 storage server. Inside this storage server we have 12 3.5-inch SATA drives, two Intel quad-core processors, 8 gigabytes of memory, two InfiniBand network connections, and dual power supplies.
So in this storage server we have a lot of storage capacity, we have a lot of processing power, and we have a lot of network bandwidth. Then the real secret sauce here is this intelligence software from Oracle that’s installed into each and every one of those devices. It’s this intelligent software that enables us to offload this query processing, which makes the Oracle Exadata Storage Server really unique.
Gardner: Okay, let's dumb this down a little bit in simplistic terms. Instead of large data sets moving from storage to the database and back, what happens differently now?
Hardie: What happens differently now is that, because we are offloading the query processing to the storage server, the storage server can strip out the columns we don't need, strip out the rows we don't need, and return a subset of the data back up through this wide InfiniBand network. That's what makes the difference. We are returning a much smaller data set up through this network, and the database server can finish off that query processing much faster than it ever could previously.
Palmer: One of the other values we achieve here is certainly in the data passing back and forth -- less data over a wider pipe -- so you're going to get exponentially better performance. Now, at the storage servers, you've taken the processing power of doing the query right to the disks. In every one of these storage servers you have eight cores -- these are Intel quad-core processors, two of them in each server -- so you have eight cores on the input/output (I/O) path directly to the disk.
So there is no external I/O going to your disks. Traditionally, you've had to go outside of the server, to disks that are across the fabric -- and everyone else is sharing that fabric.
So you have many people sharing a fabric, versus a dedicated fabric inside the server. It's a copper-to-copper connection inside the server. Those disks are right on top of the processors. That is really the essence of it -- you can pull the data off the disks rapidly because it's all so much faster. As Willie indicated, you can strip out all the unnecessary data and pass a much smaller data set over a much wider pipe back to your database servers. There are so many levels of performance improvement here.
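The offload Hardie and Palmer describe is, in database terms, pushing the filter (rows) and projection (columns) down to the storage tier so only the reduced result travels over the network. Here is a deliberately simplified, hypothetical model of that idea; the call-record table and column names are invented for illustration and bear no relation to Exadata's actual internals.

```python
# Simplified model of storage-side query offload: the storage tier
# applies the row filter and column projection before returning data,
# instead of shipping whole tables to the database server.

CALL_RECORDS = [  # what the storage cell holds
    {"caller": "555-0101", "callee": "555-0199", "secs": 42, "tower": "A"},
    {"caller": "555-0102", "callee": "555-0101", "secs": 310, "tower": "B"},
    {"caller": "555-0101", "callee": "555-0103", "secs": 5, "tower": "A"},
]

def storage_scan(predicate, columns):
    """Filter rows and strip columns at the 'storage' tier."""
    return [{c: row[c] for c in columns}
            for row in CALL_RECORDS if predicate(row)]

# The database server receives only the reduced result set:
result = storage_scan(lambda r: r["caller"] == "555-0101", ["callee", "secs"])
print(result)  # [{'callee': '555-0199', 'secs': 42}, {'callee': '555-0103', 'secs': 5}]
```

Only two columns of two rows cross the "pipe" instead of the full four-column, three-row table -- the same less-data-over-a-wider-pipe effect described above, just at toy scale.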
Gardner: And to your point on the secret sauce -- you are also taking advantage of all those cores via multiple threads, and the software has been deeply tuned to take advantage of those threads concurrently.
Hardie: Oh, absolutely, and Rich touched on that as well.
Palmer: When we add more Exadata Storage Servers into our configuration, we can take advantage not just of the additional storage capacity, but of the additional processing capability at that storage layer, which is a big, big difference.
Gardner: And at the announcement here, Oracle Chairman and CEO Larry Ellison described use cases where the improvement was typically 10x, and up to 72x, over what has been the industry benchmark.
Hardie: Absolutely. When you cut away the technology and look at this from a business perspective, what it means for a business user is that when you're accessing those data warehouses Rich was talking about earlier -- like a call-data-record warehouse with billions of rows -- your queries are going to run much, much faster than they ever did previously. Not only will they run faster, you can have many more queries, and more long-running queries, concurrently. That's what is going to make the big difference.
So when we hear of customers talking about getting 20x performance, or 30x performance in one particular instance, or, in one particular query, 72x performance -- those are extreme performance improvements by anybody's measure.
Gardner: Okay, so we have this engine, the Oracle Exadata Storage Server. We also have a new announcement, the HP Oracle Database Machine. Tell me how one relates to the other.
Palmer: The HP Oracle Database Machine is a single rack that contains everything you need to run a large data warehouse. It contains eight ProLiant servers running Oracle Database 11g and RAC. It has four InfiniBand network switches and it has 14 of these Oracle Exadata Storage Servers that we talked about earlier. So in a single unit you have everything you need, ready to load up your data and start running your business queries right away.
Gardner: Tell us a little bit, Rich, about this 42U rack configuration, and why it’s right for the market now.
Palmer: Well, if you look at the market in data warehousing, the appliance type of delivery is a much simpler deployment of hardware and software configurations, and it is emerging as a high-growth area in data warehousing. So with that market trend, HP and Oracle have been able to come together and put everything customers need in one box. We put it at the customer’s site, and we do that on a global basis.
If you look at HP, one of the strengths HP brings to this relationship is our ability to distribute and deliver globally. We build all of these database machines in regions around the globe. They are not just built here in the United States; they are built in the United States, in Singapore, and in Scotland, and then delivered to those regions on a worldwide basis.
So this ability of HP to build the product from the ground up to an exact specification, deliver it to the customer, install it at the customer's site, and then have Oracle come in and tune the software to make sure it's optimally configured -- that is a no-lose environment. We have the ability here to deliver an appliance-like stack of hardware, put the right software set on that hardware, and target a customer's need for simplicity, high performance, and data reliability -- all in one box.
Gardner: Okay, we've described the marketplace need, the size of data pushing the envelope, and the re-architecting to address it. We've described the building block, which is the Exadata Storage Server, and then the configuration, which is the racked Database Machine. Now, what kinds of organizations are going to be interested in bringing this in as a forklift upgrade -- dropping it in pre-configured and optimized -- and what are they going to do with it? Is this for business intelligence (BI)? Is this simply for managing scale? What are the speeds this now provides going to do for companies to improve, or to change, how they do business?
Hardie: The organizations that are going to be interested in the Oracle Exadata Storage Server and the HP Oracle Database Machine are those primarily interested in large data warehouses -- and by large data warehouses we're talking terabytes, into the petabytes and beyond. Now, if you look at the organizations that are typically dependent on very large data warehouses, it's the organizations Rich mentioned earlier. The telcos are an obvious one, with call data records. Retail organizations are very much dependent on analyzing point-of-sale (POS) transactions. And you can look at other organizations, like trading systems, where massive amounts of transactions flow through on a daily basis.
Gardner: Especially these days.
Hardie: Absolutely. It is really important to understand what's going on with these transactions, and to make informed business decisions. The beauty of this is that you have a completely scalable infrastructure from a storage point of view. But more importantly, you've got a completely scalable infrastructure from a query-performance point of view. As you store more call data records, more POS transactions, more stock transactions into these systems, you're not going to deteriorate your query performance at all. The more hardware, the more storage servers you put into these systems, the better your performance is going to be.
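Willie's scaling claim -- that growing the warehouse doesn't degrade queries as long as storage cells grow with it -- can be sketched with a toy model. This is purely illustrative; the per-cell scan rate below is an assumed round number, not a measured figure:

```python
# Toy model: data is striped across all storage cells and each cell
# filters its own share, so scan time stays flat when data and cells
# grow together. The per-cell rate is an assumption for illustration.
ASSUMED_GB_PER_SEC_PER_CELL = 1.0  # hypothetical per-cell scan bandwidth

def scan_seconds(data_gb: float, cells: int) -> float:
    """Wall-clock time for a full scan spread evenly across the cells."""
    return data_gb / (cells * ASSUMED_GB_PER_SEC_PER_CELL)

baseline = scan_seconds(data_gb=1_000, cells=14)  # one rack's worth of cells
doubled = scan_seconds(data_gb=2_000, cells=28)   # twice the data, twice the cells

# Same scan time even though the warehouse doubled in size.
print(f"baseline: {baseline:.1f} s, doubled: {doubled:.1f} s")
```

In a shared-fabric design, by contrast, the denominator is fixed by the fabric's bandwidth, so doubling the data roughly doubles the scan time -- which is the difference the speakers are describing.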
Gardner: Now that I have this capability to bang on this thing, so to speak, in more ways without degrading performance, in what ways do you expect these companies to actually "bang" on it? Is this going to provide a new and higher level of business-intelligence querying? Is this going to provide higher-order analytics? Are there going to be more business applications that can derive near-real-time data and analytics from this? All of the above? What's the qualitative payback?
Hardie: There is definitely an element of "all of the above." Let me give you some examples of queries that customers have actually been running on the Oracle Exadata Storage Server, because this probably fits the context pretty well. You have organizations out there -- retail organizations and telcos, for example -- where some of the queries they run are literally running for over half an hour. In some cases it is hours.
Moving to this new architecture is bringing down these execution times. In one particular example, a query that was running for over 30 minutes is now running in under 30 seconds. It's that scale of improvement. Now, when you can sit at your terminal, your laptop, or your mobile device, kick off a query, and get an answer within seconds, you're going to do more of these. If you know that when you kick off a query it is going to take 30 seconds to return, you're no longer picky about when you choose to kick it off. You don't have to worry about timing it anymore. You can just ask queries when you like, and expect a quick answer.
Palmer: Willie, I think you are absolutely right. The ability to capture business information has accelerated so much because of this technology. There are customers that cannot access data records beyond a certain time period simply because of the massive size of those data records, or because of how long a query would take to access a historical group of data. That all goes away now.
Now you have the ability. Historically, you might have been able to look at only the last week's worth of retail records, or medical records. Now you can go and look at years and years of data in the same timeframe in which you were looking at weeks of data, and query a much bigger dataset, because of this architecture. That's a big business value, because now I can trend my business in a much more effective way. I'm putting more productivity tools in the hands of the user, so that they can turn data queries and business intelligence back into a fundamental element of growing their business and being more competitive in their markets.
Gardner: I imagine this will also compel companies to put even more data and information into these warehouses, because doing so won't degrade the performance of these essential queries. They are also going to be able to do more types of queries. And, again, we're improving the quality and breadth of the data types while still getting even better performance. So it's sort of a qualitative improvement on many different dimensions.
Hardie: It's a qualitative improvement, and it's a quantitative one. I mean, you're absolutely right. Organizations today are more and more dependent on faster access to better information. It's just as simple as that.
Gardner: We've talked about the types of organizations that will use this in its current configuration. I expect this re-architecting of the database and the storage will also move down-market a bit. What other use-case scenarios do you envision for leveraging this technology beyond the high end of the market, into other areas of the market?
Palmer: If you look at some of the growing and emerging markets today, just think of cloud computing -- all of the massive amounts of data that we're storing in other locations on the Internet, or through a paid service, and the massive amounts of storage being deployed for those types of applications. That's not going to slow down at all. The Database Machine allows us to go in and drop in a configured environment specifically dedicated to that workload.
You can scale this product by connecting multiple racks together, or, if the processing side of the database environment is sufficient, you can scale just the storage component by adding storage nodes. So it is a scalable grid architecture that can grow on the fly. Cloud computing is a very good example, where we really don't know what the upper limit of that storage is going to be. You deploy a configuration, say, on an HP Oracle Database Machine, and then grow it as your needs grow. This is one application where we know this is going to succeed.
Gardner: Willie, we're also aware that organizations will just want the Oracle Exadata Storage Server. They might have their own environments, their own preference for configuring what's available to them, and what would become available to them in the future.
Hardie: Any organization that wants to run its data warehouse on the Oracle Exadata Storage Server can simply buy the Oracle Exadata Storage Server. It's just as simple as that. Oracle and HP have long given customers a choice of configurable options. So if a customer feels that something like the HP Oracle Database Machine is not the right fit for their organization -- if it does not fit their standard needs -- then they have the option of buying the individual components: the Oracle Exadata Storage Servers and the InfiniBand connections to their database servers.
Gardner: Looking at this again through how to get started: where do organizations go? Now that both of these configurations are available immediately, are sales happening through both HP and Oracle?
Palmer: It's a cooperative effort, but Oracle is leading the sales process. So the Oracle sales representatives on a global basis are leading this process, and HP is certainly as their partner going to join with them and make sure that the customer receives the best from both companies.
Gardner: HP is going to service the hardware, but the support comes through Oracle, is that correct?
Hardie: Oracle is the first point of contact. If you want to buy an Oracle Exadata Storage Server, talk to your local Oracle sales representative. If you do decide to buy one and you need to resolve a support issue, you call Oracle, and Oracle will bring in HP as and when required to resolve any issues.
Gardner: To sum up a little bit, for those folks who perhaps are a few steps removed from the IT department, who are doing queries, or using business applications, what's the big take away for them? What about this announcement is going to change their world?
Hardie: For the types of users you just mentioned, a couple of steps removed from the IT department -- to be quite honest, they don't really care what their systems run on. What they are interested in is getting fast answers to their business queries. It's just as simple as that. So when these business users know that they can get instantaneous response times -- real extreme performance from their data warehouse, or from their business intelligence applications -- that's what's going to make a big difference for them.
Gardner: Rich, at HP, let me flip the question to you. For those people inside the IT department, who want to come in Monday morning without big headaches, what does this new configuration and architectural approach mean for them?
Palmer: Simplicity, higher performance, and the ability to improve their service-level agreements (SLAs) with their customers in the warehousing world. This is a solution built on industry-standard hardware, with Oracle software that is well accepted in the industry as an enterprise software leader. IT departments are very comfortable with both of those facts. They're very comfortable with HP; they're very comfortable with Oracle. Putting the two together is a natural event for any IT manager.
Gardner: We've been talking about a large and impactful announcement here at Oracle OpenWorld, the introduction of the Oracle Exadata Storage Server -- the first hardware product from Oracle. Isn't that right?
Hardie: Absolutely.
Gardner: We've also looked at the configuration of those Exadata servers into the HP Oracle Database Machine, which is in effect a data warehouse appliance. Joining us to help explain this, we have been happy to have Rich Palmer, director of technology and strategy in the industry standard servers group at HP. And also Willie Hardie, vice president of Oracle database product marketing. Thanks to you both.
Hardie: Thank you, Dana.
Palmer: Thank you very much, Dana.
Gardner: Our conversation comes to you today through a sponsored HP Live! Podcast from the Oracle OpenWorld Conference in San Francisco. Look for other podcasts from this HP Live! event series at hp.com, as well as via the BriefingsDirect Network. I'd like to thank our producers on today's show, Fred Bals and Kate Whalen.
I am Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time for more in-depth podcasts on enterprise IT topics and strategies. Bye for now.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.
Transcript of BriefingsDirect podcast recorded at the Oracle OpenWorld Conference in San Francisco. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Gardner: And to your point on the secret sauce -- you are also taking advantage of all those cores via multiple threads, and the software has been a deeply tuned to take advantage of those multiple threads in a concurrent fashion.
Hardie: Oh, absolutely, and Rich touched on that as well.
Palmer: When we add more Exadata Storage Servers into our configuration we can take advantage, not just that additional storage capacity, but we can now take advantage of that additional processing capability -- to own that storage layer, which is a big, big difference.
Gardner: And at the announcement here, Oracle Chairman and CEO Larry Ellison described use cases where improvement typically was 10x to up to 72x over what has been the industry benchmark.
Hardie: Absolutely, when you actually cut away the technology and look at this from a business perspective, what it means for me as a business user -- it means that when you’re accessing those data warehouses that Rich was talking about earlier -- like a call data record -- data warehouse have billions of rules additionally. What this means, when you’re accessing those, your queries are going to run much, much faster than they ever did previously. Not only will they run faster, you can have much more queries and more long-running queries concurrently. That’s what is going to be making the big difference.
So when we hear of customers talking about getting 20x performance, improving 30x performance in one particular instance; in one particular query, 72x performance -- that is extreme performance improvements, in anybody’s measurement.
Gardner: Okay, so we have this engine, this Oracle Exadata Storage Server. We also a new announcement, the HP Oracle Database Machine. Tell me how one relates to the other.
Palmer: The HP Oracle Database Machine is a single rack that contains everything you need to run a large data warehouse. It contains eight ProLiant servers running Oracle Database 11g and RAC. It has four InfiniBand network switches and it has 14 of these Oracle Exadata Storage Servers that we talked about earlier. So in a single unit you have everything you need, ready to load up your data and start running your business queries right away.
Gardner: Tell us a little bit, Rich, about this 42-slot rack configuration and why it’s right for the market now?
Palmer: Well, so if you look at the market in data warehousing, the appliance type of delivery is a much simpler deployment of hardware and software configurations. That is emerging as a high-growth area in data warehousing. So with this market trend that’s going on between HP and Oracle, we’ve been able to come together and put everything in customers’ needs in one box. We put it at the customer’s site, and that’s on a global basis.
If you look at HP, one of the strengths that HP brings to this relationship is our ability to distribute and deliver globally. We build all of these database servers or database machines in regions around the globe. They are not just built here in the United States; they are built in United States, they are built in Singapore, they are built in Scotland, and then they are delivered to those regions on a worldwide basis.
So this ability of HP to build the product from the ground up to an exact specification, deliver to the customer, install at to customer's site, and then have Oracle come in and tune the software to make sure it's optimally configured -- that is a no-lose environment. We have the ability here to deliver an appliance-like stack of hardware, put the right software set on that hardware, and target a customer's need for simplicity, high performance, and data reliability -- all in one box.
Gardner: Okay, we've described the marketplace need, the size of data pushing the envelope. Now we are re-architecting to adjust to that. We've described the subset, which is the Exadata Server, and then the configuration, which is the racked Machine. Now, what kind of organizations are going to be interested in having the forklift upgrade to this, bring it right in, drop it in, pre-configured, optimized, and what are they going to do with it? Is this for business intelligence (BI), is this for simply managing scale? What are the speeds that this now provides going to do for companies to improve, or to change, how they do business?
Hardie: The organizations that are going to be interested in Oracle Exadata Storage Server and the HP Oracle Database Machine are those primarily interested in large data warehouses. And by large data warehouses we're talking into the (terabytes and petabytes) and beyond. Now if you look at the organizations that are typically dependent on very large data warehouses, it's organizations that Rich mentioned earlier, the telcos could be an obvious one, call data records, retail organization, very much dependent on analyzing point of sales (POS) transactions. You look at other organizations like trading systems, massive amount of transactions flow through these systems on a daily basis.
Gardner: Especially these days.
Hardie: Absolutely. It is really important to understand what's going on with these transactions, and to make informed business decisions. The beauty of this is you have completely scalable infrastructure from a storage point of view. But more importantly, you've got completely scalable infrastructure from a query performance point of view. As you store more call data records into these systems, more POS transactions, more stock transactions into these systems, you're not going to deteriorate your query performance at all. The more hardware, the more storage servers you put into these systems, the better your performance is going to be.
Gardner: Now that I have this capability to bang on this thing, so to speak, in more ways without the degrading performance, in what ways do you expect these companies to actually "bang" on this? Is this going to provide new and higher level of business intelligence querying? Is this going to provide higher-order analytics? Are there going to more business applications that can derive near real-time data and analytics from this? All of the above? What's the qualitative payback?
Hardie: There is definitely an element of "all of the above." Let me give you some of the examples of some of the queries that customers have actually been experiencing using the Oracle Exadata Storage Server. This probably fits into the context pretty well. You have organizations out there, retail organizations, telcos, for example. You know, some of the queries they are running are literally running for over half an hour. In some cases it is hours.
Moving to this new architecture is bringing down these execution times. One particular example, a query that was running for over 30 minutes is now running in under 30 seconds. It's that scale of improvement. Now when you can set your terminal, your laptop, or your mobile device and then kick off a query and get an answer within seconds -- then you're going to do more of these. If you know that when you kick off a query it is going to take 30 seconds to return it, you're going to pick more times when you choose to kick that off. You don't have to worry timing that anymore. You can just ask queries when you like, and expect to get a quick answer.
Palmer: Willie, I think you are absolutely right. The ability to capture business information has accelerated so much because of this technology. There are customers that cannot access data records beyond a certain time period simply because of the massive size of those data records, or because of how long a query would take to access a historical group of data. That all goes away now.
Now you have the ability. Historically you might have been able to look at the last week's worth of retail records, or medical records. Now you have the ability to go and look at years and years of data in the same timeframe that you were looking at weeks of data, and query a much bigger dataset, because of this architecture. That's a big business value, because now I can trend my business in a much more effective way. I'm putting more productivity tools in the hands of the user, so that they can actually turn data queries and business intelligence back into a fundamental element of growing their business and being more competitive in their markets.
Gardner: I imagine this will also compel companies to put even more data and information into these warehouses, because they are not going to degrade the performance of these essential queries. They are also going to able to do more types of queries. And, again, we're improving the quality and breadth of the data types, but still getting even better performance. So it's sort of a qualitative improvement on many different dimensions.
Hardie: It's a qualitative improvement, and it's a quantitative. I mean, you're absolutely right. Organizations today are more and more dependent on faster access to better information. It's just as simple as that.
Gardner: We've talked about the types of organizations that we'll use this now in its current configuration. I expected this re-architecting of the database and the storage will also move down market a bit. What possible other use-case scenarios do you envision for leveraging this technology beyond the high-end of the market, into other areas of the market?
Palmer: If you look at some of the growing and emerging markets today, just think of cloud computing and all of the massive amounts of data that we're storing in other locations on the Internet, or through a paid service, and the massive amounts of storage being deployed for those types of applications. That's not going to slow down at all. The Database Machine allows us to go in and drop in a configured environment specifically dedicated to that workload.
You can now scale this product by connecting multiple racks together, or, if the processing side of the database environment is sufficient, you can scale just the storage nodes. So it is a scalable grid architecture that can grow on the fly. Cloud computing is a very good example, where we really don't know what the upper limit of that storage is going to be. So deploy a configuration on, say, an HP Oracle Database Machine, and then grow it as your needs grow. This is one application where we know this is going to succeed.
Gardner: Willie, we're also aware that organizations will just want the Oracle Exadata Storage Server. They might have their own environments, their own preference for configuring what's available to them, and what would become available to them in the future.
Hardie: Any organization that wants to run its data warehouse on the Oracle Exadata Storage Server -- all it has to do is buy the Oracle Exadata Storage Server. It's just as simple as that. Oracle and HP have long given customers a choice of configurable options. So if a customer feels that something like the HP Oracle Database Machine is not the right fit for their organization, if it does not fit the standard needs of their organization, then they have the option of buying the individual components: the Oracle Exadata Storage Server and the InfiniBand connectors for connecting to their database servers. They have that option.
Gardner: Looking at this again through how to get started, where do organizations go? Now that both of these configurations are available immediately, are the sales happening through both HP and Oracle?
Palmer: It's a cooperative effort, but Oracle is leading the sales process. So the Oracle sales representatives on a global basis are leading this process, and HP is certainly as their partner going to join with them and make sure that the customer receives the best from both companies.
Gardner: HP is going to service the hardware, but the support comes through Oracle, is that correct?
Hardie: Oracle is your first point of contact if you want to buy an Oracle Exadata Storage Server, so talk to your local Oracle sales representative. If you do decide to buy one, and you want to resolve a support issue, you call Oracle, and Oracle will bring in HP as and when required to resolve any issues.
Gardner: To sum up a little bit, for those folks who perhaps are a few steps removed from the IT department, who are doing queries or using business applications, what's the big takeaway for them? What about this announcement is going to change their world?
Hardie: For those types of users you just mentioned, a couple of steps removed from the IT department ... To be quite honest, they don't really care what their systems run on. What they are interested in is getting fast answers to their business queries. It's just as simple as that. So when these business users know that they can get instantaneous response times -- real extreme performance from their data warehouse or their business intelligence applications -- that's what's going to make a big difference for them.
Gardner: Rich, at HP, let me flip the question to you. For those people inside the IT department who want to come in Monday morning without big headaches, what does this new configuration and architectural approach mean for them?
Palmer: Simplicity, higher performance, and the ability to improve their service level agreements (SLAs) with their customers in the warehousing world. This is a solution built on industry-standard hardware, with Oracle software that is well accepted in the industry as enterprise software leadership. IT departments are very comfortable with both of those facts. They're very comfortable with HP; they're very comfortable with Oracle. Putting the two together is a natural event for any IT manager.
Gardner: We've been talking about a large and impactful announcement here at Oracle OpenWorld, the introduction of the Oracle Exadata Storage Server -- the first hardware product from Oracle. Isn't that right?
Hardie: Absolutely.
Gardner: We've also looked at the configuration of those Exadata servers into the HP Oracle Database Machine, which is in effect a data warehouse appliance. Joining us to help explain this, we have been happy to have Rich Palmer, director of technology and strategy in the industry standard servers group at HP. And also Willie Hardie, vice president of Oracle database product marketing. Thanks to you both.
Hardie: Thank you, Dana.
Palmer: Thank you very much, Dana.
Gardner: Our conversation comes to you today through a sponsored HP Live! Podcast from the Oracle OpenWorld Conference in San Francisco. Look for other podcasts from this HP Live! event series at hp.com, as well as via the BriefingsDirect Network. I'd like to thank our producers on today's show, Fred Bals and Kate Whalen.
I am Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time for more in-depth podcasts on enterprise IT topics and strategies. Bye for now.
Transcript of BriefingsDirect podcast recorded at the Oracle OpenWorld Conference in San Francisco. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.