Transcript of a BriefingsDirect podcast on the role of log management and analytics as enterprises move to cloud computing and software as a service.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. More related podcasts. Sponsor: LogLogic.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion on the changing nature of IT systems' performance and the heightening expectations for applications delivery from those accessing application as services.
The requirements and expectations on software-as-a-service (SaaS) providers are often higher than for applications traditionally delivered by enterprises for their employees and customers. Always knowing what's going on under the IT hood, being proactive in detection, security, and remediation, and keeping an absolute adherence to service level agreements (SLAs), are the tougher standards a SaaS provider deals with.
Increasingly, this expected level of visibility, management, and performance will apply to those serving up applications as services regardless of their hosting origins or models.
Here to provide the full story on how SaaS is raising performance expectations for all applications, and how to meet or exceed those expectations, is Jian Zhen, senior director of product management at LogLogic. Welcome to the show, Jian.
Jian Zhen: Thank you for having me.
Gardner: We're also joined by Phil Wainewright, an independent analyst, director of Procullux Ventures, and SaaS blogger at ZDNet and ebizQ. Welcome back to the show, Phil.
Phil Wainewright: Glad to be here, Dana.
Gardner: Phil, let’s start with you. The state of affairs in IT is shifting. Services are becoming available from a variety of different models and hosts. We're certainly hearing a lot about cloud and private cloud. I suppose the first part of this that caught the public's attention was this whole SaaS notion and some successes in the field for that.
Maybe you could help us understand how the world has changed around SaaS infrastructure, and what implications that has for the IT department?
Wainewright: One thing that's happening is that the SaaS infrastructure is getting more complicated, because more choice is emerging. In the past people might have gone to one or two SaaS vendors in very isolated environments or isolated use cases. What we're now finding is that people are aggregating different SaaS services.
They're maybe using cloud resources alongside of SaaS. We're actually looking at different layers of not just SaaS, but also platform as a service (PaaS), which are customizable applications, rather than the more packaged applications that we saw in the first generation of SaaS. We're seeing more utility and cloud platforms and a whole range of options in between.
That means people are really using different resources and having to keep tabs on all of them. Where in the past all of an IT organization's resources were under its own control, they now have to operate in this more open environment, where trust and visibility into what's going on are major factors.
Gardner: Do you think that the type of application delivery that folks are getting from the Web will start to become more the norm in terms of what delivery mechanisms they encounter inside the firewall from their own data center or architecture?
Wainewright: If you're going to take advantage of SaaS properly, then you need to move to more of a service-oriented architecture (SOA) internally. That makes it easier to start to aggregate or integrate these different mashups, these different services. At the end of the day, the end users aren't going to be bothered whether the application is delivered from the enhanced data center or from a third-party provider outside the firewall, as long as it works and gives them the business results they're looking for.
Gardner: Let's go to Jian Zhen at LogLogic. How does this changing landscape in IT and in services delivery affect those who are responsible for keeping the servers running, both from the host as well as the receiving end in the network, and those who are renting or leasing those applications as services?
Zhen: Phil hit the nail on the head earlier when he mentioned that IT not only has to keep track of resources within their own environment, but now has to worry about all these resources and applications outside of their environment that they may or may not have control over.
That really is one of the fundamental changes and key issues for current IT organizations. You have to worry not only about who is accessing the information within your company firewall, but now you have all this data that's sitting outside of the firewall in another environment. That could be a PaaS or a SaaS application that's sitting out there, as Phil said. How do you control that access? How do you monitor that access? That's one of the key issues that IT has to worry about.
Obviously, there are data governance issues and activity monitoring issues. Now, from a performance and operational perspective, you have to worry: Are my systems performing? Are the applications, platforms, or utilities I'm renting performing to my spec? How do I ensure that the service providers can give me the SLAs that I need?
Those are some of the key issues that IT has to face when they are going outside of this corporate firewall.
Gardner: I suppose if it were just one application that you knew you were getting as a service, if something would go wrong, you might have a pretty good sense of who is responsible and where, but we are very rapidly advancing toward mixtures, hybrids, multiple SaaS providers, different services that come together to form processes. Some of these might be on premises, and some of them might not be.
It strikes me that we're entering a time when finger-pointing might become rampant. If something goes wrong, who is ultimately responsible, and under whose SLA does it fall?
Phil, from your perspective, how important will it be to gain risk, compliance, and security comfort, by being able to quickly identify who is the source of any issue?
Wainewright: That's vitally important, and this is a new responsibility for IT. To be honest, Dana, you're a little bit generous to the SaaS providers when you say that if you only dealt with one or two, and something went down, you had a fair idea of what was going on. What SaaS providers have been learning is that they need to get better at giving more information to their customers about what is going wrong when the service is not up or the service is not performing as expected. The SaaS industry is still learning about that. So, there is that element on that side.
On the IT side, the IT people have spent too much time worrying about reasons why they didn't want to deal with SaaS or cloud providers. They've been dealing with issues like, what if it does go down, or how can I trust the security? Yes, it does go down sometimes, but it's up 99.7 percent or 99.9 percent of the time, which is better than most organizations can afford to do with their own services.
Let's shift the emphasis from, "It's broken, so I won't use it," to a more mature attitude, which says, "It will be up most of the time, but when it does break, how do I make sure that I remain accountable, as the IT manager, the IT director, or the CIO? How do I remain accountable for those services to my organization, and how do I make sure that I can pinpoint the cause of the problem and get it rectified as quickly as possible?"
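To put those availability figures in concrete terms, here is a quick back-of-the-envelope sketch. The 99.7 and 99.9 percent numbers are Phil's; the 30-day month and the code itself are illustrative assumptions, not anything from the discussion or from LogLogic.

```python
# Rough downtime budget implied by an availability percentage,
# assuming a 30-day month (43,200 minutes). Illustrative only.
MINUTES_PER_MONTH = 30 * 24 * 60

def downtime_minutes(availability_pct: float) -> float:
    """Minutes of permitted downtime per month at a given availability."""
    return MINUTES_PER_MONTH * (1 - availability_pct / 100)

for pct in (99.7, 99.9):
    print(f"{pct}% uptime allows ~{downtime_minutes(pct):.0f} minutes of downtime a month")
```

That works out to roughly 130 minutes of downtime a month at 99.7 percent versus about 43 minutes at 99.9 percent, which is the gap Phil is pointing at when he says most internal IT shops can't afford to do better.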
Gardner: Jian, this offers a pretty significant opportunity, if you, as a vendor and a provider of services and solutions, can bring visibility and help quickly decide where the blame lies, but I suppose more importantly, where the remediation lies. How do you view that opportunity, and what specifically is LogLogic doing?
Zhen: We talked to a lot of customers who were either considering or actually going into the cloud or using SaaS applications. One of the great quotes that we recently got from a customer is, "You can outsource responsibility, but not accountability." So, it fits right into what Phil was saying about being accountable for your own environment.
The requirement to comply with government regulations and industry mandates really doesn't change all that much, just because of SaaS or because a company is going into the cloud. What it means is that the end users are still responsible for complying with Sarbanes-Oxley (SOX), payment card industry (PCI) standards, the Health Insurance Portability and Accountability Act (HIPAA), and other regulations. It also means that these customers will expect the same type of reports that they get out of their own systems.
IT organizations are used to transparency in their own environment. If they want to know what's happening in their own environment, they can get access to it. They can at least figure out what's going on. As you go into the cloud and use some of the SaaS applications, you start to lose some of that transparency as you move up the stack. As Phil mentioned earlier, there's infrastructure as a service, PaaS, and SaaS. As you go up the stack, you lose more and more of that transparency.
From a service-provider perspective, we need these providers to offer more transparency and more information as to what's happening in their environment: who has access, and who did access the information? LogLogic can help these service providers get that kind of information and potentially even provide the reports for their end users.
From a user's perspective, there is that expectation. They want to know what's going on and who is accessing the data. So, the service providers need to have the proper controls and processes in place, and need to continuously monitor their own infrastructure, and then provide some of these additional reports and information to their end customers as needed.
Gardner: LogLogic is in the business of collating and standardizing information from a vast array of different systems through the log files and other information and then offering reports and audit capabilities from that data. It strikes me that you are now getting closer to what some people call business intelligence (BI) for IT, in that you need to deal almost in real time with vast amounts of data, and that you might need to adjust across boundaries in order to gain the insights and inference.
Do you at LogLogic cotton to this notion of BI for IT, and if so, what might we expect in the future from that?
Zhen: BI for IT, or IT intelligence, as I have used the term before, is really about getting more information out of the IT infrastructure, whether it's internal IT infrastructure or external IT infrastructure, such as the cloud.
Traditionally, administrators have always used logs as one of the tools to help them analyze and understand the infrastructure, both from a security and operational perspective. For example, one of the recent reports from Price Waterhouse, I believe, says that the number one method for identifying security incidents and operational problems is through logs.
LogLogic can provide the infrastructure and the tools to help customers gather the information and correlate different log sources. We can provide that information from both an internal and an external perspective. We work with a lot of service providers, as you know -- companies like SAVVIS, VeriSign, and Verizon Business Services -- to provide the tools for them to analyze service-provider infrastructures as well.
A lot of that information can be gathered into a central location, correlated, and presented as business intelligence or business activity monitoring for the IT infrastructure.
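As a concrete illustration of what "gathered into a central location and correlated" involves at the lowest level, here is a minimal sketch of log normalization. The two message formats, field names, and sample lines are invented for the example; a real platform such as LogLogic's handles far more source types and volume than this.

```python
# Minimal sketch: normalize two invented log formats into one record
# shape so events from different sources can be sorted and correlated.
import re
from datetime import datetime

FIREWALL = re.compile(r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) DENY src=(?P<src>\S+)")
APPSERVER = re.compile(r"\[(?P<ts>[^\]]+)\] ERROR user=(?P<user>\S+)")

def normalize(line):
    """Return (timestamp, source, details) for a known format, else None."""
    if m := FIREWALL.search(line):
        return (datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S"), "firewall", {"src": m["src"]})
    if m := APPSERVER.search(line):
        return (datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S"), "appserver", {"user": m["user"]})
    return None

raw = [
    "2008-11-06 22:14:03 DENY src=10.1.2.3",
    "[2008-11-06 22:14:05] ERROR user=jsmith",
]
for event in sorted(filter(None, map(normalize, raw))):
    print(event)  # one common record shape, ordered in time
```

Once events share a record shape and a timeline, the correlation and reporting the speakers describe becomes a query problem rather than a parsing problem.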
Gardner: Phil, the amount of data that we can extract from these systems inside the service providers is vast. I suppose what people are looking for is the needle in the haystack. Also, as you mentioned, it probably behooves these providers to offer more insights into how well they did or didn't do.
What's your take on this notion of BI for IT, and does it offer the SaaS providers an opportunity to get a higher level of insight and detail about what is going on within their systems for the assurance and risk remediation of their customers?
Wainewright: Yes, it does. This is an area where we are going to see best practices emerge. We're in a very early stage. Talking about keeping logs reminds me of what happened in the early days of Web sites and Web analytics. When people started having Web sites, they used to create these log files, in which they accumulated all this data about the traffic coming to the site. Increasingly, it became more difficult to analyze that traffic and to get the pertinent information out.
Eventually, we saw the rise of specialist Web-traffic analytics vendors, most of them, incidentally, providing their services as SaaS focused on helping the Web-site managers understand what was going on with their traffic.
IT is going to have to do the same thing. Anyone can create a log file, dump all the data into a log, and say that they've got a record of what's been going on. But, that's the technically easy challenge. The difficult thing, as Jian said, is actually doing the business analytics and the BI to see what was going on and what the information means.
Increasingly, it comes back to IT accountability. If your service provider does go down, and if the logs show that the performance was degrading gradually over a period of time, then you should have known that. You should have been doing the analysis over time, so that you were ahead of that curve and were able to challenge the provider before the system went down.
If it's a good provider, which comes back to the question you asked, then the provider should be on top of that before the customer finds out. Increasingly, we'll see the quality of reporting that providers are doing to customers go up dramatically. The best providers will understand that the more visibility and transparency they provide the customers about the quality of service they are delivering, the more confidence and trust their customers will have in that service.
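Phil's point about staying ahead of the degradation curve reduces to a small amount of analysis once response times are being extracted from logs. The sketch below fits a least-squares slope to a week of invented latency figures; the 5-ms-per-day alert threshold is arbitrary and purely illustrative.

```python
# Toy sketch of trend detection: flag steady degradation in daily
# latency figures pulled from logs, before any SLA is breached.

def slope(ys):
    """Least-squares slope of ys against the index 0..n-1."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

daily_latency_ms = [210, 215, 222, 230, 241, 255, 270]  # invented data
trend = slope(daily_latency_ms)
if trend > 5:  # ms per day; arbitrary threshold for the example
    print(f"latency rising ~{trend:.1f} ms/day -- challenge the provider now")
```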
Gardner: As we mentioned, the expectations are increasing. The folks who rent an application for a few dollars a month actually have higher performance expectations than they do for perhaps far more expensive applications delivered inside the firewall through traditional mechanisms.
Wainewright: That's right, Dana. People get annoyed when Gmail goes down, and that's free. People do have these high expectations.
Gardner: Perhaps we can meet those expectations, even as they increase, but even more important for these providers is the cost at which they deliver their services. The utilization rates, the amount of energy required per task, or some metric like that -- these log files and this BI will decide their margins and how competitive they are in what we expect to be a fairly competitive field. In fact, we are starting to see signs of marketplace and auctioning types of activities around who can put up a service for the least amount of money, which, of course, will put more downward pressure on margins.
I've got to go back to Jian on this one. We can certainly provide for user expectations and SLAs, but ultimately how well you run your data center as a service provider dictates your survivability or viability as a business.
Zhen: You're absolutely right. One of the things that service providers, SaaS providers, or cloud providers have always talked about is economy of scale. Essentially, that's doing more with less, and to do it you have to understand your IT infrastructure and your customer base. This is what BI is all about, right? You're analyzing your business, your user base, the user access, and all that information, trying to come up with some competitive advantage to either reduce cost or increase efficiency.
All that information is in logs, whether logs spewed out by your IT infrastructure or logs instrumented using agents or application-performance-monitoring types of tools. That information is there, and you need to be able to automate and enhance the way things are done. So, you need to understand and see what's going on in the environment.
Analyzing all those logs gives you a critical capability: not only managing hundreds or thousands of systems and making them more efficient, but bringing that BI throughout. Seeing how your users are accessing, reacting to, or changing your system makes it more efficient and faster for the user and, at the same time, reduces the cost of managing the infrastructure, as well as of doing business.
So, the need to understand and see what's going on is really driving the need to have better tools to do system analysis.
Gardner: Well, how about that Phil? With apologies to Monty Python, every electron is important, right?
Wainewright: Well, it certainly can be. I think the other benefit of providers monitoring this information is that, if they can build out a track record and demonstrate that they are providing better service, then maybe that's a way of defending themselves -- of being able to justify asking higher prices than they might otherwise have done.
If the pricing is going to go down because of competitive pressures, there will be differential pricing according to the quality that providers can show they have a track record for delivering.
Zhen: I definitely agree with that. Being able to provide better SLAs and more transparency, including audit transparency, are things that enterprises care about. As many reports have mentioned, it's one of the biggest issues preventing enterprises from adopting the cloud or some of these SaaS applications. It's not that the enterprises are not adopting, but the movement is still very slow.
The main reasons are security and transparency. As SaaS providers or service providers start providing a lot more information based on the data that they analyze, they can provide better SLAs from both an uptime and a performance perspective -- not just uptime. A lot of the SLAs today just talk about uptime. If they can provide a lot of that information by analyzing the information they already have -- the log data, access data, and whatnot -- that's a competitive advantage for the providers. They can charge a higher price, and often, enterprises are willing to pay for that.
Wainewright: I've been speaking to enterprise customers, and they are looking for better information from the providers about those performance metrics, because they want to know what the quality of service is. They want to know that they're getting value for money.
Gardner: Well, we seem to have quite a set of pressures. On one side, to uphold performance, provide visibility, reduce risk, and offer compliance and auditing benefits. On the other side, it's pure economics. The more insight and utilization you have, and the more efficiently you can run your data centers, the more you can increase your margin and scale out to offer yet more services to more types of customers. It seems pretty clear that there's a problem set and a solution set.
Jian, you mentioned that you had several large service providers as customers. I don't suppose they want all the details about what happens inside their organizations to come out, but perhaps you have some use-case scenarios. Do you have examples of how analytics on a system's performance, vis-à-vis log data, helps them on either score: qualitatively, in terms of performance and trust, or, more importantly over time, in their ability to reap the most efficiency out of their systems?
Zhen: These are actually partners of LogLogic. We've worked with these service-provider partners to provide managed services or cloud services for log management to end customers. They're using it both with the customers themselves and internally.
Often, the use cases are really around compliance and security. That’s where the budget is coming from. Compliance is the biggest driver for some of these tools today.
However, according to some of the reports I mentioned, especially from Enterprise Strategy Group (ESG), one of the fastest-growing use cases for log management is operational use. This means troubleshooting, forensic analysis, and being able to analyze what's going on in the environment. But, the biggest driver today for purchasing that type of log-management solution is still compliance -- being able to comply with SOX, PCI, HIPAA, and other regulations.
Gardner: Let’s wrap up with some crystal-ball gazing. First, from Phil. How do you see this market shaking out? I know we're under more economic pressure these days, given the pending or imminent global recession, but it seems to me that it could be a transformative pressure, a catalyst, toward more adoption of services and toward keeping application performance up at the lowest possible cost. What's your sense of where the market is going?
Wainewright: It’s a terrible cliché, but it’s about doing more with less. It may be a cliché, but it’s what people are trying to do. They've got to cut costs as organizations, and, at the same time, they have to actually be more agile, more flexible, and more competitive.
That means a lot of IT organizations are looking to SaaS and they're looking to cloud computing, because this is the way of getting resources without a massive outlay and starting to do things with a relatively low risk of failure.
They're finding that budgets are tight. They need to get things done quickly. Cloud or SaaS allows them to do that, and therefore there's a rosy future, even in bleak economic conditions, for this type of offering.
There are still a lot of worries among IT people as to reliability, security, privacy compliance, and all the other factors around SaaS. Therefore, the SaaS providers have to make sure that they're monitoring that, and that they're reporting. Likewise, the IT people, for their own peace of mind, need to make their own arrangements, so that they can also keep an eye on their side. I think everyone is going to be tracking and monitoring each other.
The upside is that we're going to get more enterprise-class performance and enterprise-class infrastructure built around the cloud services and the SaaS providers, so that enterprises will be able to have more confidence. So, at the end of the economic cycle, once people start investing again, I think we'll see people continue to invest in cloud services and SaaS, not because it's the low-cost option, but because it's the proven option that they have confidence in.
Gardner: Jian Zhen, how do you and LogLogic see the market unfolding? Where do you think the opportunities lie?
Zhen: I definitely agree with Phil. With the current economic environment, a lot of enterprises will start looking at SaaS and cloud services seriously and consider them.
However, enterprises are still required to be compliant with government regulations and industry mandates, and that's not going to go away. What the service providers and the SaaS providers can do to attract these customers is make themselves compliant with some of these regulations and provide more transparency, giving people a view into who is accessing the data and how they protect it.
Amazon did a great thing, which was to release a white paper on some of their security practices. It's very high level, but it's a good start. Service providers need to start thinking more along the lines of how to attract these enterprise customers, because the enterprise customers are willing, and are seriously considering SaaS services.
Phil had an article a while back calling for a SaaS code of conduct. Phil, one of the things that you should definitely add there is a requirement that service providers offer this transparency. That's something service providers can use, essentially, as a competitive advantage with their enterprise customers.
Gardner: Now, you sit at a fairly advantageous point, in the catbird seat, if you will, on this regulatory issue. As enterprises seek more SaaS and cloud services for economic and perhaps longer-term strategic reasons, do we need to rethink some of our compliance and regulatory approaches?
We have a transition in the United States in terms of the government. So, now is a good time, I suppose, to look at those sorts of things. What, from your perspective, should change in order to allow companies to more freely embrace and use cloud and SaaS services, when it comes to regulation and compliance?
Zhen: As far as changing the regulations, I'm not sure there's a lot to change. We've seen SOX become a very high-level and very costly regulation to comply with. However, we've also seen PCI. That's much more specific, and companies and even service providers can adopt and use some of these requirements.
Gardner: That's the payment card issue, right?
Zhen: Correct. The PCI data-security standard is a lot more specific as to what a company has to do in order to be compliant with it. Actually, one of the appendixes is really for service providers. A lot of service providers have used, for example, a Statement on Auditing Standards (SAS) 70 Type II report as one of the things they show the customer to demonstrate compliance. However, I don’t think the SAS 70 Type II is sufficient, mainly because the controls are described by the service providers themselves.
Essentially, they set their own requirements and then say, "Hey, we meet these requirements." I don’t think that’s sufficient. It needs to be something that’s more of an industry standard, like PCI, but maybe a little bit different -- definitely more specific as to what the service providers need to do.
On top of that, we need some kind of information on when security incidents happen with service providers. One of the things that 44 states have today is data-breach notification laws. Those laws obviously don't apply to SaaS providers, but in order to provide more transparency there may need to be some standard or some process for how breaches are reported and handled.
Some of these things certainly will help enterprises be more comfortable in adopting the services.
Gardner: Well, there's a topic there, Phil, for about 150 blog entries: this whole notion of how to shift regulation and compliance to suit a cloud economy.
Wainewright: Yeah, it's going to be a difficult issue for the cloud providers to adapt to, but a very important one. Take this whole issue of SAS 70 Type II compliance, for example. If you're relying on a service provider for part of the services that you provide, then your SAS 70 Type II needs to dovetail with their SAS 70 Type II processes.
That’s the kind of issue that Jian was alluding to. It's no good just having SAS 70 Type II, if the processes that you've got are somehow in conflict with or don't work in collaboration with the service providers that you are depending on. We have to get a lot smarter within the industry about how we coordinate services and provide accountability and audit visibility and trackability between the different service providers.
Gardner: Very good. We've been discussing requirements and expectations around SaaS providers, looking at expected increases and demands for visibility, management, and performance metrics. Helping us to better understand these topics -- and I'm very happy that they joined us -- have been Jian Zhen, senior director of product management at LogLogic. Thanks for your input, Jian.
Zhen: Thank you, Dana.
Gardner: Also Phil Wainewright, independent analyst, director of Procullux Ventures, and SaaS blogger at ZDNet and ebizQ. Always good to have you here Phil, thank you.
Wainewright: Thanks, Dana.
Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks, and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. More related podcasts. Sponsor: LogLogic.
Transcript of a BriefingsDirect podcast on the role of log management and analytics as enterprises move to cloud computing and SaaS. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Implementing ITIL Requires Log Management and Analytics to Help IT Operations Gain Efficiency and Accountability
Transcript of BriefingsDirect podcast on the role of log management and systems analytics within the Information Technology Infrastructure Library (ITIL) framework.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion on how to run your IT department well by implementing proven standards and methods, and particularly leveraging the Information Technology Infrastructure Library (ITIL) prescriptions and guidelines.
We’ll talk with an expert on ITIL and why it’s making sense for more IT departments and operations around the world. We’ll also look into ways that IT leaders can gain visibility into systems and operations to produce the audit and performance data trail that helps implement and refine such frameworks as ITIL.
We’ll examine the use of systems log management and analytics in the context of ITIL and of managing IT operations with an eye to process efficiency, operational accountability, and systems behaviors, in the sense of knowing a lot about the trains, in order to help keep them running on time and at the lowest possible cost.
To help us understand these trends and findings we are joined by Sudha Iyer. She is the director of product management at LogLogic. Welcome to the show, Sudha.
Sudha Iyer: Thank you.
Gardner: We’re also joined by Sean McClean. He is a principal at KatalystNow in Orlando, Florida. It's a firm that handles mentoring, learning, and training around ITIL and tools used to implement ITIL. Welcome to the show, Sean.
Sean McClean: Thank you very much.
Gardner: Let's start by looking at ITIL in general for those folks who might not be familiar with it. Sean, how are people actually using it and implementing it nowadays?
McClean: ITIL has a long and interesting history. It's a series of concepts that have been around since the 1980s, although a lot of people will dispute exactly when it got started and how. Essentially, it started with the Central Computer and Telecommunications Agency (CCTA) of the British government.
What they were looking to do was create a set of frameworks that could be followed for IT. Throughout ITIL's history, it has been driven by a couple of key concepts. If you look at almost any other business or industry, accounting for example, it’s been around for years. There are certain common practices and principles that everyone agrees upon.
IT, as a business, a practice, or an industry is relatively new. The ITIL framework has been one that's always been focused on how we can create a common thread or a common language, so that all businesses can follow and do certain things consistently with regard to IT.
In recent times, there has been a lot more focus on that, particularly in two general areas. One, ITIL has had multiple revisions. Initially, it was a drive to handle support and delivery. Now, we are looking to do even more with tying the IT structure into the business, the function of getting the business done, and how IT can better support that, so that IT becomes a part of the business. That has kind of been the constant focus of ITIL.
Gardner: So, it's really about maturity of IT as a function that becomes more akin to other major business types of functions or management functions.
McClean: Absolutely. I think it's interesting, because anyone in the IT field needs to remember that we are in a really exciting time and place. Number one, because technology revises itself on what seems like a daily basis. Number two, because the business of IT supporting a business is relatively new, we are still trying to grow and mature those frameworks around what we all agree is the best way to handle things.
As I said, in areas like accounting or sales, those things are consistent. They stay that way for eons, but this one is a new and changing environment for us.
Gardner: Are there any particular stumbling blocks that organizations have as they decide to implement ITIL? When you are doing training and mentoring, what are the speed bumps in their adoption pattern?
McClean: A couple of pieces are always a little confusing when people look at ITIL. Organizations assume that it’s something you can simply purchase and plug into your organization. It doesn't quite work that way. As with any kind of framework, it’s there to provide guidance and an overall common thread or a common language. But, the practicality of taking that common thread or common language and then incorporating it or interpreting it in your business is sometimes hard to get your head around.
It's interesting that we have the same kind of confusion when we just talk. I could say the word “chair,” and the picture in your head of what a chair is and the picture in my head of what a chair is are slightly different.
It's the same when we talk about adopting a framework such as ITIL that's fairly broad. When you apply it within the business, things like that business's governance and that business's audit and compliance rules have to be considered and interpreted within the ITIL framework. A lot of times, people who are trying to adopt ITIL struggle with that.
If we're in the healthcare industry, we understand that we're talking about incidents, or we understand that we're talking about problems. We understand that we're talking about certain things that are identified in the ITIL framework, but we have to align ourselves with rules within the Health Insurance Portability and Accountability Act (HIPAA). Or, if we're an accounting organization, we have to comply with a different set of rules. So it's that element that's interesting.
Gardner: Now, what's interesting to me about the relationship between ITIL and log and systems analytics is that ITIL is really coming from the top-down, and it’s organizational and methodological in nature, but you need information, you need hard data to understand what's going on and how things are working and operating and how to improve. That's where the log analytics comes in from the bottom-up.
Let's go to Sudha. Tell us how a company like LogLogic uses ITIL, and how these two come together -- the top-down and the bottom-up?
Iyer: Sure. That's actually where the rubber meets the road, so to speak. As we have already discussed, ITIL is generally a guidance -- best practices -- for service delivery, incident management, or what have you. Then, there are these sets of policies with these guidelines. What organizations can do is set up their data retention policy, firewall access policy, or any other policy.
But, how do they really know whether these policies are being actually enforced and/or violated, or what is the gap? How do they constantly improve upon their security posture? That's where it's important to collect activity in your enterprise on what's going on.
There is a tight fit there with what we provide as our log-management platform. LogLogic has been around for a number of years and is the leader in the log-management industry. The platform allows organizations to collect information from a wide variety of sources, assimilate it, and analyze it. An auditor or an information-security professional can dig into what's actually going on: storage capacity and planning for the future, how many more firewalls are required, or the usage pattern of a particular server in the organization.
All these different metrics feed back into what ITIL is trying to help IT organizations do. Actually, the bottom line is how do you do more with less, and that's where log management fits in.
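To make the point about verifying policy enforcement tangible, here is one tiny, hedged example: checking a 90-day log-retention policy by looking for days with no archived data. The dates and the one deliberately missing day are fabricated; in practice, the day-by-day inventory would come from the log-management platform itself rather than a hard-coded set.

```python
# Sketch: verify a 90-day retention policy by finding days in the
# window for which no archived log data exists. Data is invented.
from datetime import date, timedelta

RETENTION_DAYS = 90
today = date(2008, 11, 6)

# hypothetical inventory of days with archived logs (day 41 is missing)
archived = {today - timedelta(days=i)
            for i in range(RETENTION_DAYS) if i != 41}

gaps = [today - timedelta(days=i)
        for i in range(RETENTION_DAYS)
        if today - timedelta(days=i) not in archived]
print("retention policy gaps:", gaps)
```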
Gardner: Back to you, Sean. When companies are trying to move beyond baseline implementation and really start getting some economic benefits, which of course are quite important these days from their ITIL activities, what sort of tools have you seen companies using? To what degree do you need to dovetail your methodological and ITIL activities with the proper tools down in the actual systems?
McClean: When you’re starting to talk about applying the actual process to the tools, that's the space that's the most interesting to me. It's the element where you need some common thread that you can pull through all of those tools.
Today, in the industry, we have countless different tools that we use, and we need common threads that can pull across all of those different tools and say, “Well, these things are consistent and these things will apply as we move forward into these processes.” As Sudha pointed out, having an underlying log system is a great way to get that started.
The common thread in many cases across those pieces is maintaining the focus on the business. That's always where IT needs to be more conscious and constantly driving forward. Ultimately, where do these tools fit to follow the business, and how do these tools provide the services that ultimately support the business in doing what it's trying to get done?
Does that address the question?
Gardner: I think so. Sudha, tell us about some instances where LogLogic has been used and ITIL has been the focus or the context of its use. Are there some general use-case findings? What have been some of the outcomes when these bottom-up and top-down approaches come together?
Iyer: That's a great question. The bottom line is the customers, and we have a very large customer base. It turns out, according to some surveys we have done in our customer base, that the biggest driver for a framework such as ITIL is compliance. The importance of ITIL for compliance has been recognized, and that is the biggest impact.
As Sean mentioned earlier, it's not a package that you buy and plug into your network and there you go, you're compliant. It's a continuous process.
What some of our customers have figured out is that adopting our log-management solutions allows them to create better control and visibility into what is actually going on on their network and their systems. From many angles, whether it's a security professional or an auditor, they’re all looking at whether you know what's going on, whether you were able to mitigate anything untoward that's happening, and whether there is accountability. So, we get feedback in our surveys that control and visibility have been the top drivers for implementing such solutions.
Another item that Sean touched on, reducing IT cost and improving service quality, was the other driver. They look at a log-management console and see, for example, how many admin accesses were denied and that it happened between 10 p.m. and midnight. They quickly alert, get on the job, and try to mitigate the risk. This is where they have seen the biggest value and return on investment (ROI) on implementations of LogLogic.
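The console scenario Sudha describes boils down to a filter and a threshold. Here is a hedged sketch, with invented event records standing in for what a log platform would actually return; the threshold is arbitrary.

```python
# Sketch of the use case above: count denied admin accesses in the
# 10 p.m.-to-midnight window and raise an alert past a threshold.
from datetime import datetime

events = [  # (timestamp, account, action) -- invented sample records
    (datetime(2008, 11, 6, 22, 14), "admin", "ACCESS_DENIED"),
    (datetime(2008, 11, 6, 22, 31), "admin", "ACCESS_DENIED"),
    (datetime(2008, 11, 6, 23, 2), "admin", "ACCESS_DENIED"),
]

denied = [ts for ts, account, action in events
          if account == "admin" and action == "ACCESS_DENIED"
          and 22 <= ts.hour <= 23]
if len(denied) >= 3:  # arbitrary alert threshold for the example
    print(f"ALERT: {len(denied)} denied admin accesses between 22:00 and midnight")
```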
Gardner: Sean, the most recent version of ITIL, Version 3, focuses, as you were alluding to, on IT service management, with IT behaving like a service bureau, responsible on almost a market-forces basis to its users, its constituents, in the enterprise. This increasingly involves service-level agreements (SLAs) and contracts, either explicit or implicit.
At the same time, it seems as if we're engaging with a higher level of complexity in our data centers: increased use of virtualization and increased use of software-as-a-service (SaaS) type offerings.
What's the tension here between the need to provide services with high expectations and a contract agreement and, at the same time, this built-in complexity? Is there a role for tools like LogLogic to come into play there?
McClean: Absolutely. There is a great opportunity with regard to tools such as LogLogic from that direction. ITIL Version 2 focused simply on support and delivery, those two key areas. We are going to support the IT services, and we are going to deliver along the lines of these services.
ITIL Version 2 also started to talk a lot about alignment of IT with the business, because a lot of times IT drives ahead and does things without necessarily realizing what the business is doing. An IT department focuses on email, but it isn't necessarily looking at the fact that email is supporting whatever it is the business is trying to accomplish, or how well that service does so.
As we moved into ITIL Version 3, they started trying to go beyond simply saying it's an element of alignment, and to move the concept of IT into an area where it's a part of the business, and therefore offering services within and outside of the business.
One of the key elements the new manuals in ITIL V3 talk to is service strategy, and it's a hot topic among the ITIL community, this push toward a strategic look at IT and developing services as if you were your own business.
IT is looking and saying, “Well, we need to develop our IT services as a service that we would sell to the business, just as any other organization would.” With that in mind, it's all driving toward how we can turn our assets into strategic assets. If we have a service and it's made up of an Exchange server, or we have a service and it's made up of three virtual machines, what can we do with those things to make them even more valuable to the business?
If I have an Exchange server, is there some way that I can parcel it out or farm it to do something else that will also be valuable?
Now, with LogLogic's suite of tools, we're able to pull the log information about those assets. That's when you can start investigating how to make the assets that exist more value-driven for the organization's business.
Gardner: Back to you, Sudha. Have you had customer engagements where you have seen that this notion of being a contract service provider puts a great deal of responsibility on them, that they need greater insight and, as Sean was saying, need to find even more ways to exploit their resources, provide higher level services, and increase utilization, even as complexity increases?
Iyer: I was just going to add to what Sean was describing. You want to figure out how much of your current investment is being utilized. If there is a lot of unspent capacity, that's where understanding what's going on helps in assessing, “Okay, here is so much disk space that is unutilized,” or, “It's the end of the quarter; we need to bring in more virtualization of these servers to get our accounting to close on time.” That's where the open API, the open platform that LogLogic provides, comes into play.
Today, IT is heavily into the service-oriented architecture (SOA) methodology. So, we ask, “Do you actually have to have a console login to understand what's going on in your enterprise?” No. You are probably a storage administrator, perhaps located somewhere very different from the data center where a LogLogic solution is deployed, but you still want to analyze and predict how the storage capacity is going to be used over the next six months or a year.
The open API, the open LogLogic platform, is a great way for these other entities in an organization to leverage the LogLogic solution in place.
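As one small illustration of the storage question Sudha raises, the sketch below extrapolates six months of invented usage figures with a straight-line fit. Real capacity planning would pull these numbers from the platform's API rather than a hard-coded list, and would likely use a more careful model than average growth.

```python
# Toy capacity forecast: project disk usage six months out from the
# average month-over-month growth. All figures are invented.

monthly_used_tb = [4.1, 4.4, 4.8, 5.1, 5.5, 5.9]  # last six months

growth = (monthly_used_tb[-1] - monthly_used_tb[0]) / (len(monthly_used_tb) - 1)
forecast = monthly_used_tb[-1] + 6 * growth
print(f"~{forecast:.1f} TB likely needed in six months at ~{growth:.2f} TB/month growth")
```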
Gardner: Another thing that has impressed me with ITIL over the years is that it allows for sharing of information on best practices, not only inside of a single enterprise but across multiple ones and even across industries and wide global geographies.
In order to better learn from the industry's hard lessons or mistakes, you need to be able to share across common denominators, whether it's APIs, measurements, or standards. I wonder if the community-based aspect of log behaviors and system behaviors, and sharing them, also plays into that larger ITIL method of general industry best practices. Any thoughts along those lines, Sean?
McClean: It's really interesting that you hit on that piece, because globalization is one of the biggest drivers, I think, for getting ITIL moving. More and more businesses have started reaching outside of their national borders, whether we call them offshore resources, outsourced resources, or however you want to refer to them.
As we become more global, businesses are looking to leverage other areas. The more you do that, the larger you grow your business in trying to make it global, the more critical it is that you have a common ground.
Back to that illustration of the chair, when we communicate and we think we are talking about the same thing, we need some common point, and without it we can't really go forward at all. ITIL becomes more and more valuable the more and more we see this push towards globalization.
It’s the same with a common thread or shared log information for the same purposes. The more you can share that information and bring it across in a consistent manner, then the better you can start leveraging it. The more we are all talking about the same thing or the same chair, when we are referring to something, the better we can leverage it, share information, and start to generate new ideas around it.
Gardner: Sudha, is there anything to add to that, in terms of community and the fact that many of these systems are outputting the same logs? It's making that information available in the proper context that becomes the value-add.
Iyer: That's right. Let's say you are Organization A and you have vendor relationships and customer relationships outside your enterprise. So, you’ve got federated services. You’ve got different kinds of applications that you share between these two different constituents -- vendors and customers.
You probably already have an SLA with these entities, and you want to make sure you are delivering on these operations. You want to make sure there is enough uptime. You want to grow toward a common future where your technologies are not far behind, and sharing this information and making sure of what you have today is very critical. That's where there is actual value.
Gardner: Let's get into some examples. I know it's difficult to get companies to talk about sensitive systems in their IT practices. So perhaps we could keep it at the level of use-case scenarios.
Let's go to Sean first. Do you have any examples of companies that have taken ITIL to the level of implementation with tools like log analytics, and do you have some anecdotes or metrics of what some of the experiences have been?
McClean: I wish I had metrics. Metrics are the one thing that seems very hard to come by in this area. I can think of a couple of instances where organizations were rolling out ITIL implementations. In implementations where I am engaged, specifically in mentoring, one of the things I try to get them to do is to dial into the community and talk to other people who are implementing the same types of processes and practices.
There’s one particular organization out in the Dallas-Fort Worth, Texas area. When they started getting into the community, even though they were using different tools, the underlying principles that they were trying to get to were the same.
In that case they were able to start sharing information across two companies in a manner that was saying, “We do these same things with regard to handling incidents or problems and share information, regardless of the tool being set up.”
Now, in that case I don't have specific examples of them using LogLogic, but what invariably came out in those discussions was that what we need underneath is the ability to get proactive and start preventing these incidents before they happen. Then, we need metrics and some kind of reporting system, where we can start checking issues before they occur and getting the team on board to fix them before they happen. That's where they started getting into log-like tools and looking at using log data for that purpose.
Iyer: That corroborates one of the surveys we developed and conducted in the last quarter. Organizations reported that the biggest challenge to implementing ITIL was twofold.
The first was the process of implementation and the skill set it required. They wanted to make sure there was a baseline, and measuring the quality of improvement was the biggest impediment.
The second one was measuring the result of this process improvement. You finish your implementation of the ITIL process itself, and what did you get from it? Where were you before, and where did you end up after the implementation?
I guess when you were asking for metrics, you were looking for those concrete numbers, and that's been a challenge, because you need to know what you need to measure, but you don't know that because you are not skilled enough in the ITIL practices. Then, you learn from the community, from the best-of-breed case studies on the Web sites and so forth, and you go your merry way, and then the baseline numbers for the very first time get collected from the log tools.
Gardner: I imagine that it's much better to get early and rapid insights from the systems than to wait for the SLAs to be broken, for user surveys to come back, and say, “We really don't think the IT department is carrying its weight.” Or, even worse, to get outside customers or partners coming back with complaints about performance or other issues. It really is about early insights and getting intervention that seems to really dovetail well with what ITIL is all about.
McClean: I absolutely agree with that. Early in my career within ITIL, I had a debate with a practitioner on the other side of the pond. One thing we debated was SLAs. I had indicated that it's critical to get the business engaged in the SLA immediately.
His first answer was no, it doesn't have to happen that way. I was flabbergasted. You provide a service to an organization without an SLA first? I thought, “This can't be. This doesn't make sense. You have to get the business involved.”
When we talked through it and got down to real cases, it turned out that he wasn't saying the SLA didn't need to be negotiated with the business. What he meant was that we need to get data and reports about the services we are delivering before we go to the customer -- the customer, in this case, being internal.
His point was that we need to get data and information about the service we are delivering, so that when we have the discussion with a business about the service levels we provide, they have a baseline to offer. I think that's to Sudha's point as well.
Iyer: That's right. Actually, it goes back to one of the opening discussions we had here about aligning IT to the business goals. ITIL helps organizations make the business owners think about what they need. They do not assume that the IT services are just going to be there; it's not an afterthought. It's part of a collective working toward common success.
Gardner: Let's wrap up our discussion with some predictions or look into the future of ITIL. Sean, do you have any sense of where the next directions for ITIL will be, and how important is it for enterprises that might not be involved with it now to get involved, so that they can be in a better position to take advantage of the next chapters?
McClean: The last part is the most critical. People who are not engaged or involved in ITIL yet will find they are starting to drop out of a common language, one that enables you to do just about everything else you do with regard to IT in your business.
If you don't speak the language and the vendors that provide the services do, then you have a hard time understanding what it is those vendors are offering. If you don't speak the language and you are trying to get information shared, then you have a hard time moving forward in that sense.
It’s absolutely critical for businesses and enterprises to start understanding the need for adoption. I don't want to paint it as if everybody needs to get on board with ITIL, but you need to get into it and be aware of it, so that you can help drive its future directions.
As you pointed out earlier, Dana, it's a common framework, but it's also commonly contributed to. It's very much an open framework, so if a new way to do things comes up, is shared, and makes sense, that will probably be the next thing that's adopted. It’s just like the English language, where new terms and phrases are developed all the time. It's very important for people to get on board.
In terms of the next big front, you have this broad framework that says, “Here are common practices, best practices, and IT practices.” As the industry matures, I think we will see a lot of steps in the near future where people are looking and talking more about, “How do I quantify maturity as an individual within ITIL? How much do you know with regard to ITIL? And, how do I quantify a business with regard to adhering to that framework?”
There has been a little bit of that, and certainly we have ITIL certification processes and all of those, but I think we are going to see more drive to understand that and to formalize that in upcoming years.
Gardner: Sudha, it certainly seems like a very auspicious pairing: the values that LogLogic provides and the type of organization that would be embracing ITIL. Do you see ITIL as an important go-to-market or channel for you, and is there in fact a natural pairing between ITIL-minded organizations and some of the value that you provide?
Iyer: Actually, LogLogic believes that ITIL is one of those strong frameworks that IT organizations should be adopting. To that effect, we have been delivering ITIL-related reporting since we first launched the Compliance Suite. It has been an important component of our support for IT organizations in improving their productivity.
In today’s climate, it's very hard to predict how IT spending will be affected. The more we can do to give visibility into their existing infrastructure, networks, and so on, the better off it is for the customer and for ourselves as a company.
Gardner: We’ve been discussing how enterprises have been embracing ITIL and improving the way that they produce services for their users. We’ve been learning more about visibility and the role that log analytics and systems information plays in that process.
Helping us have been our panelists. Sudha Iyer is the director of product management at LogLogic. Thanks very much, Sudha.
Iyer: Thank you, it's a pleasure, to be sure.
Gardner: Sean McClean, principal at KatalystNow, which mentors and helps organizations train and prepare for ITIL and its benefits. It’s based in Orlando, Florida. Thanks very much, Sean.
McClean: Thank you. It’s been a pleasure.
Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Transcript of BriefingsDirect podcast on the role of log management and systems analytics within the Information Technology Infrastructure Library (ITIL) framework. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion on how to run your IT department well by implementing proven standards and methods, and particularly leveraging the Information Technology Infrastructure Library (ITIL) prescriptions and guidelines.
We’ll talk with an expert on ITIL and why it’s making sense for more IT departments and operations around the world. We’ll also look into ways that IT leaders can gain visibility into systems and operations to produce the audit and performance data trail that helps implement and refine such frameworks as ITIL.
We’ll examine the use of systems log management and analytics in the context of ITIL and of managing IT operations with an eye to process efficiency, operational accountability, and systems behaviors, in the sense of knowing a lot about the trains, in order to help keep them running on time and at the lowest possible cost.
To help us understand these trends and findings we are joined by Sudha Iyer. She is the director of product management at LogLogic. Welcome to the show, Sudha.
Sudha Iyer: Thank you.
Gardner: We’re also joined by Sean McClean. He is a principal at KatalystNow in Orlando, Florida. It's a firm that handles mentoring, learning, and training around ITIL and tools used to implement ITIL. Welcome to the show, Sean.
Sean McClean: Thank you very much.
Gardner: Let's start by looking at ITIL in general for those folks who might not be familiar with it. Sean, how are people actually using it and implementing it nowadays?
McClean: ITIL has a long and interesting history. It's a series of concepts that have been around since the 1980s, although a lot of people will dispute exactly when it got started and how. Essentially, it started with the Central Computer and Telecommunications Agency (CCTA) of the British government.
What they were looking to do was create a set of frameworks that could be followed for IT. Throughout ITIL's history, it has been driven by a couple of key concepts. If you look at almost any other business or industry, accounting for example, it’s been around for years. There are certain common practices and principles that everyone agrees upon.
IT, as a business, a practice, or an industry is relatively new. The ITIL framework has been one that's always been focused on how we can create a common thread or a common language, so that all businesses can follow and do certain things consistently with regard to IT.
In recent times, there has been a lot more focus on that, particularly in two general areas. One, ITIL has had multiple revisions. Initially, it was a drive to handle support and delivery. Now, we are looking to do even more with tying the IT structure into the business, the function of getting the business done, and how IT can better support that, so that IT becomes a part of the business. That has kind of been the constant focus of ITIL.
Gardner: So, it's really about maturity of IT as a function that becomes more akin to other major business types of functions or management functions.
McClean: Absolutely. I think it's interesting, because anyone in the IT field needs to remember that we are in a really exciting time and place. Number one, because technology revises itself on what seems like a daily basis. Number two, because the business of IT supporting a business is relatively new, we are still trying to grow and mature those frameworks of what we all agree is the best way to handle things.
As I said, in areas like accounting or sales, those things are consistent. They stay that way for eons, but this one is a new and changing environment for us.
Gardner: Are there any particular stumbling blocks that organizations have as they decide to implement ITIL? When you are doing training and mentoring, what are the speed bumps in their adoption pattern?
McClean: A couple of pieces are always a little confusing when people look at ITIL. Organizations assume that it's something you can simply purchase and plug into your organization. It doesn't quite work that way. As with any kind of framework, it's there to provide guidance and an overall common thread or a common language. But the practicality of taking that common thread or common language and incorporating or interpreting it in your business is sometimes hard to get your head around.
It's interesting that we have the same kind of confusion when we just talk. I could say the word “chair,” and the picture in your head of what a chair is and the picture in my head of what a chair is are slightly different.
It's the same when we talk about adopting a framework such as ITIL that's fairly broad. When you apply it within the business, things like that business's governance and that business's auditing and compliance rules have to be considered and interpreted within the ITIL framework. A lot of times, people who are trying to adopt ITIL struggle with that.
If we are in the healthcare industry, we understand that we are talking about incidents, or we understand that we are talking about problems. We understand that we are talking about certain things that are identified in the ITIL framework, but we have to align ourselves with rules within the Health Insurance Portability and Accountability Act (HIPAA). Or, if we are an accounting organization, we have to comply with a different set of rules. So it's that element that's interesting.
Gardner: Now, what's interesting to me about the relationship between ITIL and log and systems analytics is that ITIL is really coming from the top-down, and it’s organizational and methodological in nature, but you need information, you need hard data to understand what's going on and how things are working and operating and how to improve. That's where the log analytics comes in from the bottom-up.
Let's go to Sudha. Tell us how a company like LogLogic uses ITIL, and how these two come together -- the top-down and the bottom-up?
Iyer: Sure. That's actually where the rubber meets the road, so to speak. As we have already discussed, ITIL is generally guidance -- best practices -- for service delivery, incident management, or what have you. Then, there are the sets of policies within those guidelines. Organizations can set up their data-retention policy, firewall access policy, or any other policy.
But, how do they really know whether these policies are being actually enforced and/or violated, or what is the gap? How do they constantly improve upon their security posture? That's where it's important to collect activity in your enterprise on what's going on.
There is a tight fit there with what we provide in our log-management platform. LogLogic has been around for a number of years and is the leader in the log-management industry. The platform allows organizations to collect information from a wide variety of sources, assimilate it, and analyze it. An auditor or an information security professional can look deep down into what's actually going on -- at storage capacity and planning for the future, at how many more firewalls are required, or at the usage pattern of a particular server in the organization.
All these different metrics feed back into what ITIL is trying to help IT organizations do. Actually, the bottom line is how do you do more with less, and that's where log management fits in.
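To make that bottom-up collection concrete, here is a minimal Python sketch of the kind of aggregation Iyer describes -- parsing syslog-style lines and counting events per source device as a crude usage metric. The log lines, hostnames, and format are hypothetical; a real log-management platform does far more than this.

    import re
    from collections import Counter

    # Hypothetical syslog-style lines; a real collector would ingest these
    # from firewalls, servers, and storage devices over syslog or an agent.
    SAMPLE_LOGS = [
        "Oct 01 22:13:04 fw-01 kernel: DENY TCP 10.0.0.5 -> 10.0.1.9:22",
        "Oct 01 22:14:11 srv-db statd: disk usage 81% on /var",
        "Oct 01 22:15:02 fw-01 kernel: ALLOW TCP 10.0.0.7 -> 10.0.1.9:443",
    ]

    LINE = re.compile(r"^(?P<ts>\w{3} \d{2} [\d:]{8}) (?P<host>\S+) (?P<msg>.*)$")

    def events_per_host(lines):
        """Count events per source device -- a crude usage-pattern metric."""
        counts = Counter()
        for line in lines:
            m = LINE.match(line)
            if m:
                counts[m.group("host")] += 1
        return counts

    if __name__ == "__main__":
        for host, n in events_per_host(SAMPLE_LOGS).items():
            print(f"{host}: {n} events")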
Gardner: Back to you, Sean. When companies are trying to move beyond baseline implementation and really start getting some economic benefits, which of course are quite important these days from their ITIL activities, what sort of tools have you seen companies using? To what degree do you need to dovetail your methodological and ITIL activities with the proper tools down in the actual systems?
McClean: When you start talking about applying the actual process to the tools, that's the space that's most interesting to me. It's that element where you need some common thread that you can pull through all of those.
Today, in the industry, we have countless different tools that we use, and we need common threads that can pull across all of those different tools and say, “Well, these things are consistent and these things will apply as we move forward into these processes.” As Sudha pointed out, having an underlying log system is a great way to get that started.
The common thread in many cases across those pieces is maintaining the focus on the business. That's always where IT needs to be more conscious and to be constantly driving forward. Ultimately, where do these tools fit in the business, and how do these tools provide the services that ultimately support the business in getting done what we are trying to get done?
Does that address the question?
Gardner: I think so. Sudha, tell us about some instances where LogLogic has been used and ITIL has been the focus or the context of its use. Are there some findings general use case findings? What have been some of the outcomes when these two bottom-up, top-down approaches come together?
Iyer: That's a great question. The bottom line is the customers, and we have a very large customer base. It turns out, according to some surveys we have done in our customer base, that the biggest driver for a framework such as ITIL is compliance. The importance of ITIL for compliance has been recognized, and that is the biggest impact.
As Sean mentioned earlier, it's not a package that you buy and plug into your network and there you go, you're compliant. It's a continuous process.
What some of our customers have figured out is that adopting our log management solutions allows them to create better control and visibility into what actually is going on on their network and their systems. From many angles, whether it's a security professional or an auditor, they're all looking at whether you know what's going on, whether you were able to mitigate anything untoward that's happening, and whether there is accountability. So, we get feedback in our surveys that control and visibility have been the top drivers for implementing such solutions.
Another item that Sean touched on, reducing IT cost and improving service quality, was the other driver. When they look at a log-management console and see how many admin accesses were denied between 10 p.m. and midnight, they can quickly alert, get on the job, and try to mitigate the risk. This is where they have seen the biggest value and return on investment (ROI) on implementations of LogLogic.
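As a rough illustration of the kind of alert Iyer mentions -- denied admin access between 10 p.m. and midnight -- here is a hedged Python sketch over already-normalized events. The event fields are assumptions; in practice this would be a report or alert rule in the log-management console, not hand-rolled code.

    from datetime import time

    # Hypothetical normalized events; a log-management platform would
    # produce these by parsing raw device logs.
    events = [
        {"time": time(22, 47), "user": "admin", "result": "denied"},
        {"time": time(9, 15),  "user": "jdoe",  "result": "allowed"},
    ]

    WINDOW_START, WINDOW_END = time(22, 0), time(23, 59, 59)

    def late_night_admin_denials(evts):
        """Return denied admin logins in the 10 p.m.-to-midnight window."""
        return [e for e in evts
                if e["result"] == "denied"
                and e["user"] == "admin"
                and WINDOW_START <= e["time"] <= WINDOW_END]

    for e in late_night_admin_denials(events):
        print(f"ALERT: denied admin access at {e['time']}")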
Gardner: Sean, the most recent version of ITIL, Version 3, focuses, as you were alluding to, on IT service management -- on IT behaving like a service bureau, where it is responsible, almost on a market-forces basis, to its users, its constituents, in the enterprise. This increasingly involves service-level agreements (SLAs) and contracts, either explicit or implicit.
At the same time, it seems as if we're engaging with a higher level of complexity in our data centers: increased use of virtualization and increased use of software-as-a-service (SaaS) type services.
What's the tension here between the need to provide services with high expectations and a contract agreement and, at the same time, this built-in complexity? Is there a role for tools like LogLogic to come into play there?
McClean: Absolutely. There is a great opportunity for tools such as LogLogic from that direction. ITIL Version 2 focused simply on support and delivery, those two key areas: we are going to support the IT services, and we are going to deliver along the lines of those services.
ITIL Version 2 also started to talk a lot about alignment of IT with the business, because a lot of times IT drives ahead and does things without necessarily understanding what the business is doing. An IT department focuses on email, but it isn't necessarily looking at the fact that email is supporting whatever the business is trying to accomplish, or how well that service does so.
As we moved into ITIL Version 3, they started trying to go beyond simply saying it's an element of alignment and moved the concept of IT into an area where it's a part of the business. Therefore, it's offering services within and outside of the business.
One of the key elements in the new manuals in ITIL V3 is the talk of service strategy, and it's a hot topic among the ITIL community -- this push toward a strategic look at IT, and developing services as if you were your own business.
IT is looking and saying, "Well, we need to develop our IT services as a service that we would sell to the business, just as any other organization would." With that in mind, it's all driving toward the question: how can we turn our assets into strategic assets? If we have a service and it's made up of an Exchange server, or we have a service and it's made up of three virtual machines, what can we do with those things to make them even more valuable to the business?
If I have an Exchange server, is there some way that I can parcel it out or farm it out to do something else that will also be valuable?
Now, with LogLogic's suite of tools, we're able to pull the log information about those assets. That's when you can start investigating how to make the existing assets more value-driven for the organization's business.
Gardner: Back to you, Sudha. Have you had customer engagements where you have seen that this notion of being a contract service provider puts a great deal of responsibility on them, that they need greater insight and, as Sean was saying, need to find even more ways to exploit their resources, provide higher level services, and increase utilization, even as complexity increases?
Iyer: I was just going to add to what Sean was describing. You want to figure out how much of your current investment is being utilized. If there is a lot of unspent capacity, that's where understanding what's going on helps in assessing: "Okay, here is so much disk space that is unutilized. Or, it's the end of the quarter; we need to bring in more virtualization of these servers to get our accounting to close on time." That's where the open API, the open platform that LogLogic provides, comes into play.
Today, IT is heavily into the service-oriented architecture (SOA) methodology. So, we ask, "Do you have to actually have a console login to understand what's going on in your enterprise?" No. You may be a storage administrator located far from the data center where the LogLogic solution is deployed, but you still want to analyze and predict how storage capacity is going to be used over the next six months or a year.
The open API, the open LogLogic platform, is a great way for these other entities in an organization to leverage the LogLogic solution in place.
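To sketch how such an open API might be used from outside the console, here is a hypothetical example: a storage administrator pulls monthly usage figures over REST and fits a simple linear trend to project capacity six months out. The endpoint URL and response fields are invented for illustration and are not LogLogic's actual API.

    import json
    from urllib.request import urlopen

    # Hypothetical endpoint; substitute your log platform's real API.
    API_URL = "https://loghost.example.com/api/reports/storage-usage"

    def fetch_monthly_usage():
        # Expected shape (assumed): [{"month": 1, "gb": 410}, ...]
        with urlopen(API_URL) as resp:
            return json.load(resp)

    def project_usage(samples, months_ahead=6):
        """Least-squares linear fit: crude, but enough for a first forecast."""
        n = len(samples)
        xs = [s["month"] for s in samples]
        ys = [s["gb"] for s in samples]
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
                sum((x - mx) ** 2 for x in xs)
        return my + slope * (max(xs) + months_ahead - mx)

    # usage = fetch_monthly_usage()
    # print(f"Projected GB in 6 months: {project_usage(usage):.0f}")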
Gardner: Another thing that has impressed me with ITIL over the years is that it allows for sharing of information on best practices, not only inside of a single enterprise but across multiple ones and even across industries and wide global geographies.
In order to better learn from the industry's hard lessons and mistakes, you need to be able to share across common denominators, whether it's APIs, measurements, or standards. I wonder if the community-based aspect of log behaviors and system behaviors, and sharing them, also plays into that larger ITIL method of general industry best practices. Any thoughts along those lines, Sean?
McClean: It's really interesting that you hit on that piece, because globalization is one of the biggest drivers for getting ITIL moving. More and more businesses have started reaching outside their national borders, whether we call them offshore resources, outsourced resources, or however you want to refer to them.
As we become more global, businesses are looking to leverage other areas. The more you do that, the larger you grow your business in trying to make it global, the more critical it is that you have a common ground.
Back to that illustration of the chair: when we communicate and we think we are talking about the same thing, we need some common point, and without it we can't really go forward at all. ITIL becomes more and more valuable the more we see this push toward globalization.
It’s the same with a common thread or shared log information for the same purposes. The more you can share that information and bring it across in a consistent manner, then the better you can start leveraging it. The more we are all talking about the same thing or the same chair, when we are referring to something, the better we can leverage it, share information, and start to generate new ideas around it.
Gardner: Sudha, anything to add to that in terms of community, and the fact that many of these systems are outputting the same logs? It's making that information available in the proper context that becomes the value-add.
Iyer: That's right. Let's say you are Organization A and you have vendor relationships and customer relationships outside your enterprise. So, you’ve got federated services. You’ve got different kinds of applications that you share between these two different constituents -- vendors and customers.
You probably already have an SLA with these entities, and you want to make sure you are delivering on those operations. You want to make sure there is enough uptime. You want to grow toward a common future where your technologies are not far behind, and sharing this information and verifying what you have today is very critical. That's where the actual value is.
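A small, hypothetical illustration of that SLA check: compute uptime from outage durations derived from the logs and compare it with the agreed target. The figures and the 99.9 percent target are assumptions.

    HOURS_IN_MONTH = 30 * 24

    # Hypothetical outage durations (hours) derived from log analysis.
    outages_hours = [0.5, 1.25, 0.1]
    SLA_TARGET = 99.9   # percent uptime agreed with the partner

    uptime_pct = 100.0 * (HOURS_IN_MONTH - sum(outages_hours)) / HOURS_IN_MONTH
    status = "met" if uptime_pct >= SLA_TARGET else "BREACHED"
    print(f"Uptime {uptime_pct:.3f}% -- SLA {SLA_TARGET}% {status}")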
Gardner: Let's get into some examples. I know it's difficult to get companies to talk about sensitive systems in their IT practices. So perhaps we could keep it at the level of use-case scenarios.
Let's go to Sean first. Do you have any examples of companies that have taken ITIL to the level of implementation with tools like log analytics, and do you have some anecdotes or metrics of what some of the experiences have been?
McClean: I wish I had metrics. Metrics are the one thing that seems to be very hard to come by in this area. I can think of a couple of instances where organizations were rolling out ITIL implementations. In implementations where I am engaged, specifically in mentoring, one of the things I try to get them to do is to dial into the community and talk to other people who are also implementing the same types of processes and practices.
There’s one particular organization out in the Dallas-Fort Worth, Texas area. When they started getting into the community, even though they were using different tools, the underlying principles that they were trying to get to were the same.
In that case, they were able to start sharing information across two companies, saying, "We do these same things with regard to handling incidents or problems, and we can share information, regardless of the tool being used."
Now, in that case I don't have specific examples of them using LogLogic, but what invariably came out of those discussions was that what we need underneath is the ability to get proactive and start preventing these incidents before they happen. Then, we need metrics and some kind of reporting system, where we can start checking issues before they occur and getting the team on board to fix them before they happen. That's where they started getting into log-like tools and looking at using log data for that purpose.
Iyer: That corroborates one of the surveys we developed and conducted last quarter. Organizations reported that the biggest challenge in implementing ITIL was twofold.
The first was the process of implementation and the skill set it required. They wanted to make sure there was a baseline, and measuring the quality of improvement was the biggest impediment.
The second was the result of the process improvement. You finish your implementation of the ITIL process itself, but what did you get from it? Where were you before, and where did you end up after the implementation?
I guess when you were asking for metrics, you were looking for those concrete numbers, and that's been a challenge, because you need to know what to measure, but you don't know that if you are not yet skilled in the ITIL practices. Then, you learn from the community, from the best-of-breed case studies on the Web sites and so forth, you go your merry way, and then the baseline numbers get collected for the very first time from the log tools.
Gardner: I imagine that it's much better to get early and rapid insights from the systems than to wait for the SLAs to be broken, for user surveys to come back saying, "We really don't think the IT department is carrying its weight." Or, even worse, to get outside customers or partners coming back with complaints about performance or other issues. It really is about early insights and intervention, and that seems to dovetail well with what ITIL is all about.
McClean: I absolutely agree with that. Early in my career within ITIL, I had a debate with a practitioner on the other side of the pond. One thing we debated was SLAs. I had indicated that it's critical to get the business engaged in the SLA immediately.
His first answer was no, it doesn't have to happen that way. I was flabbergasted. You provide a service to an organization without an SLA first? I thought “This can't be. This doesn't make sense. You have to get the business involved.”
When we talked through it and got down to real cases, it turned out that he wasn't saying the SLA didn't need to be negotiated with the business. What he meant was that we need to get data and reports about the services we are delivering before we go to the customer -- the customer, in this case, being internal.
His point was that we need to get data and information about the service we are delivering, so that when we have the discussion with the business about the service levels we provide, there is a baseline to work from. I think that's to Sudha's point as well.
Iyer: That's right. Actually, it goes back to one of the opening discussions we had here about aligning IT with the business goals. ITIL helps organizations make the business owners think about what they need. They do not just assume that the IT services are going to be there; it's not an afterthought. It's a part of that collective working toward common success.
Gardner: Let's wrap up our discussion with some predictions or look into the future of ITIL. Sean, do you have any sense of where the next directions for ITIL will be, and how important is it for enterprises that might not be involved with it now to get involved, so that they can be in a better position to take advantage of the next chapters?
McClean: The last point is the most critical. People who are not engaged or involved in ITIL yet will find they are starting to drop out of a common language, and that language enables just about everything else you do with regard to IT in your business.
If you don't speak the language and the vendors that provide the services do, then you have a hard time understanding what it is the vendors are offering. If you don't speak the language and you are trying to share information, then you have a hard time moving forward in that sense.
It's absolutely critical for businesses and enterprises to start understanding the need for adoption. I don't want to paint it as if everybody needs to get on board with ITIL, but you need to get into it and be aware of it, so that you can help drive its future directions.
As you pointed out earlier, Dana, it's a common framework, but it's also commonly contributed to. It's very much an open framework, so if a new way of doing things comes up, is shared, and makes sense, it will probably be the next thing that's adopted. It's just like the English language, where new terms and phrases are developed all the time. It's very important for people to get on board.
In terms of what's the next big front: you have this broad framework that says, "Here are common practices, best practices, and IT practices." As the industry matures, I think we will see a lot of steps in the near future where people are looking and talking more about, "How do I quantify maturity as an individual within ITIL? How much do you know with regard to ITIL? And how do I quantify a business with regard to adhering to that framework?"
There has been a little bit of that, and certainly we have ITIL certification processes and all of those, but I think we are going to see more drive to understand and formalize that in upcoming years.
Gardner: Sudha, it certainly seems like a very auspicious pairing: the values that LogLogic provides and the type of organizations that would be embracing ITIL. Do you see ITIL as an important go-to-market channel for you, and is there, in fact, a natural pairing between ITIL-minded organizations and some of the value that you provide?
Iyer: Actually, LogLogic believes that ITIL is one of those strong frameworks that IT organizations should be adopting. To that effect, we have been delivering ITIL-related reporting since we first launched the Compliance Suite. It has been an important component of our support for IT organizations looking to improve their productivity.
In today's climate, it's very hard to predict how IT spending will be affected. The more we can do to give customers visibility into their existing infrastructure, networks, and so on, the better off it is for the customer and for ourselves as a company.
Gardner: We’ve been discussing how enterprises have been embracing ITIL and improving the way that they produce services for their users. We’ve been learning more about visibility and the role that log analytics and systems information plays in that process.
Helping us have been our panelists, Sudha Iyer. She is the director of product management at LogLogic. Thanks very much, Sudha.
Iyer: Thank you, it's a pleasure, to be sure.
Gardner: Sean McClean, principal at KatalystNow, which mentors and helps organizations train and prepare for ITIL and its benefits. It’s based in Orlando, Florida. Thanks very much, Sean.
McClean: Thank you. It's been a pleasure.
Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Transcript of BriefingsDirect podcast on the role of log management and systems analytics within the Information Technology Infrastructure Library (ITIL) framework. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Improved Insights and Analysis From Systems Logs Reduce Complexity Risks From Virtualization
Transcript of BriefingsDirect podcast on the infrastructure management and security challenges of virtualization.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, a sponsored podcast discussion about virtualization: how to improve the management of virtualization, how to gain better security using virtualization techniques, and how to find methods for compliance and regulation -- but without the pitfalls of complexity and mismanagement.
We're going to be talking about virtualization best practices with several folks who are dealing with this at several different levels. We're going to be hearing from VMware, Unisys, and LogLogic.
Let me introduce our panel today. First, we're joined by Charu Chaubal, senior architect for technical marketing, at VMware. Welcome, Charu.
Charu Chaubal: Thank you.
Gardner: We're also joined by Chris Hoff, chief security architect at Unisys. Hi, Chris.
Chris Hoff: Hi, how are you?
Gardner: Great. Also, Dr. Anton Chuvakin, chief logging evangelist and a security expert at LogLogic. Welcome to the show.
Dr. Anton Chuvakin: Hello. Thank you.
Gardner: Virtualization has certainly taken off, and this is nothing new to VMware. Organizations like Unisys are now doing quite a bit to help organizations utilize, expand, and enjoy the benefits of virtualization. But virtualization needs to be done the correct way, avoiding the pitfalls. If you do it too tactically, without allowing it to be part of an IT lifecycle and without management, then the fruits and benefits of virtualization can be largely lost.
Before we get into what virtualization can do, what to avoid, and how to better approach it, I'd like to just take a moment and try to determine why virtualization is really hot and taking off in the market now.
Let's start with Chris Hoff at Unisys. Some of these technologies have been around for many years. What is it about this point in time that is really making virtualization so hot?
Hoff: It's the confluence of quite a few things, and we see this sort of event happen in information technology (IT) quite often. You have the practically perfect storm of economics, technology, culture, and business coming together at one really interesting point in time.
The first thing that comes to mind is the benefits. The reasons people are virtualizing are cost -- cost savings and then cost avoidance -- usually seconded by agility and flexibility. It's also about being able to, as an IT organization, service your constituent customers in a manner that is more in line with the way the business functions, which is, in many cases, quite a fast pace -- with the need to be flexible.
These things are contributing a lot to the uptake, not to mention the advent of a lot of new technology in both hardware and software, which is starting to enable some of this to be more realistic in a business environment.
Gardner: Now over to VMware. Charu, tell us how deep and wide virtualization is becoming. It seems like people are using it in more and more ways, and in more and more places.
Chaubal: That's right. When x86 virtualization first started out in a big way, maybe 10 years ago, it was largely being used in test and development types of environments. Over the last five years, it has definitely started to enter the production arena as well. We see more and more customers running even mission-critical applications on virtualization technologies.
Furthermore, we also see it across the board in terms of customer size, where everyone from the smallest customers to the very largest enterprises is expanding further and further with their virtual environments.
Gardner: Let's go to LogLogic. Tell me, Anton, what sort of security and preventative measures are you helping your customers with, in terms of gaining visibility and analytics about what's going on among these many moving parts? Many of these deployments are now in an automated mode, more so than before they were virtualized. What are some of the issues you're helping people deal with?
Chuvakin: You were exactly right about the visibility into the environments. As people deploy different types of IT infrastructure, first physical and now virtual, there is always the challenge of figuring out what happens with those machines and at those machines -- who is trying to connect to them, or even attack them -- and of doing all of this at the same time, around the clock.
Adding virtualization to the technology that people use, in such a massive way as is occurring now, brings up the challenge of how we know what happens in those environments. Is anybody trying to abuse them, just use them, or use them inappropriately? Is there a lack of auditability and control in those environments? Logs are definitely one of the ways -- I would say a primary way -- of gaining that visibility for most IT compliance, and virtualization is no exception.
As a result, as people deploy VMware and applications on top of virtual platforms, the challenge is knowing what actually happens on those platforms, what happens in those virtual machines (VMs), and what happens with the applications. Logging and LogLogic play a very critical role in not only collecting those bits and pieces, but also creating a big picture, or view, of that activity across the organization.
Virtualization definitely solves some of the problems, but at the same time it brings in new things that people really aren't used to dealing with. For example, it used to be that if you monitored a server, you knew where the server was, you knew how to monitor it, and you knew what applications ran there.
In virtual environments, that is still true, but there is another layer: the server may go somewhere else, and you have to monitor where it was moved and where it is now -- basically performing monitoring as servers come up and down, disappear, and get moved.
Gardner: Now, Chris at Unisys, when you're dealing with customers, based on what we've heard about this expansion of virtualization, you're dealing with it on an applications level, and also on the infrastructure and server level.
What’s more, some folks are now getting into desktop virtualization infrastructure and delivering whole desktop interfaces out to end-user devices. This impacts not just a server. We're talking about network devices and storage devices. This is a bit more than a tactical issue. It really starts getting strategic pretty quickly.
Hoff: That's absolutely correct. If you really look at virtualization as an enabling technology or platform, and you look out over the next three years of large companies' strategic plans, you'll notice that there is a large trend toward what you might call "real-time infrastructure."
The notion here is about how you take this enabling technology and the benefits of virtualization and leverage them to provide automation and re-purposing. You have to deal with elements and issues that relate to charge-back for assets, as IT becomes more of a utility service.
If we look further out from there, we look at the governance issues of what it means to not really focus on hardware anymore, or even applications -- but on services and service levels. It gets a lot more strategic, played out all along that continuum.
While we focus virtualization on the notion of infrastructure and technology, what's really starting to happen now -- and what's important with the customers that we deal with -- is being able to unite both business process and business strategy, along with the infrastructure and the architecture that support it.
So we're a little excited and frothed up as it relates to all the benefits of virtualization today, but the bigger picture is even more exciting and interesting. That's going to fundamentally continue to change what we do and how we do it, as we move forward. Visibility is very important, but understanding the organizational and operational impacts that real-time infrastructure and virtualization bring is really going to be an interesting challenge for folks to get their hands around.
Gardner: Now, Charu at VMware, you obviously are building out what you consider the premier platform and approach to virtualization technically. You've heard, obviously, the opportunity for professional services and methodologies for approaching this, and you have third parties like LogLogic that are trying to provide better visibility across many different systems and devices.
How are you using this information in terms of what you bring to the management table for folks who are moving from, say, tactical to more strategic use of virtualization?
Chaubal: A lot of customers are expanding their virtualization so much now that they're hitting some interesting challenges they wouldn't have hit before. One great example is around compliance, such as Payment Card Industry Data Security Standard (PCI DSS) compliance. There are a lot of questions right now around virtualizing the systems that process credit-card holder data.
They're asking, "If I do this, am I going to be compliant with PCI? Is this something that's a realistic possibility? If it is, how do I go about demonstrating this to an auditor?"
This is where partners like LogLogic come into play, because they have the tools that can help achieve this. We believe that VMware provides a compliance-ready type of platform, so it is something you can achieve compliance with. But, in order to demonstrate and maintain that compliance, it's useful to have these tools from partners that can help you do that.
Gardner: Now, Anton at LogLogic, you're able to examine a number of different systems, gather information, correlate that information, do analytics, and provide a picture of what should be happening. Or, when something is not happening, you can look for the reasons why and look for aberrant or unusual behavior. So let's address security a little bit.
What are some of the challenges in terms of security when you move from a physical environment for compute power and resources to a virtualized environment? And second, what about the mixture? There are obviously going to be both physical and virtualized instances of infrastructure and applications. Tell us about the security implications.
Chuvakin: I'll follow the same logic I used for our recent webcast about virtualization security, in which I presented a full view of the things that are the same and the things that are different in virtualized environments. I'll use the same structure, because some people who get too frothy about virtualization, as Chris put it, stick to "virtualization changes everything." That is sometimes used as an excuse not to do the things you should continue doing in a virtualized environment.
Let's start with the things that are the same. When you migrate from a physical to a virtual infrastructure, you certainly still have servers, applications running on those servers, and people managing those servers. That leaves you with the need to keep the same monitoring, the same auditing, and the same security technologies that you use. You shouldn't stop. You shouldn't throw away your firewalls. You shouldn't throw away your log-analysis tool, because you still have servers and applications.
They might be easier to monitor in virtual environments; it might sometimes be harder. But you shouldn't abandon things that are working for you in the physical environment just because virtualization changes a few things. The fact that you have applications and servers, and that they serve your business purposes, shouldn't stop you from doing the useful things you're doing now.
Now, on top of what you already have, there is an additional layer of new things that come with virtualization. The fact that a server might be there one day and gone tomorrow -- or not be there one day, then be built up, used for a while, and then removed -- definitely brings new challenges to security monitoring and security auditing, in figuring out who did what and where.
The definition of "who" didn't change. It's still a user. But "what" and "where" definitely did change. If something was done on a certain server, in a virtual environment it might not be a server -- it might be a virtual image, which adds additional complexity.
There are also new things that just don't have any counterpart in the physical environment -- for example, a rogue VM, a VM that is built by somebody who is not authorized to run VMs. It might be an end user who actually has his own little mini-infrastructure. That brings up all sorts of forensic challenges that you now have to solve. You don't just investigate a machine. You investigate a machine with a virtual platform, with another server on top, or another desktop on top.
This is my view of the things that are the same, which you should continue doing, and the things that are new, where you should start learning how to audit and analyze activity in virtual environments, as well as how to do forensics, if what you have is a machine with a potential rogue VM.
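As a toy sketch of the rogue-VM problem, one approach is to diff the VMs actually observed in hypervisor or management logs against a change-managed inventory. The data here is invented; a real implementation would pull both lists from the virtualization platform's management interface and the asset database.

    # VMs observed in hypervisor or management-server logs (hypothetical).
    observed_vms = {"web-01", "db-01", "dev-sandbox-77"}

    # VMs recorded in the change-managed inventory (hypothetical).
    authorized_vms = {"web-01", "db-01"}

    # Anything observed but not authorized is a candidate rogue VM.
    for vm in sorted(observed_vms - authorized_vms):
        print(f"ALERT: unauthorized VM detected: {vm}")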
Gardner: How about you, Chris at Unisys, how do you view implications for security and risk mitigation when it comes to moving increasingly into virtualized environments?
Hoff: I have to take a pretty pragmatic approach. The reality is that there are three conversations and three separate questions that need to be addressed, when you're talking about security in virtualized environments.
Unfortunately, what usually happens is that all three of them are combined into one giant question, which tends to lead to more confusion. So I like to separate the virtualization and security questions into three parts.
One of them is securing virtualization, and understanding what the impacts are on your architecture, your infrastructure, and your business process and models, when you introduce this new virtualization layer. That's really about securing the underlying virtualization platforms and understanding what happens and what changes when you introduce that, assuming that you have a decent understanding of what that means, and how that will ultimately flow down operationally.
The second point or question to address is one of virtualizing security, which is actually the operational element of, "What does it mean, and how do I go about taking what I might do in the physical world, and replicate that and/or even improve it in the virtual world?"
That's an interesting question, assuming that you have a good understanding of architecture and things that matter most to you, and how you might protect them, or how you might not be doing that. You may find several gaps today in your ability to actually do what you do in the physical world.
The third element is security through virtualization, which is okay, assuming that I have a good architectural blueprint and that I understand the impacts, the models, who and what changes operationally, how I have to go about securing things, and what benefits I get out of virtualization.
How do I actually improve my security posture by using these platforms and this technology? If you look at that, if you look at it in that way, you really are able to start dealing with the issues associated with each category. You could probably guess that if you mixed all three of them up, you could go down one path, and very easily be distracted by another.
When we break out the conversations with customers like that, it always comes back to a very basic premise that we seem to have forgotten in our industry. Despite all the technology, despite all the tools, and all of the things that go blinky-blink at night, the reality is that this comes down to being able to appropriately manage risk. That starts with understanding the things that matter to you most and using risk assessment frameworks and processes.
In a gross analogy, when you go to a grocery store, you take time to pack your frozen goods in one bag, and your canned goods and your soft goods in other bags. You use this compartmentalization because you understand what impact one good might have on another -- just as, with virtualization, you balance all the wonderful mobility against compliance and security needs.
If you get home and you've got canned goods in with your fruit, the reality is that you've not done a good job of compartmentalizing and understanding what impact one good might have on the other.
The same thing applies in the virtual world. If you don't take the time to go back to the basics and understand the impact of the infrastructure and the changes, you're going to be in a world of hurt later, even if you get the cost benefits and all the wonderful agility and mobility.
We really approach it pragmatically, in a rational manner, so that people understand both the pros and the cons of virtualization in their environments.
Gardner: We've determined that virtualization is quite hot. It's ramping up quickly. A number of studies have shown a 50-70 percent increase in the use of virtualization in the last few years. Projections continue for very fast-paced growth.
We also see a number of organizations using multiple vendors, when it comes to virtualization. We've also discussed how security and complexity apply to this, and that you need a comprehensive or contextual view of what's going on with your systems -- particularly if you have a mixture of physical and virtual.
Let's look at some examples of how this has been mitigated, how the risk has actually been decreased, and how the fruits, if you will, of virtualization are enjoyed without the pitfalls.
Let's first go to Charu at VMware. Can you offer some examples of how people have used virtualization, done it the right way, avoided some of these pitfalls, and gained the visibility and analytics that support a mature approach to virtualization?
Chaubal: One thing we've done at VMware over the last year and a half is try to provide as much prescriptive guidance as we can. So a lot of securing of virtualization comes down to making sure you actually deploy it [properly].
So, one thing that we've done is created hardening guides that really aim to show customers how this can be done. That's proved to be very popular among our customers.
Not to get into too much detail, but one of the main issues is the fact that the virtualization layer typically has a management interface, and then you have the interfaces that go into your virtual machines. People need to understand that this management layer needs to be completely separated from the actual production network.
That principle is manifested in different recommendations and scenarios when you plan a deployment and configure it. That's just one example where customers have been able to make use of our prescriptive guidance and architect something that is much more secure than what they might have built based on whatever preconceived notions they had. I think that's one area where we are seeing success.
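One illustrative way to sanity-check that separation, using only Python's standard ipaddress module, is to assert that no management interface lands on a production subnet. The subnet plan and host addresses below are assumptions, not VMware's guidance.

    import ipaddress

    # Hypothetical subnet plan: management traffic isolated on its own network.
    PRODUCTION_NET = ipaddress.ip_network("10.1.0.0/16")
    MANAGEMENT_NET = ipaddress.ip_network("10.250.0.0/24")

    # Management interfaces discovered from host configs (hypothetical).
    mgmt_interfaces = {"esx-01": "10.250.0.11", "esx-02": "10.1.4.20"}

    for host, addr in mgmt_interfaces.items():
        ip = ipaddress.ip_address(addr)
        if ip in PRODUCTION_NET or ip not in MANAGEMENT_NET:
            print(f"WARNING: {host} management interface {addr} "
                  f"is not isolated on {MANAGEMENT_NET}")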
Gardner: Let's go to LogLogic. Anton, give us some examples -- actual companies or at least use-case scenarios -- where the use of LogLogic, or the methodologies it supports, has been brought to bear on virtualization to lower cost, increase performance, gain higher utilization, and so forth, but without some of these risks.
Chuvakin: I'll give an example of a retail company that was using LogLogic for compliance, as well as for operational uses, such as troubleshooting their servers. This company, in a separate project, was implementing virtualization to convert some of their infrastructure to virtual machines.
At some point, those two projects collided. The company mainly had log management in place to track operations and to satisfy PCI requirements, and, with the virtualization project under way, they realized that they now had to collect logs not just from the physical infrastructure, but also from the virtual side that was being built.
What happened was that the logs from the virtual infrastructure were also streamed into LogLogic. LogLogic has the ability to collect any type of log, and in this case we used that capability to collect logs that were, at the time, not even formally supported or analyzed by LogLogic.
The customer understood that they had to collect the logs from the virtual platforms, and that LogLogic has the ability to collect any type of log. They started with a log-collection effort, so that they could always go back and say, "We've got this data somewhere, and you can go and investigate it."
We also built up a package of content to analyze the logs as they were continuing their collection effort. At LogLogic, we built and set up reports and searches to help them go through the data. So, it was really going in parallel: building up analytic content to make sense of the data, once the customer had a collection effort that included logs from the virtual platform.
In this case, it was actually a great success story, because we used the part of the LogLogic infrastructure that doesn't rely on any preconceived notions of what the logs are. Then, they built on top of that to pinpoint issues with their VMs -- to see who accesses the platforms, what applications people use to manage the environment, and, basically, to track all sorts of interesting events in their virtual infrastructure.
I have to admit that it hasn't really been put through a PCI audit yet, but I'm pretty confident that their PCI auditors will accept what they did for the virtual environment, and that they will satisfy the requirements of PCI, which call for logging and monitoring, as well as the other requirements in the compliance mandate.
At the same time, while they were building it for that use, their analysts were already trying to do searches and look for certain things that might be out of order in their VM environment. An operational use case spontaneously emerged, and now they not only have their own ideas of what to look for, but also our content to do that.
Gardner: You bring up a point here that we shouldn't overlook. This isn't something that you just build and walk away from. It requires ongoing refinement and tuning. The dynamic nature of virtualization, while perhaps automated in terms of allocating resources, is an overall process that needs to be managed in order for these business outcomes to be enjoyed.
Let's go back to Chris at Unisys. Tell us about the ongoing nature of virtualization. How do you keep on top of it? How do you keep it performing well, and perhaps even eke out more optimized utilization benefits?
Hoff: There's not a whole lot of difference from how you might approach the same issues in non-virtualized infrastructure. It's not a monolithic, one-time event; as I alluded to in a previous answer, the next extension is an evolution along the continuum. That notion of real-time infrastructure really does take in a lot of tasks.
Today, we are quite operationally inefficient at doing that, both from the perspective of practice and of infrastructure utilization. We need to make sure that our infrastructure -- the compute, the storage, and all of the things that go into it -- becomes much more efficient in terms of power, cost, utility, and flexibility.
When you unite all of those capabilities, what it's going to mean going forward is a much richer methodology and model for taking business process and instantiating it as an expression of policy within your infrastructure. So, you can say that the things most important to your business are these processes and these services.
What you need to be able to do -- and ultimately what this means for automation and the efficiency problems -- is have the infrastructure self-govern, self-provision, and re-provision. You need to be able to allocate cost back to your constituents, and it gets closer and closer to becoming a loose, but federated, group of services that can essentially play and interact in real time to service the needs of the business.
All the benefits that we get out of virtualization today are just the beginning, a springboard for what we are going to see in terms of automation, which is great. But we run into the same problem set as we pogo along this continuum: trying really hard to unite this notion of governance and making sure that just because you can, doesn't mean you should. In certain instances, the business processes and policies might prescribe that you not do some things that would otherwise be harmful from your perspective.
It's that delicate balance of security versus operational agility that we need to get much better at, and much more intelligent about, as we use virtualization as an enabler. That's going to bring some really interesting and challenging things to the forefront in the way IT operates -- the benefits and the differences alike.
Gardner: In the way that you were describing this continuum, it almost sounds like you were alluding to cloud computing, as it's being defined more and more -- and perhaps the “private cloud,” where people would be managing their internal enterprise IT resources from a cloud perspective. Am I overstating it?
Hoff: No, I don't think you're overstating it. I think that's a reasonable assertion and assumption based on what I am saying. The difficulty in using the "cloud" word is that it means a lot of things to lots of people. I think you brought up three definitions in your one sentence.
But the notion of being able to essentially utilize our resources pretty much anywhere, regardless of who owns the infrastructure, is something that's enticing and brings up a host of wonderful issues that make security people like me itchy.
If you read Nicholas Carr's book The Big Switch, and you think about utility or grid computing, or whatever you want to call it -- the notion of being able to better utilize my resources, balance that with security, and be very agile -- it's fun times ahead. You are absolutely right. I was alluding to the C-word, yes.
Gardner: Okay. Charu at VMware, given that organizations are at different rates of adoption around virtualization -- some are just starting to test the waters -- but the end goal for some of these adopters could be this cloud-compute value, this fabric of IT value.
How are people getting started, and how should they get started in a way that sets them up for this longer-term payoff?
Chaubal: That's a very broad question, but I think it is important to note that you can go in and use virtualization simply to consolidate physical servers onto a smaller number of physical servers, and you get savings that way. But if that's the only approach you take, you might end up at a dead end, or you might get off on a tangent somewhere.
What we find is that there is really a maturity curve when it comes to virtualization adoption, and one of the most important axes along that curve is, in a broad sense, your operational maturity.
When you are starting out, sure, go ahead and consolidate servers. That's a good way to get some quick wins, but you're rapidly going to come to a point where you need to start imposing an operational discipline, and policies and procedures, that perhaps you didn't have before.
Perhaps you had them, but they weren't all that rigidly adhered to or weren't really followed all the time. The most important thing is that you start thinking about this operational maturity, and then move to things like standardizing your processes and standardizing the way things are configured.
For any kind of process, make sure it goes through the right steps in terms of getting approved. There is a whole methodology around that, and that's one of the things we spend a lot of time on with our customers.
We have this graph where, if you look at how many servers are virtualized over time, we would like to see a steady, upward, 45-degree angle to that curve. If somebody virtualizes too many too soon, you will see that curve shoot up sharply. Then you have to backtrack, because you virtualized so much so quickly, and all these other issues that Chris alluded to come into play, and they might bog you down.
On the other hand, you could suffer the other extreme, where you virtualize so slowly that the curve is very shallow, and you end up leaving savings and benefits on the table, because you are picking them up so slowly.
Gardner: Missed opportunities, right?
Chaubal: Right, exactly. The most important thing, when you are starting out, is to keep in mind that you are not just installing a piece of software that will optimize what you already have. It's really a fundamental transformation in how you do things.
Gardner: Okay, let's take the last question to Anton at LogLogic. How do you recommend people get started, particularly in reaching this balance between not wanting to miss opportunities -- wanting to ramp up quickly and enjoy the benefits that virtualization provides -- and doing it in such a way that they get that visibility and analytics, and can set themselves up to be risk-resistant, but also strategic in their outlook?
Chuvakin: I'll use the case that I just presented to illustrate the way to do it. As has happened with many technologies before virtualization, people will sometimes deploy it in a manner that really makes auditing and monitoring pretty hard. Then they have to go back and figure out what the technology offers in terms of transparency and visibility.
I suggest that, as people deploy VMware and other virtualization platforms, they instantly connect those to their log-management tools, and that log collection starts day one.
Admittedly, most of those organizations will not know what to do with those logs at first, but having them is an important first step. Even if you don't know how to analyze the logs, or what they mean, or what they're trying to tell you, you still have that repository to fall back on.
If you have to investigate an incident or an operational issue in the environment, you still have the ability to go back and say, "Oh, something of that sort already happened to me once. Let's see what else occurred at the same time."
Even if you don't have the skills to delve into the full scope of how to analyze all these signals that the virtual infrastructure is sending, I would focus first on collecting the data and having it available for analysis. When you do that, your further steps, when you make sense of the data, will be much easier, much more transparent, and much more doable overall.
You will have to learn what the signals are and what information is being emitted by your virtual infrastructure, and then draw conclusions from that. But to analyze the information, draw conclusions, and figure out what's going on, you have to have the original data.
It's easier to collect the data early, because it's really not a big deal. You just send those logs to LogLogic or another log-management system, which is capable of doing that right away. Admittedly, you have to pick a system, such as LogLogic, that can support your virtualization infrastructure, and then you can build up your analysis, your understanding, and your true visibility -- the next layer of intelligence -- as you go. Don't try to do the analysis right away, but start collecting day one.
Gardner: Right, visibility early and often. I appreciate your input. We have been talking about virtualization -- how to do it right, how to enjoy lower risk, understanding security implications, but at the same time moving aggressively as you can, because they are significant economic benefits.
Helping us understand virtualization in this context, we have been joined by Charu Chaubal, senior architect in technical marketing at VMware. Thank you, sir.
Chaubal: Thank you.
Gardner: Also Chris Hoff, chief security analyst at Unisys. I really appreciate your input, Chris.
Hoff: Thanks, very much.
Gardner: And also, Dr. Anton Chuvakin, chief logging evangelist and also a security expert at LogLogic. Thank you, sir.
Chuvakin: Thank you so much for inviting me.
Gardner: I would like to thank our sponsor for this podcast, LogLogic. This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a BriefingsDirect podcast. Thanks, and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Transcript of BriefingsDirect podcast on the management and security challenges of virtualization. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to BriefingsDirect. Today, we present a sponsored podcast discussion about virtualization: how to improve the management of virtualization, how to gain better security using virtualization techniques, and how to find methods for compliance and regulation -- without the pitfalls of complexity and mismanagement.
We're going to be talking about virtualization best practices with several folks who are dealing with this at several different levels. We're going to be hearing from VMware, Unisys, and LogLogic.
Let me introduce our panel today. First, we're joined by Charu Chaubal, senior architect for technical marketing, at VMware. Welcome, Charu.
Charu Chaubal: Thank you.
Gardner: We're also joined by Chris Hoff, chief security architect at Unisys. Hi, Chris.
Chris Hoff: Hi, how are you?
Gardner: Great. Also, Dr. Anton Chuvakin, chief logging evangelist and a security expert at LogLogic. Welcome to the show.
Dr. Anton Chuvakin: Hello. Thank you.
Gardner: Virtualization has certainly taken off, and this is nothing new to VMware. Organizations like Unisys are now doing quite a bit to help enterprises utilize, expand, and enjoy the benefits of virtualization. But virtualization needs to be done the correct way, and the pitfalls avoided. If you do it too tactically, without making it part of an IT lifecycle and without management, then the fruits and benefits of virtualization can be largely lost.
Before we get into what virtualization can do, what to avoid, and how to better approach it, I'd like to just take a moment and try to determine why virtualization is really hot and taking off in the market now.
Let's start with Chris Hoff at Unisys. Some of these technologies have been around for many years. What is it about this point in time that is really making virtualization so hot?
Hoff: It's the confluence of quite a few things, and we see this sort of event happen in information technology (IT) quite often. You have practically a perfect storm of economics, technology, culture, and business coming together at one really interesting point in time.
The first thing that comes to mind is the benefits. The reasons people are virtualizing are cost savings and cost avoidance, usually seconded by agility and flexibility. It's also about being able, as an IT organization, to serve your constituent customers in a manner more in line with the way the business functions, which is, in many cases, at quite a fast pace, with the need to be flexible.
These things are contributing a lot to the uptake, not to mention the advent of a lot of new technology in both hardware and software, which is starting to enable some of this to be more realistic in a business environment.
Gardner: Now over to VMware. Charu, tell us how deep and wide virtualization adoption has become. It seems like people are using it in more and more ways, and in more and more places.
Chaubal: That's right. When x86 virtualization first started out in a big way, maybe 10 years ago, it was largely being used in test and development environments. Over the last five years, it has definitely started to enter the production arena as well. We see more and more customers running even mission-critical applications on virtualization technologies.
Furthermore, we also see it across the board in terms of customer size, where everyone from the smallest customers to the very largest enterprises is expanding further and further with their virtual environments.
Gardner: Let's go to LogLogic. Tell me, Anton, what sort of security and preventative measures are you helping your customers with, in terms of gaining visibility and analytics about what's going on among these many moving parts? Many of these deployments are now in an automated mode, more so than before they were virtualized. What are some of the issues you are helping people deal with?
Chuvakin: You were exactly right about the visibility into the environments. As people deploy different types of IT infrastructure, first physical and now virtual, there is always the challenge of figuring out what happens with those systems and on those systems -- which people are trying to connect to them, or even attack them -- and doing all of this around the clock.
Adding virtualization to the technology people use, in such a massive way as is occurring now, brings up the challenge of knowing what happens in those environments. Is anybody trying to abuse them, just use them, or use them inappropriately? Is there a lack of auditability and control in those environments? Logs are definitely one of the ways -- I would say a primary way -- of gaining that visibility for most IT compliance, and virtualization is no exception.
As a result, as people deploy VMware and run applications on those virtual platforms, the challenge is knowing what actually happens on those platforms, what happens in those virtual machines (VMs), and what happens with the applications. Logging and LogLogic play a very critical role in not only collecting those bits and pieces, but also creating a big picture of that activity across the organization.
Virtualization definitely solves some of the problems, but at the same time it brings in new things that people really aren't used to dealing with. For example, it used to be that if you monitored a server, you knew where the server was, you knew how to monitor it, and you knew what applications ran there.
In virtual environments, that is still true, but there is an additional layer: the server can move somewhere else. You have to monitor where it was moved and where it is now, and keep monitoring as servers come up and down, disappear, and get moved.
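To make that tracking concrete, here is a minimal sketch of replaying migration events from collected logs to answer "where is this VM right now?" The log format, field names, and host names are hypothetical stand-ins; real VMware event logs differ, so the pattern below is illustrative only.

import re

# Hypothetical migration-event format; real virtualization logs differ,
# so treat this pattern and its fields as illustrative assumptions.
MIGRATION = re.compile(
    r"MIGRATE vm=(?P<vm>\S+) from=(?P<src>\S+) to=(?P<dst>\S+)"
)

def track_vm_locations(log_lines):
    """Replay migration events in order to map each VM to its current host."""
    location = {}
    for line in log_lines:
        match = MIGRATION.search(line)
        if match:
            location[match.group("vm")] = match.group("dst")
    return location

sample = [
    "2008-09-12 10:02:11 MIGRATE vm=payroll-01 from=esx-a to=esx-b",
    "2008-09-12 11:45:03 MIGRATE vm=payroll-01 from=esx-b to=esx-c",
]
print(track_vm_locations(sample))  # {'payroll-01': 'esx-c'}

The point is not the parsing itself, but that the question "where did this server go?" becomes answerable only if the events were collected in the first place.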
Gardner: Now, Chris at Unisys, when you're dealing with customers, based on what we've heard about this expansion of virtualization, you're dealing with it on an applications level, and also on the infrastructure and server level.
What’s more, some folks are now getting into desktop virtualization infrastructure and delivering whole desktop interfaces out to end-user devices. This impacts not just a server. We're talking about network devices and storage devices. This is a bit more than a tactical issue. It really starts getting strategic pretty quickly.
Hoff: That's absolutely correct. If you really look at virtualization as an enabling technology or platform, and look out over the next three years of large companies' strategic plans, you'll notice that there is a large trend toward what you might call "real-time infrastructure."
The notion here is how you take this enabling technology and the benefits of virtualization and leverage them to provide automation and re-purposing. You also have to deal with elements and issues that relate to charge-back for assets, as IT becomes more of a utility service.
If we look further out from there, we get to the governance issues of what it means to not really focus on hardware anymore, or even applications, but on services and service levels. It gets a lot more strategic, played out all along that continuum.
While we focus on virtualization as infrastructure and technology, what's really starting to happen now -- and what's important with the customers we deal with -- is uniting business process and business strategy with the infrastructure and the architecture that support them.
So we're a little excited and frothed up as it relates to all the benefits of virtualization today, and the bigger picture is even more exciting and interesting. That's going to fundamentally continue to change what we do and how we do it as we move forward. Visibility is very important, but understanding the organizational and operational impacts that real-time infrastructure and virtualization bring is really going to be an interesting challenge for folks to get their hands around.
Gardner: Now, Charu at VMware, you obviously are building out what you consider the premier platform and approach to virtualization technically. You've heard, obviously, the opportunity for professional services and methodologies for approaching this, and you have third parties like LogLogic that are trying to provide better visibility across many different systems and devices.
How are you using this information in terms of what you bring to the management table for folks who are moving from, say, tactical to more strategic use of virtualization?
Chaubal: A lot of customers are expanding their virtualization so much now that they're hitting interesting challenges they wouldn't have hit before. One great example is compliance, such as with the Payment Card Industry (PCI) Data Security Standard. There are a lot of questions right now around virtualizing the systems that process credit-card holder data.
They're asking, "If I do this, am I going to be compliant with PCI? Is this something that's a realistic possibility? If it is, how do I go about demonstrating this to an auditor?"
This is where partners like LogLogic come into play, because they have the tools that can help achieve this. We believe that VMware provides a compliance-ready type of platform, so it is something you can achieve compliance with. But, in order to demonstrate and maintain that compliance, it's useful to have these tools from partners that can help you do that.
Gardner: Now, Anton at LogLogic, you're able to examine a number of different systems, gather information, correlate that information, do analytics, and provide a picture of what should be happening. Or, when something is not happening, you can look for the reasons why and look for aberrant or unusual behavior. So let's address security a little bit.
What are some of the challenges in terms of security when you move from a physical environment for compute power and resources to a virtualized environment? Then second, what about the mixture? It is obviously going to be both physical and virtualized instances of infrastructure and applications. Tell us about the security implications.
Chuvakin: I'll follow the same logic I used for our recent webcast on virtualization security. In that webcast, I presented a full view of the things that are the same and the things that are different in virtualized environments. I'll use the same structure here, because some people who get too frothy, as Chris put it, about virtualization stick to "virtualization changes everything." That is sometimes used as an excuse not to do things that you should continue doing in a virtualized environment.
Let's start with what stays the same. When you migrate from a physical to a virtual infrastructure, you still have servers, applications running on those servers, and people managing those servers. That means you should keep the same monitoring, auditing, and security technologies you already use. You shouldn't stop. You shouldn't throw away your firewalls. You shouldn't throw away your log-analysis tools, because you still have servers and applications.
They might be easier to monitor in virtual environments; sometimes they might be harder. But you shouldn't abandon things that are working for you in the physical environment just because virtualization changes a few things. The fact that you still have applications and servers, and that they serve business purposes, means you should keep doing the useful things you're doing now.
Then, on top of what you already have, there is the layer of new things that come with virtualization. The fact that a server might be there one day and gone tomorrow -- or not be there one day, then get built up, used for a while, and removed -- definitely brings new challenges to security monitoring and security auditing, in figuring out who did what, and where.
The definition of "who" didn't change. It's still a user. But "what" and "where" definitely did change. If an action was performed on a certain server, in a virtual environment that might not be a physical server -- it might be a virtual image, which adds additional complexity.
There are also new things that have no counterpart in the physical environment -- for example, a rogue VM, a VM built by somebody who is not authorized to run VMs. It might even be an end user who has his own little mini-infrastructure. That brings up all sorts of forensic challenges that you now have to solve. You don't just investigate a machine. You investigate a machine with a virtual platform, with another server on top, or another desktop on top.
That is my view of the things that are the same, which you should continue doing, and the things that are new, which you should start learning: how to audit and analyze activity in virtual environments, and how to do forensics when what you have is a machine with a potential rogue VM.
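One of those new checks can be sketched very simply: compare the set of VMs observed in collected logs against an authorized inventory, and flag the difference. The inventory and the observed set below are hypothetical stand-ins for a real CMDB export and a real log-management query.

# Rogue-VM detection as a set difference: anything seen in the logs
# that is absent from the authorized inventory deserves a closer look.
authorized_vms = {"payroll-01", "web-02", "db-03"}            # from the CMDB (assumed)
observed_vms = {"payroll-01", "web-02", "db-03", "test-99"}   # from collected logs (assumed)

for vm in sorted(observed_vms - authorized_vms):
    print("ALERT: unauthorized VM observed in logs:", vm)

In practice the observed set would be built from the same log repository discussed throughout this podcast, which is another argument for collecting those logs from day one.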
Gardner: How about you, Chris at Unisys, how do you view implications for security and risk mitigation when it comes to moving increasingly into virtualized environments?
Hoff: I have to take a pretty pragmatic approach. The reality is that there are three conversations and three separate questions that need to be addressed, when you're talking about security in virtualized environments.
Unfortunately, what usually happens is that all three of them are combined into one giant question, which tends to lead to more confusion. So I like to separate the virtualization and security questions into three parts.
One of them is securing virtualization, and understanding what the impacts are on your architecture, your infrastructure, and your business process and models, when you introduce this new virtualization layer. That's really about securing the underlying virtualization platforms and understanding what happens and what changes when you introduce that, assuming that you have a decent understanding of what that means, and how that will ultimately flow down operationally.
The second point or question to address is one of virtualizing security, which is actually the operational element of, "What does it mean, and how do I go about taking what I might do in the physical world, and replicate that and/or even improve it in the virtual world?"
That's an interesting question, assuming that you have a good understanding of architecture and things that matter most to you, and how you might protect them, or how you might not be doing that. You may find several gaps today in your ability to actually do what you do in the physical world.
The third element is security through virtualization. That one asks: okay, assuming I have a good architectural blueprint, I understand the impacts and the models, I know who and what changes operationally, and I know how to go about securing things, what benefits do I get out of virtualization?
How do I actually improve my security posture by using these platforms and this technology? If you look at it that way, you're really able to start dealing with the issues associated with each category. You can probably guess that if you mix all three of them up, you go down one path and are very easily distracted by another.
When we break out the conversations with customers like that, it always comes back to a very basic premise that we seem to have forgotten in our industry. Despite all the technology, despite all the tools, and all of the things that go blinky-blink at night, the reality is that this comes down to being able to appropriately manage risk. That starts with understanding the things that matter to you most and using risk assessment frameworks and processes.
In a gross analogy, when you go to the grocery store, you take the time to pack your frozen goods in one bag, and your canned goods and your soft goods in other bags. You use that compartmentalization because you understand the impact one kind of good can have on another.
If you get home and you've got canned goods in with your fruit, the reality is that you haven't done a good job of compartmentalizing.
The same thing applies in the virtual world, where you have to balance all the wonderful mobility against compliance and security needs. If you don't take the time to go back to basics and understand the impact of the infrastructure changes, you're going to be in a world of hurt later, even if you get the cost benefits and all the wonderful agility and mobility.
We approach it pragmatically and in a rational manner, so that people understand both the pros and the cons of virtualization in their environments.
Gardner: We've determined that virtualization is quite hot. It's ramping up quickly. A number of studies have shown a 50-70 percent increase in the use of virtualization in the last few years. Projections continue for very fast-paced growth.
We also see a number of organizations using multiple vendors, when it comes to virtualization. We've also discussed how security and complexity apply to this, and that you need a comprehensive or contextual view of what's going on with your systems -- particularly if you have a mixture of physical and virtual.
Let's look at some examples of how this has been mitigated, how the risk has actually been decreased, and how the fruits, if you will, of virtualization are enjoyed without the pitfalls.
Let's first go to Charu at VMware. Can you offer some examples of how people have used virtualization, done it the right way, avoided some of these pitfalls, and gained the visibility and analytics that support a mature approach to virtualization?
Chaubal: One thing we've done at VMware over the last year and a half is try to provide as much prescriptive guidance as we can. A lot of securing virtualization comes down to making sure you actually deploy it [properly].
So, one thing that we've done is created hardening guides that really aim to show customers how this can be done. That's proved to be very popular among our customers.
Not to get into too much detail, but one of the main issues is the fact that you have a virtualization layer that typically has a management interface in it. Then, you have the interface that goes into your virtual machines. People need to understand that this management layer needs to be completely separated from the actual production network.
That principle is manifested in different recommendations and scenarios when you plan a deployment and configure it. That's just one example where customers have been able to make use of our prescriptive guidance and architect something that is actually much more secure than what they might have built based on preconceived notions. I think that's one area where we are seeing success.
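That separation principle also lends itself to a mechanical spot-check. Here is a minimal sketch, in Python, that flags management interfaces falling outside an isolated management subnet; the subnet and the interface inventory are hypothetical assumptions, not values from any hardening guide.

import ipaddress

# Assumed management subnet and interface inventory; substitute your
# own addressing plan. The check flags any management interface that
# is not on the isolated management network.
MGMT_NET = ipaddress.ip_network("10.10.0.0/24")

mgmt_interfaces = {
    "esx-a": "10.10.0.5",
    "esx-b": "10.10.0.6",
    "esx-c": "192.168.1.7",   # misconfigured: sits on the production LAN
}

for host, addr in mgmt_interfaces.items():
    if ipaddress.ip_address(addr) not in MGMT_NET:
        print("WARNING:", host, "management interface", addr, "is outside", MGMT_NET)

A check like this doesn't replace the prescriptive guidance, but it turns one of its recommendations into something an auditor can re-run.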
Gardner: Let's go to LogLogic. Anton, give us some examples, actual companies or at least use-case scenarios, where the use of LogLogic, or the methodologies it supports, has been brought to bear on virtualization -- to lower costs, increase performance, gain higher utilization, and so forth -- but without some of these risks.
Chuvakin: I'll give an example of a retail company that was using LogLogic for compliance, as well as for operational uses, such as troubleshooting their servers. In a separate project, this company was implementing virtualization, converting some of their infrastructure to virtual machines.
At some point, the two projects collided. The company had log management in place to track operations and satisfy PCI requirements, and it realized that it now had to collect logs not just from the physical infrastructure, but also from the virtual side that was being built.
What happened was that the logs from the virtual infrastructure were also streamed into LogLogic. LogLogic has the ability to collect any type of log, and in this case we used that capability to collect logs that were, at the time, not even natively supported or analyzed by LogLogic.
The customer understood that they had to collect the logs from the virtual platforms, and that LogLogic had the ability to collect any type of log. They started with a log-collection effort, so that they could always go back and say, "We've got this data somewhere, and you can go and investigate it."
We also built up a package of content to analyze those logs while the collection effort was under way. At LogLogic, we built and set up reports and searches to help them go through the data. So it really went in parallel: the customer ran a collection effort that included logs from the virtual platform, and we built up analytic content to make sense of the data.
It was actually a great success story, because we used the part of the LogLogic infrastructure that doesn't rely on any preconceived notions of what the logs are. Then they built on top of that to help pinpoint issues with their VMs -- to see who accesses the platforms, what applications people use to manage the environment, and, basically, to track all sorts of interesting events in their virtual infrastructure.
I have to admit that this hasn't really been tested in a PCI audit yet, but I'm pretty confident that their PCI auditors will accept what they did for the virtual environment, and that they will satisfy the requirements of PCI, which call for logging and monitoring, as well as the other requirements in the compliance mandate.
At the same time, while they built it for that use, their analysts are already running searches and looking for certain things that might be out of order in their VM environment. An operational use case spontaneously emerged, and now they have not only their own ideas about what to look for, but also our content to do it with.
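As a sketch of what such search-and-report content might look like, here is a small example that counts logins, and failed logins, per user on a management platform. The LOGIN log format is a hypothetical placeholder; real platform logs would need their own patterns.

import re
from collections import Counter

# Hypothetical login-event format; real management-platform logs differ.
LOGIN = re.compile(r"LOGIN user=(?P<user>\S+) result=(?P<result>\S+)")

def access_report(log_lines):
    """Count total and failed logins per user from collected log lines."""
    logins, failures = Counter(), Counter()
    for line in log_lines:
        m = LOGIN.search(line)
        if not m:
            continue
        logins[m.group("user")] += 1
        if m.group("result") != "success":
            failures[m.group("user")] += 1
    return logins, failures

sample = [
    "2008-09-12 09:00:01 LOGIN user=alice result=success",
    "2008-09-12 09:05:42 LOGIN user=bob result=failure",
    "2008-09-12 09:05:50 LOGIN user=bob result=failure",
]
logins, failures = access_report(sample)
print(logins.most_common())    # [('bob', 2), ('alice', 1)]
print(failures.most_common())  # [('bob', 2)]

Reports like "who accessed the platform" reduce to exactly this kind of aggregation once the raw events are flowing into the repository.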
Gardner: You bring up a point here that we shouldn't overlook. This isn't something that you just build and walk away from. It requires ongoing refinement and tuning. The dynamic nature of virtualization, while perhaps automated in terms of allocating resources, is an overall process that needs to be managed in order for these business outcomes to be enjoyed.
Let's go back to Chris at Unisys. Tell us about the ongoing nature of virtualization. How do you keep on top of it? How do you keep it performing well, and perhaps even eke out more optimized utilization benefits?
Hoff: There's not a whole lot of difference in how you might apply the same question to non-virtualized infrastructure. It's not a monolithic, one-time event. As I alluded to in a previous answer, the next step is evolution along the continuum, and that notion of real-time infrastructure really takes in a lot of these tasks.
Today, we are quite operationally inefficient in doing that, both in our practices and in our infrastructure utilization. We need to make sure that our infrastructure -- the compute, the storage, and everything that goes into it -- becomes much more efficient in terms of power, cost, utility, and flexibility.
When you unite all of those capabilities, what it's going to mean going forward is a much richer methodology and model for taking business process and instantiating it as an expression of policy within your infrastructure. So you can say that the things most important to your business are these processes and these services.
What you need to be able to do -- and ultimately what it means for automation and the efficiency problems -- is that the infrastructure needs to self-govern, self-provision, and re-provision. You need to be able to allocate cost back to your constituents, and it gets closer and closer to becoming a loose but federated group of services that can essentially play and interact in real time to service the needs of the business.
All the benefits we get out of virtualization today are just the beginning, a springboard for what we are going to see in terms of automation, which is great. But we run into the same problem set as we pogo along this continuum: trying really hard to unite this notion of governance and making sure that just because you can, doesn't mean you should. In certain instances, business processes and policies might prescribe that you not do some things that would otherwise be harmful from your perspective.
It's that delicate balance of security versus operational agility that we need to get much better at, and much more intelligent about, as we use virtualization as an enabler. That's going to bring some really interesting and challenging things to the forefront in the way IT operates -- benefits and then differences.
Gardner: In the way that you were describing this continuum, it almost sounds like you were alluding to cloud computing, as it's being defined more and more -- and perhaps the “private cloud,” where people would be managing their internal enterprise IT resources from a cloud perspective. Am I overstating it?
Hoff: No, I don't think you're overstating it. I think that's a reasonable assertion and assumption based on what I am saying. The difficulty in using the "cloud" word is that it means a lot of things to lots of people. I think you brought up three definitions in your one sentence.
But the notion of being able to essentially utilize our resources pretty much anywhere, regardless of who owns the infrastructure, is something that's enticing and brings up a host of wonderful issues that make security people like me itchy.
If you read Nicholas Carr's book, The Big Switch, and you think about utility or grid computing, or whatever you want to call it -- the notion of being able to better utilize my resources, balance that with security, and be very agile -- it's fun times ahead. You're absolutely right. I was alluding to the C-word, yes.
Gardner: Okay. Charu at VMware, organizations are at different stages of virtualization adoption -- some are just starting to test the waters -- but the end goal for some of these adopters could be this cloud-compute value, this fabric of IT value.
How are people getting started, and how should they get started in a way that sets them up for this longer-term payoff?
Chaubal: That's a very broad question, but an important one. You can go in and use virtualization to consolidate physical servers onto a smaller number of physical servers, and you get savings that way. But if that's the only approach you take, you might end up at a dead-end, or you might get off on a tangent somewhere.
What we find is that there is really a maturity curve when it comes to virtualization adoption, and one of the most important axes along that curve is, in a broad sense, your operational maturity.
When you are starting out, sure, go ahead and consolidate servers. That's a good way to get some quick wins, but you're rapidly going to come to a point where you need to start imposing operational discipline and policies and procedures that perhaps you didn't have before.
Perhaps you had them, but they weren't all that rigidly adhered to, or weren't really followed all the time. The most important thing is that you start thinking about this operational maturity, and then move on to things like standardizing processes and standardizing the way things are configured.
For any kind of process, make sure it goes through the right steps in terms of getting approved. There is a whole methodology around that, and it's one of the things we spend a lot of time on with our customers.
We have a graph where, if you look at how many servers are virtualized over time, we would like to see a steady, upward, 45-degree angle to that curve. If somebody virtualizes too many too soon, you will see that curve shoot up sharply. Then you stall, because you virtualized so much so quickly that all the other issues Chris alluded to come into play, and they can bog you down.
On the other hand, you could suffer the other extreme, where you virtualize so slowly that the curve is very shallow, and you end up leaving savings and benefits on the table, because you are picking them up so slowly.
Gardner: Missed opportunities, right?
Chaubal: Right, exactly. The most important thing, when you are starting out, is to keep in mind that you are not just installing a piece of software that will optimize what you already have. It's really a fundamental transformation in how you do things.
Gardner: Okay, let's take the last question to Anton at LogLogic. How do you recommend people get started, particularly in striking this balance between not missing opportunities -- ramping up quickly to enjoy the benefits that virtualization provides -- and doing it in such a way that they get that visibility and analytics, and can set themselves up to be risk-resistant but also strategic in their outlook?
Chuvakin: I'll use the case I just presented to illustrate the way to do it. As has happened with other technologies before virtualization, people will sometimes deploy it in a manner that really makes auditing and monitoring pretty hard. Then they have to go back and figure out what the technologies are doing in terms of transparency and visibility.
I suggest that, as people deploy VMware and other virtualization platforms, they instantly connect those to their log-management tools, so that log collection starts on day one.
Admittedly, most organizations would not know what to do with those logs at first, but having them is an important first step. Even if you don't know how to analyze the logs, what they mean, or what they're trying to tell you, you still have that repository to fall back on.
If you have to investigate an incident or an operational issue in the environment, you still have the ability to go back and say, "Oh, something of that sort already happened to me once. Let's see what else occurred at the same time."
Even if you don't yet have the skills to delve into the full scope of analyzing all the signals that virtual infrastructure is sending, I would focus first on collecting the data and having it available for analysis. When you do that, your further steps, when you make sense of the data, will be much easier, much more transparent, and much more doable overall.
You will have to learn what the signals are and what information is being emitted by your virtual infrastructure, and then draw conclusions from that. But to analyze the information, draw conclusions, and figure out what's going on, you have to have the original data.
It's easier to collect the data early, because it's really not a big deal. You just send those logs to LogLogic or your log-management system, which is capable of handling that right away. Admittedly, you have to pick a system, such as LogLogic, that can support your virtualization infrastructure. Then you can build up your analysis, your understanding, and your true visibility -- the next layer of intelligence -- as you go. Don't try to do the analysis right away, but start collecting on day one.
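Operationally, the day-one connection Chuvakin describes can be as simple as pointing events at a syslog collector. Here is a minimal sketch using Python's standard SysLogHandler; the collector hostname and port are placeholders, and the assumption that the collector accepts plain UDP syslog on port 514 should be checked against your own log-management system.

import logging
from logging.handlers import SysLogHandler

# Forward events to a remote log-management collector over syslog.
# "loghost.example.com" and UDP port 514 are placeholder assumptions.
handler = SysLogHandler(address=("loghost.example.com", 514))
handler.setFormatter(logging.Formatter("virt-mgmt: %(levelname)s %(message)s"))

logger = logging.getLogger("virt-mgmt")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("VM payroll-01 powered on by user alice")

Nothing analyzes anything here; the log lines simply start accumulating at the collector, which is precisely the "collect first, analyze later" posture being recommended.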
Gardner: Right, visibility early and often. I appreciate your input. We have been talking about virtualization -- how to do it right, how to enjoy lower risk, and how to understand the security implications, while at the same time moving as aggressively as you can, because there are significant economic benefits.
Helping us understand virtualization in this context, we have been joined by Charu Chaubal, senior architect in technical marketing at VMware. Thank you, sir.
Chaubal: Thank you.
Gardner: Also Chris Hoff, chief security architect at Unisys. I really appreciate your input, Chris.
Hoff: Thanks, very much.
Gardner: And also, Dr. Anton Chuvakin, chief logging evangelist and also a security expert at LogLogic. Thank you, sir.
Chuvakin: Thank you so much for inviting me.
Gardner: I would like to thank our sponsor for this podcast, LogLogic. This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a BriefingsDirect podcast. Thanks, and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.
Transcript of BriefingsDirect podcast on the management and security challenges of virtualization. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.