Transcript of BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe Conference in Las Vegas, Nevada the week of June 16, 2008.
Listen to the podcast here. Sponsor: Hewlett-Packard.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to a special BriefingsDirect podcast recorded live at the Hewlett-Packard Software Universe Conference in Las Vegas. We are here in the week of June 16, 2008. This sponsored HP Software Universe live podcast is distributed by BriefingsDirect Network.
Today, we welcome Ben Horowitz. He is the vice president and general manager of HP's Business Technology Optimization (BTO) software unit. Welcome to the show.
Ben Horowitz: Thanks very much, Dana, it's exciting to be here.
Gardner: I really enjoyed your presentation on the main stage this morning. As we are in Vegas, I thought you had a good stand-up comedian style. You had the audience in the palm of your hand.
Horowitz: Well, I appreciate that, but I'll try to leave the jokes on stage.
Gardner: Well, we try to make these podcasts entertaining too. One of the things that's a recurring theme here is, of course, data center transformation, helping enterprises move their data centers to a higher level of efficiency and also cut costs. Part of that equation requires that your applications are built well, too.
So, I want to talk a little bit about this whole notion of better practices and standard methodologies around good application development, testing, and deploying, more towards the lifecycle approach. One of the things that I have heard described from the BTO organization is this notion of application lifecycle optimization. I wonder if you could unpack that a little bit for us.
Horowitz: Sure, when you look at the history of applications, and Web applications in particular, originally it was great. There was this new way to develop the applications, it was much easier than the old way, and people developed a lot of applications quickly. Then, they put them out there, and the applications didn't work too well.
So, the first thing companies tackled was testing -- functional testing and performance testing. Of course, HP owns the most famous product line in that space, the franchise that was Mercury, with our great Quality Center and Performance Center products, and those have been terrific.
We have heard from customers, as they have got more and more sophisticated, that what they'd really like to do is be able to map very, very precisely everything that they are doing with that application, from the point that they set the business priority.
So, there is something in the business that says, "We have to solve this problem. We have to provide the service, and therefore, we are going to create some functionality and build an application, test it, understand its performance characteristics, make sure it's secure, and then put it out in the environment." The question that we asked ourselves was, "How do you do that, and how do you make sure that you are aligned?"
Gardner: That raises this issue about boundaries. There have been boundaries between elements within application development, which application management techniques and products help fix, but there is a larger boundary between what happens on the application side and what happens on the operational side.
Now that we are in the era of service-oriented architecture (SOA) and virtualization, these boundaries no longer can stand. What is it that you are doing with your products and your announcements here at Software Universe that can help organizations overcome these inefficient boundaries?
Horowitz: First, we have done a great, new integration of our Project and Portfolio Management (PPM) software, which understands all of the business requirements and their priorities and the overall project status and project resourcing.
Then, we have taken that information and we have mapped it directly into Quality Center, into our requirements management module. So now, for every technical requirement that we go to test, we know exactly what the business driver for that was, and that becomes very, very powerful.
Then, those requirements feed in parallel into the quality organization, as well as into the development organization. By doing that, we support a much more agile project process, where the quality guys are coming up to speed in step with the developers.
And then, on the back end, we've integrated our new Application Security Center, an offering that we acquired with SPI Dynamics, into Quality Center. So, in a single place you can do functional testing, quality testing, as well as security testing.
So, at the end of the cycle, you have a completely tested product where you have a total understanding of where you are, versus the business requirements, and in fact, we enable you to generate a contract with the business that declares an understanding of the level of quality that you have achieved at the point of release.
Gardner: So, ameliorating the boundaries means bringing more people in earlier in the process, but doing it in a way that doesn't create chaos, that is organized through a workflow. Everyone's ultimately on the same page, but early enough in the process, where you can have an impact on the overall application.
Horowitz: Right, exactly. What we are doing is saying, "All these things that used to be conversations that may or may not have happened, and were never recorded, now become part of a very simple workflow that basically ensures you have alignment from end to end. So whatever you wanted to build, that's what you actually end up building."
And, having everyone in the organization on the same page is just tremendous -- not only a time saver, but also building the right thing in software development is probably the biggest thing that distinguishes good organizations from bad. The ones that solve the right problems are generally the ones who are successful.
Gardner: Another thing we are seeing in the market is the need for a lifecycle for applications that almost leaves them in perpetual development mode. Increasingly, with agile and with services, an application doesn't just go out into production and stay there for years at a time. How do the feedback loops work between requirements, deployment, and then refinement?
Horowitz: One of the great things that we have in the new integration that we have between PPM and Quality Center is that, when we go to manage the many, many incoming requests for changes (RFCs), we do that in the context of the business priorities.
So, by having right there all of the overall business priorities in conjunction with all of the change requests and all the technical risk assessments -- what is it going to cost us? How important is it to the business? How many requests do we have on it? -- all of that goes into the picture. Then, we are able to quickly figure out, "Okay, here are the changes that we ought to make to the application that are low enough risk and high enough payoff, that we think they make sense."
Then, once we have developed that, and we are ready to release, we have tight integration with our operational software. So, all of the things that we know at the time that we tested that change go into operation. So if you know, "Gee, once we get a million users, this change is going to cause the application to break," that's known on the operational side, so that they can be proactive in managing the consequences.
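To make that prioritization concrete, here is a minimal sketch, not HP's actual PPM or Quality Center logic; all field names, values, and the scoring formula are invented for illustration. It simply scores incoming requests for change against business priority, estimated cost, and technical risk so that low-risk, high-payoff candidates surface first:

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    """A hypothetical request for change (RFC) with the attributes discussed above."""
    name: str
    business_priority: int   # 1 (low) .. 5 (critical), set by the business
    estimated_cost: float    # person-days to implement and test
    technical_risk: float    # 0.0 (safe) .. 1.0 (very risky)

def payoff_score(rfc: ChangeRequest) -> float:
    """Higher is better: reward business priority, penalize cost and risk."""
    return rfc.business_priority / (rfc.estimated_cost * (1.0 + rfc.technical_risk))

rfcs = [
    ChangeRequest("new-claims-lookup", business_priority=5, estimated_cost=10, technical_risk=0.2),
    ChangeRequest("rewrite-auth-layer", business_priority=3, estimated_cost=40, technical_risk=0.8),
    ChangeRequest("fix-report-typo", business_priority=1, estimated_cost=1, technical_risk=0.05),
]

# Release the changes that are "low enough risk and high enough payoff."
for rfc in sorted(rfcs, key=payoff_score, reverse=True):
    print(f"{rfc.name}: score={payoff_score(rfc):.2f}")
```

The point is only that the business priority, the cost, and the risk all land in one place, so the decision about what to change is made against the business context rather than in isolation.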
Gardner: One of the announcements that you've made here at the event that caught my attention was the move towards a federated configuration management capability, that uses connectors and modules, and basically brings more information about the systems into a place where it can be viewed, I guess, moving towards greater visibility. Tell us a little bit about how greater visibility into what's going on within these systems ultimately helps with the total lifecycle benefits?
Horowitz: This is a very interesting and important question. There are lots of parts to the answer, but the first thing that people need to know about it is in HP Business Availability Center (BAC) we can tell you "Gee, this application is getting slow. Something about it is slow." And, that's good to know. But, the next thing that you might want to know is, "Did Opsware change anything in that application, that might have caused it to get slow?"
In order to answer that question, Opsware has to have the same view -- that is, the same definition of what BAC considered slow -- as BAC does. With the configuration management database (CMDB), that's the kind of thing that we are able to do. Similarly, if you want to open an incident on that application, Service Manager has to have that same definition.
That's the kind of high-level problem that we are trying to solve. Now, the way that we have done it with a federated configuration management system is unique in the industry.
We have seen approaches from certain competitors for certain products, whose names will go unmentioned, who have come up with the idea that, "If we just had all of the data in the same place, then the customer could do whatever they wanted with it." That seems like a good idea, but it turns out to be quite a bad idea.
The reason is that all of the data and all of the products have been optimized over many, many years for performance. Tons and tons of hours -- probably hundreds of man-years -- have gone into making that data very easy to retrieve for the people who need to use it in that context.
What the competitors do is say, "Well, that performance optimization will be an exercise for our customers. They will have to figure how to make the data perform." That has proven to be pretty much an everlasting job. Gartner reports that only 4 percent of these CMDB implementations have succeeded, and that those 4 percent probably were not trying to do anything too ambitious.
With our federated approach, we say, "Let the data live where the data lives. Let the products do what they know how to do. Get the benefit of all that performance engineering that's been done over all the years." What we provide is essentially a map or a directory to all of the bits of information, in a reconciled fashion, about, for example, a server.
In our product line we have server information stored in Opsware, stored in BAC, stored in HP OpenView, stored in Service Manager, and the CMDB is able to get those data sets from all those locations and reconcile them, so that you have a single notion of that server.
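As a rough sketch of that federation idea -- the source names echo the products Horowitz mentions, but the adapter functions, record shapes, and reconciliation rule here are invented -- the CMDB acts as a directory that fetches partial records from each tool and stitches them together on a shared key, rather than copying all the data into one store:

```python
# Hypothetical adapters: each returns what that tool knows about a server.
def query_opsware(server_id):
    return {"os": "RHEL 5", "patch_level": "2008-06"}

def query_bac(server_id):
    return {"avg_response_ms": 240, "availability": 99.95}

def query_service_manager(server_id):
    return {"open_incidents": 2, "owner": "claims-ops"}

SOURCES = {
    "opsware": query_opsware,
    "bac": query_bac,
    "service_manager": query_service_manager,
}

def federated_view(server_id):
    """Reconcile per-source records into a single view of one configuration item.

    The data keeps living in each source system; the CMDB only holds the map of
    where to look and how the pieces fit together for a given server.
    """
    record = {"server_id": server_id}
    for source, query in SOURCES.items():
        record[source] = query(server_id)   # federated fetch, not a bulk copy
    return record

print(federated_view("web-prod-017"))
```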
Gardner: This strikes me as something that can open up even wider federation. Think about all the information for governance, for example, in a registry/repository. Think about the information that's available through service level agreement policy engines. We can maybe start to break down the boundaries at yet another abstraction, another level. Does that make sense?
Horowitz: Yes, definitely. What we really have is the "master join," for those who know relational databases, amongst all of the data and all of the products. Here is a way that's very high performance, where we have done a tremendous amount of work on making it really easy to integrate. That can be a central way to get all of the data about the various things -- everything from a service level agreement (SLA), to a server, to a network device, to an application -- that you have running in your environment. That's something that nobody has and everybody would like. So, it's great to be able to ship that.
Gardner: Another announcement today was an alignment with VMware on some products. Tell us a little bit about first the market opportunity for the virtualization space and how management is an important element for people to actually attain the goals that they now fully understand with virtualization, and then how that relates back into our discussion about breaking down boundaries and finding more of the lifecycle benefit across development and design time into runtime in operations.
Horowitz: Virtualization is probably the most important megatrend in the data center right now. All the customers that we talk to are moving pretty aggressively toward a virtualized environment. That environment provides a ton of benefits, which is why everybody is going there, but it also creates a whole set of new challenges around management. Now, you've got another layer of abstraction. You've got another really complex piece of software to manage in the hypervisor, and that software needs everything from patches to configuration changes to upgrades.
You have to understand how all of that works together. By working with VMware, which really is the product leader in the space, we are able to bring all of the value of HP software to the virtualized environment.
So, it's great for the VMware customers, in that they get a real first-class management system that seamlessly moves across virtual and physical environments, and servers and network and storage. It's great for us, because it means that our customers, as they add VMware into their environment, have a solution that already works.
We think that everybody is going to be really excited about it. It's an R&D relationship, it's not a marketing relationship, so we think that we are going to get really good product results out of it, and it will be terrific for our customers.
Gardner: Well great. We really appreciate your time. We've had a conversation with Ben Horowitz, the vice president and general manager of HP's BTO software unit. You've had a busy schedule. I appreciate you taking some time.
This comes to you as a sponsored HP Software Universe live podcast recorded at the Venetian Resort in Las Vegas, Nevada. Look for other podcasts from this HP event at www.hp.com, under "Software Universe Live Podcasts," as well as through the BriefingsDirect Network. I would like to thank our producers on today's show, Fred Bals and Kate Whalen, and also our sponsor, Hewlett-Packard.
I'm Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time for more in-depth podcasts on enterprise software infrastructure and strategies. Bye for now.
Listen to the podcast. Sponsor: Hewlett-Packard.
Transcript of BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe Conference, in Las Vegas. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Tuesday, July 08, 2008
HP SOA Products Director Tim Hall on New Business Drivers and Efficiency Benefits From SOA
Transcript of BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe Conference in Las Vegas, Nevada the week of June 16, 2008.
Listen to the podcast here. Sponsor: Hewlett-Packard.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you're listening to a special BriefingsDirect podcast recorded live at the Hewlett-Packard Software Universe Conference in Las Vegas. We are here in the week of June 16, 2008. This sponsored HP Software Universe live podcast is distributed by BriefingsDirect Network.
We now welcome to the show Tim Hall. He is the director of HP's SOA Center products. Welcome to the show.
Tim Hall: Thanks, Dana.
Gardner: We are going to talk about, fittingly enough, service-oriented architecture (SOA), the products, the market, some of the underlying trends, both from the business viewpoint and technologies that are driving SOA adoption.
I suppose SOA is at somewhat of a crossroad. We have seen a lot of pilot and project-based adoption. People were expecting to see more holistic, deep and wide SOA methodologies brought into play, but there has been an awful lot on the plate of CIOs and architects these days.
They are now thinking about data-center transformation and next-generation adoption for virtualization, higher utilization, and lower costs. They are also dealing with some of the issues around energy and power. They are being asked to modernize legacy applications. There's an awful lot going on, but SOA can be an enabler and an aid to that. Why don't you tell us a little bit about how you see SOA moving into the mainstream?
Hall: From our perspective, we think SOA is an application-development philosophy, and that philosophy is really the backplane, or an enabler, of lots of different and interesting trends.
You mentioned that modernization is a key one. Lots of old mainframe programmers like myself are retiring, and we've got lots of customers now looking to migrate to more modern architectures -- Java platforms, Microsoft technologies -- and maybe replacing with updated packaged apps from Oracle or SAP; those are the two major package players these days. You know, both of those vendors are building custom software, and SOA is the way in which they are doing this.
It also is an enabler for things like Web 2.0 or mashups that are created from the information and the capabilities that are exposed through those services. SOA is, as I said, a philosophy about how we are building applications: we want them to be service-oriented, with services that can be consumed across the wide variety of composite applications that can be built. So, when you think about things like virtualization at the hardware level, SOA allows you to do that at the software level.
Other trends that are intersecting here include software-as-a-service (SaaS). They have service in the name. The question is how much of it you are consuming, and whether they are also exposing services that you can actually integrate with the things you have in house. All of these things are connected, but I think SOA is really the backplane that enables all of these pieces. Whether we talk about it as a first-class citizen, or just as the way in which the work is being done now, that's really the question of it coming into the mainstream.
Gardner: A lot of the discussion here at Software Universe has centered on the need to bring together what happens in design time with what happens on the operations side, in run time and in production. And it seems that one of the key assets of SOA methodology and adoption is the creation of a registry/repository, a really powerful information source for policies, SLAs, and use patterns.
When you start looking at what they are doing with the federated configuration management database (CMDB) on the ops side, and when you look at policy engines, you start to see how some of these federated data sources can be brought together to create more of that lifecycle approach. Tell us a little about the registry/repository, and how key it is for people in an IT role, not just for SOA, but perhaps for more.
Hall: I think that's a great topic, especially from HP's perspective. You know, there are lots of different information sources that you have in IT. There isn't just one, and if you think you are only going to have one, we think that's the wrong approach, and customers have played this out.
I think what you are getting at, to a certain degree, is the adoption of two different trends that are going on. One trend that's impacting the operations teams is the establishment of a configuration management database, and the adoption of the Information Technology Infrastructure Library (ITIL) and IT service management on top of that information. That's one trend.
By the way, with ITIL version 3, all of those processes are now oriented around the notion of service delivery, which fits very nicely with the notion of the application teams building these next-generation applications that are also service oriented.
And, you know what? Most enterprise architects and most operations folks don't normally get along. But, maybe with these two converging trends, we have an opportunity for these folks to understand each others' motivation. We are talking about the same kinds of terminology. Suddenly, there is this shared understanding that we might actually get some work done.
So, there are these different authoritative sources of information that we see being established, some within ops, some within app development, quality management repositories, enterprise directories with identities, and they are all related, all stitched together.
The point on the application side is that the more structured information we create and put around applications, the more automation you can drive through the entire lifecycle. This includes enriching information like what's in the CMDB, by providing federated access to all of these different information sources. So, from the SOA perspective, having a SOA-centric artifact repository as an authoritative source for those documents is absolutely critical.
The registry is yet another place to discover and point you to where those different authoritative sources actually live. So, in some respects, the registry can become the federation master, if you will, telling me: if I want to find the configuration management database, where is that? If I want to find the SOA artifact repository, where is that? It can point you in all those different directions.
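A minimal sketch of that "federation master" role -- the source kinds and endpoints below are hypothetical, not any particular HP product's API -- is that the registry does not hold the artifacts themselves; it only answers the question of where each authoritative source lives:

```python
# Hypothetical registry: maps each kind of authoritative source to its location.
REGISTRY = {
    "cmdb": "https://cmdb.example.com/api",
    "soa_artifact_repository": "https://soa-repo.example.com/artifacts",
    "quality_management": "https://qc.example.com",
    "enterprise_directory": "ldap://directory.example.com",
}

def locate(source_kind):
    """Return the endpoint of the authoritative source for this kind of artifact."""
    try:
        return REGISTRY[source_kind]
    except KeyError:
        raise LookupError(f"No authoritative source registered for '{source_kind}'")

# A consumer asks the registry where to go, then talks to that source directly.
print(locate("soa_artifact_repository"))
```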
Gardner: I suppose also, in aligning with SOA approaches and methodologies, the enterprise service bus (ESB) becomes a way in which some of these policies can be instantiated. A messaging bus is not only the way in which the information is made available -- there is federation, there is a relationship between these different repositories -- but it's also where the actual execution of that can happen.
I think the point we are trying to get to here is that SOA may have been given short shrift in terms of its role in cost efficiency and productivity, if you think of it only in terms of application services reuse and compositing. When you look at it in the context of the IT lifecycle, and the full opportunity to create much more efficiency in IT operations, it looks a little bit prettier.
Hall: Absolutely right, and I think if you look at what HP is assembling in terms of our software portfolio, there are some logical connections you can draw. First, as you are building more of these structured artifacts and linking them together across the lifecycle, what can we do in terms of things like automated deployment? And, if I've captured information about the environment that the service is going to deploy into, along with the run-time policies, then the associated policy enforcement points are going to be responsible for executing those policies, be they hardware or software based.
Clearly, we have a leg up in understanding all of those elements, bringing Opsware into the fold and looking at how we can take the whole stack soup to nuts and automate both the deployment, as well as enriching the information in CMDB. Those are sort of the conclusions that you can draw from all the different elements that we have in the portfolio.
I actually think it's a discussion you can have about application architecture in general. You can say, "What are the best practices that we are learning out of the SOA approach that we might want to apply to other types of applications that we are deploying? What other structured artifacts will we be creating to help us drive that kind of automation?"
Gardner: And, of course, the major trend that people are talking about and starting to move toward is virtualization. It's another layer of complexity, but if you've got those assets in place, the backplane ESB is doing management on an automated basis through policies and governance criteria that are already embedded in these data repositories. The whole notion of scaling virtualization for very dramatic cost savings becomes a bit less scary.
Hall: That's absolutely right, and, again, these trends are all connected. They support each other, and they can be composited and built on each other. I mean, virtualization is not a new topic. I was having a conversation with some folks yesterday about whether we shouldn't take the OSI seven-layer stack and talk about each layer of that stack. Put the word virtualization next to each layer -- network layer, application layer, operating system layer -- and then describe what it means there and what capabilities you are getting out of it.
I think what you are alluding to is that customers have some confusion about what they are virtualizing at which layer and what they get as a result of doing that. But I think it would be a very powerful discussion topic, or a discussion slide, to have with customers as they are trying to decide what the benefits are at each one of those layers.
Gardner: As we started out saying, there is an awful lot for IT departments to bite off and chew these days. A few years ago, I had customers come and say, "Okay, what do we do first to get ready for SOA?" The big thing to do is get your data act together. Get a data services layer, because it's data services that will probably be most important and beneficial to consume through your SOA infrastructure.
Well, now I am thinking that, at an even higher abstraction, you've got to get your whole SOA infrastructure and approach going, so that you can then be in a position to take advantage of this larger IT lifecycle.
Hall: I'll be a little controversial. We actually don't think that that's the right approach -- for SOA adoption, at least. After seven years of kind of playing in the space, we have seen most customers be successful, when the first thing they do is decompose their business, and not worry about the technology.
Actually, for most customers that we talk to, their first problem in terms of SOA adoption comes when they've driven it from the bottom up -- meaning, they have the technologists drive the decisions about data services or selecting an ESB without even understanding the requirements that should drive that kind of decision.
The most successful adoption that we have seen, and the highest number of benefits, is when they take a business-focused approach. Let's decompose the business. Do we really understand what it means in the IT world to be a service provider? When I say service provider, I mean in the classic sense of a telecom. Somebody once asked me, "If you are successful with SOA, what is my IT shop going to look like?" I said, "You are going to look a lot like a telecom provider," and they looked at me very puzzled. Then I said, "You are becoming the dial tone of the business by providing all of these services, and that means you have to be available 24x7."
Think about what that means to be a truly carrier-grade IT service provider and that's a catchphrase that I like to use to get that conversation going.
Gardner: Sure, and then to bring that into some of the newer hype-curve activity lately around cloud computing. What you are describing is what people are beginning to identify as a private cloud.
Hall: Yes, that's absolutely correct.
Gardner: Tell us a little bit about what you think a private cloud and SOA do together?
Hall: Cloud is the next new, new thing. There was a blog I was reading the other day that said SOA was the boring cousin of Web 2.0. Now I am thinking that, after seven years, I've been doing this for the boring cousin. Very interesting, but cloud is the next new thing that people are talking about.
How can I get these compute resources, and how can I get access to them from wherever I am in the world? I think there are some very interesting models being put together, such as Amazon's S3 model, and now I see businesses tapping into that and using it as the means for scaling up and scaling out their environment, without ever having to touch the hardware, infrastructure, or operating systems.
Gardner: And developers are using it to put their apps through their paces on a performance-testing basis before they ever put them in production. They try it on Amazon Web Services.
Hall: Absolutely, and performance validation, by the way, is one of those things for SOA, which I believe is absolutely critical, and yet, who is involved in that? Is it architects? Is it your quality management professionals?
Normally, there is a performance validation team within organizations that's absolutely world class at understanding how to do that kind of scale-up of individual apps. Now they are applying those means and methods to services. So, cloud is really one of those cool, new buzzwords that we are hearing about. Private cloud? Sure. I definitely see that it's an evolution of how we turn all of the assets in IT into services, and now we are saying hardware is a service.
Gardner: Okay, so what's interesting is the relationship between these trends and how it really starts to point to a larger goal for business transformation and IT service management, and the transformative implications of that. It seems that IT is becoming more of a fabric of a company, rather than a second thought or a supplier. "Supplier" doesn't really do IT justice anymore.
How do IT professionals in these IT departments begin to think of themselves, and actually recast their role, their position, and their culture, to take on perhaps a much larger role in these companies?
Hall: I think one of the messages you see from HP Software is that we are not talking about information technology anymore. You know, back in the day, it was data processing, right? We are now talking about business technology, and we are saying, "How do we optimize the outcomes of applying technology in the context of the business?" And our message is that IT is a strategic weapon. The folks that were in IT -- we are transforming them to be in BT now, and the more their companies are able to look at applying technology in new and unique ways, the more that is absolutely their strategic differentiator in the market.
Gardner: Well, great. I think we have covered quite a bit. All the pieces are not quite in place, but once people see the vision and they've got a stake in the ground, it really helps rally the troops and put together your requirements for how to get to where you want to be. So, we are going to thank Tim Hall, the director of HP's SOA Center products. We appreciate your time.
Hall: Thanks very much, Dana.
Gardner: This comes to you as a sponsored HP Software Universe live podcast recorded at the Venetian Resort in Las Vegas. Look for other podcasts from this HP event at www.hp.com, under "Software Universe Live Podcasts," as well as through the BriefingsDirect Network. I would like to thank our producers on today's show, Fred Bals and Kate Whalen, and also our sponsor, Hewlett-Packard.
I'm Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time for more in-depth podcasts on enterprise software infrastructure and strategies. Bye for now.
Listen to the podcast. Sponsor: Hewlett-Packard.
Transcript of BriefingsDirect podcast recorded at the Hewlett-Packard Software Universe Conference in Las Vegas. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Wednesday, June 11, 2008
Live TIBCO Panel Examines Role and Impact of Service Performance Management in Enterprise SOA Deployments
Transcript of BriefingsDirect podcast on service performance management recorded live at TUCON 2008 in San Francisco on April 30, 2008.
Listen to the podcast here. Sponsor: TIBCO Software.
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect. Today, a sponsored podcast discussion about service performance management in support of service-oriented architecture (SOA).
We are here live at the TUCON 2008 conference, TIBCO Software’s user event in San Francisco, to look into the issues around SOA integrity, particularly in the context of widespread enterprise use, the myriad demands that are going to be put on services, and how infrastructure is going to need to adapt and perform in a way that probably has not been the case for infrastructure up until now.
Helping us to weed through service performance management and how it relates to SOA governance and other issues of total architecture, we are joined by a panel of industry analysts, experts and representatives from TIBCO.
Let's start by introducing our panel. We are joined by Joe McKendrick, an independent analyst and SOA blogger. Welcome to the show, Joe.
Joe McKendrick: Hi, Dana, happy to be here.
Gardner: We are also joined by Sandy Rogers, the program director for SOA, Web services and integration at IDC. Welcome, Sandy.
Sandy Rogers: Thanks, Dana.
Gardner: We are also joined by Anthony Abbattista, the vice president of enterprise technology strategy and planning for Allstate Insurance Co. Welcome to the show, Anthony.
Anthony Abbattista: It's good to be here. Thanks.
Gardner: And also joining us, Rourke McNamara, director of product marketing for TIBCO Software. Welcome, Rourke.
Rourke McNamara: Thank you, Dana.
Gardner: We saw and listened to some presentations this morning at the TIBCO conference. One of the things that struck me is this notion of pulling together not only what had been in disparate technology silos, but really joining what had been in functional and organizational silos. Particularly, I mean the design and process-creation phases that we have heard so much about with SOA.
And then how that relates to the secondary aspect of functional activities, which is the operations -- keeping the trains running on time, and making sure that service-level agreements (SLAs) are met. That means making sure that users get very fine-grained services coming through uninterrupted in aggregated applications -- without hiccups, without slowdowns.
These processes are moving to mission-critical, and so there needs to be more opportunity for these two aspects of SOA, design and operations, to work together. Performance management of services gives more insight into what takes place beneath those services, and is, therefore, becoming essential.
First, let's take a look at this landscape of what's going on in SOA, and why, as we move toward enterprise-wide deployment, service performance management becomes so important.
Sandy at IDC, what do you see in terms of enterprises that are early adopters of SOA? How concerned are they that, when they throw the switch, so to speak, with these composite business processes -- made up of services from a variety of different sources with a variety of different support infrastructure -- that they really feel confident that this is going to hold up in real world situations?
Rogers: What I find is interesting is that even if you have one service that you have deployed, you need to have as much information as possible around how it is being used and how the trending is happening regarding the up-tick in the consumption of the service across different applications, across different processes.
So, most organizations need to present an environment where individuals and stakeholders in the company feel more comfortable in relying on services and also allowing others to potentially handle the operational dynamics of those services, once they are in production.
They need a lot more visibility and an understanding of the strains that are happening on the system, and they need to really build up a level of trust. Once they can add to the number of individuals who have that visibility, that trust starts to develop, more reuse starts to happen, and it starts to take off.
Eventually they get to a stage, where they are concerned about the scalability and how far they can push the limits of these deployments. It could be the way that they’ve designed it architecturally, or it could be just that they are getting familiar with the new technologies to support SOA infrastructure.
Gardner: It seems that at the very time when SOA is putting more emphasis on a diversified portfolio of services -- repurposing those services, extending visibility -- that, at the same time, IT as an organization is being tasked with behaving more maturely as a business within a business.
The Information Technology Infrastructure Library (ITIL) and other compliance standards are being placed on IT departments, so they behave more like we would expect a human resources department to behave. Let's go to Joe McKendrick.
Joe, now that we are looking at the need for IT to perform like a mature business, is there a risk here of finger-pointing -- that when something goes wrong, and so many constituents are involved with the support of a service, no one will really be able to take responsibility?
McKendrick: Yes, that’s been a problem all along. There is always a lot of finger-pointing, and IT tends to get blamed for everything. Sandy made an excellent point that the foundation of SOA is trust.
The business units are being asked to sign on to a SOA to provide support -- and perhaps, for some of them, even to provide funding -- and they are looking for the services that they will consume to be scalable and to be available, perhaps 24x7. And if this trust is not there, the whole foundation of the SOA breaks down, and IT will get the blame again.
It very much hinges on IT and performance management. We're actually talking about two levels here, governance and performance management. They are integrated, and they need each other. But governance deals more with how the business addresses SOA. Performance management is an IT challenge and is rightly put into the IT "sphere of influence."
Gardner: I was struck, when I heard Anthony’s presentation this morning, by your example of what things used to be like, where you would get 40 people on a conference call when something went wrong, and you would be yelling out URLs in order to find the right server to either shut down or replace.
What's the issue from your perspective now on solving this issue when things go wrong? Is this something that we can rely on people to solve, or do we need to move more toward a systems-based approach?
Abbattista: First, I'd like to wind through an earlier question you asked. When we went to SOA, when we put in our enterprise service bus (ESB), and when we chose TIBCO for our bus, a lot of people thought of SOA as, "Well, I am just going to construct some WSDL and call some SOAP or HTTP, and that’s SOA."
But the first thing we did is talk through the governance part of why we want to "get on the toll road" and "pay a toll for the bus," and really that became the consistency in measurement and governance, and lets us operate the things once we have created them.
So the first thing we had to do was get through the whole idea of that. It was worth it, and it wasn’t a matter of whether the bus would work or not. For the first year-and-a-half that we put our ESB in and we started to market services on it, we would hear the words, "TIBCO is down."
It didn't matter whether the back-end service was down. It didn't matter whether the mainframe was broken; they would say, "TIBCO is down." We finally started to get to the root cause, saying, "No, so-and-so service is down." The basis for us having good measurement of performance is helping to "pay the toll" of getting on the bus and actually having measurement points that are well understood.
I also don't agree exactly that governance is a business-unit thing. Governance for us is also a lot about the SLAs around the services -- having good expectations up front about how they will behave and how they will be called. That way, we have a benchmark or baseline to compare ourselves to. All of a sudden, if we get 100,000 calls a day to something that is designed for, or is expected to have, 1,000 -- we at least understand what to be looking for.
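Abbattista's 1,000-versus-100,000 example amounts to a baseline check at the bus's measurement points. Here is a small sketch -- the service names, baselines, and tolerance factor are invented, and this is not Allstate's or TIBCO's actual tooling -- of flagging services whose observed call volume overruns the consumption the SLA was written for:

```python
# Hypothetical SLA baselines: expected calls per day agreed up front with consumers.
SLA_BASELINE = {
    "document-store": 1_000,
    "policy-lookup": 50_000,
}

# Observed traffic collected at the bus's measurement points.
OBSERVED = {
    "document-store": 100_000,
    "policy-lookup": 42_000,
}

def check_consumption(tolerance=1.5):
    """Return services whose traffic exceeds the SLA baseline by more than `tolerance`x."""
    alerts = []
    for service, expected in SLA_BASELINE.items():
        actual = OBSERVED.get(service, 0)
        if actual > expected * tolerance:
            alerts.append(f"{service}: {actual} calls/day vs. {expected} expected")
    return alerts

for alert in check_consumption():
    print("SLA consumption alert:", alert)
```

Because the expectation was recorded up front, the alert points at a specific service and a specific agreement, rather than the generic "TIBCO is down."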
Gardner: Let's provide a level-set for our listeners. You are representing Allstate, which is a very large organization, with 17 million customers, $156 billion in assets. Give us a sense of the scale that we are talking about in terms of your IT organization?
Abbattista: Our claims organization, for example, has an IT shop of about 400 people who are employees, and that's not counting offshore or other people supporting them. Each of our business units is a substantial IT shop in and of itself, each with 500 to 1,000 people.
Then, what we choose to federate becomes an issue, because they need to talk to each other. They need to talk to themselves. They need to talk to the outside world. So what we layer then in my area is an infrastructure of components on how to do those tasks.
The massiveness of it is in how you measure and monitor that to get end-to-end composite services that we really can monitor and supply a good customer experience from. The massiveness is amazing. We have about 5,000 servers -- UNIX, Windows, mainframes, AS400s -- we have them all at this point.
Gardner: How many services do you have that have to "pay their toll" on the service bus, so to speak?
Abbattista: About 750.
Gardner: Wow! That’s pretty good.
Abbattista: We actually front our document management services and collapse all that into Oracle, but we fronted that with TIBCO. We did that so that we would have the measurement from day one, and it’s worked amazingly well.
People argued it would be just as easy to shove the document into the database and make an HTTP-SOAP call, but this governed ESB approach has paid off a thousand times over, because we now predictively know when something is going awry.
Gardner: All right, now let's go to Rourke. We understand that enterprises are hesitant about going toward SOA on a holistic basis, if they haven't got performance backstops in place. We are a little bit wary of finger-pointing, because there is such a complex stew of components and services that it is very difficult after the fact to point and say who is responsible.
And, we're dealing with organizations like Allstate, which have massive size and scale, with 750 services. What do people need to be considering, as we move into yet more complexity with virtualization, cloud computing, and utility grids? Give us a bit of a level-set on what's important to consider when moving toward a solution before the fact.
McNamara: SOA, virtualization, and governance -- all of these technologies have pluses and minuses. And, on the whole, when you finish computing out the equation, you are definitely on the plus side, you are definitely on the positive side.
But, you need to make sure that, as you move from the older ways of doing things -- from the siloed applications, the siloed business unit way of doing things -- to the SOA, services-based way of doing things, you don’t ignore the new complexities you are introducing.
Don’t ignore the new problems that you are introducing. Have a strategy in place to mitigate those issues. Make sure you address that, so that you really do get the advantage, the benefits of SOA.
What I mean by that is with SOA you are reusing services. You are making services available, so that that functionality, that code, doesn’t need to be rewritten time and time again. In doing so you reduce the amount of work, you reduce the cost of building new applications, of building new functionality for your business organization.
You increase agility, because you have reduced the amount of time it takes to build new functionality for your business organization. But, in so doing, you have taken what was one large application, or three large applications, and you have broken them down into dozens or tens of separate smaller units that all need to intercommunicate, play nice with each other, and talk the same language.
Even once you have that in production, you now have a greater possibility for finger-pointing, because, if the business functionality goes down, you can't simply say that the application you just put up is down.
The big question now is what part of that application is down? Whose service is it? Your service, or someone else’s service? Is it the actual servers that support that? Is it the infrastructure that supports that? If you are using virtualization technology, is it the hardware that’s down, or is it the virtualization layer? Is it the software that runs on top of that?
You have this added complexity, and you need to make sure that doesn’t prevent you from seeing the real benefit of doing SOA.
Gardner: So, after a failure -- in doing the forensics and root-cause analysis, and in putting more agents and agentless systems in place -- if it's all telling you what's wrong only after the fact, it's probably too late.
McNamara: Absolutely.
Gardner: How do we get to this vision of proactive, anticipatory systems awareness via service performance management? Let me first take this to Sandy. How important is it for us to get to this sense that something isn't quite right, in advance of it failing?
Rogers: Obviously, there are different use cases and different companies that are really interested in that dynamic, autonomic type of environment, where you can adjust to the demands of the environment, but we are also becoming much more Web-based.
What we are seeing is that, as services are exposed externally to customers, partners, and other systems, it affects the ability to fail-over, to have redundant services deployed out, to be able to track the trends, and be able to plan, going forward, what needs to be supported in the infrastructure, and to even go back to issues of funding. How are you going to prove what's being used by whom to understand what's happening?
So, first, yes, it is visibility. But, from there, it has to be about receiving the information as it is happening, and being able to adjust the behavior of the services and the behavior of the infrastructure that is supporting them. That starts to become very important. Different services, and the infrastructure supporting them, carry different levels of importance and criticality right now.
But, if we want to move toward being able to deploy anywhere and leverage virtualization technologies, we need to break away from static configuration -- of the hardware, of the databases, of where all this is being stored now -- and have more of that dynamic resourcing. To leverage services that are deployed external to an organization, you need to have more real-time communication.
Gardner: So, the proposition remains, how do you do that? It’s clear that you want to get out in front of these problems, but with so many interdependencies, the large scale in number of services, different environments, probably inside and outside the organization, it raises questions. How do we move up in abstraction toward understanding the context of an entire business process, in order to go back and look for the signals that will tell us when something is approaching a breakdown, or when we need to provision more hardware and software resources?
Let me take this to you, Anthony. Where do you think that abstraction needs to be in order to forecast appropriately issues of SOA integrity?
Abbattista: I'll go back to the point on having some expectations or benchmarks of how the service should run when it’s designed and deployed in the first place. Then, you can understand if your baseline is correct and then, over time, you can look for fragmented behavior. But, I do think you need some level of end-to-end view of the process and of who is the customer on the end.
Ultimately, where these things show up en masse is at the end-points, and typically that’s in the consumer space, as we are frustrating an employee or someone on a website with a bad client experience. Those are unforgivable.
So, starting with the customer at the end-point of that business process and looking at some of those interactions, is part and parcel of deploying the service in the first place. If you don’t do that, you will be chasing your tail for the rest of your life in operations, until you go back and do that mapping. So I think it pays to do it upfront.
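A toy sketch of that end-to-end view -- the hop names, timings, and latency budget below are invented, standing in for real instrumentation -- times each hop of a composite call so a poor customer experience can be traced back to the hop that consumed the budget:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Attributes a slow end-to-end experience to the slowest hop in a composite call.
    public class EndToEndTrace {

        public static void main(String[] args) {
            long budgetMillis = 2_000;                 // what the customer at the end will tolerate
            Map<String, Long> hopMillis = new LinkedHashMap<>();

            // Hypothetical hops in a claims-status composite; in practice these
            // timings would come from measurement points on the bus.
            hopMillis.put("portal", 120L);
            hopMillis.put("esb-mediation", 40L);
            hopMillis.put("claims-service", 1_650L);
            hopMillis.put("mainframe-adapter", 300L);

            long total = hopMillis.values().stream().mapToLong(Long::longValue).sum();
            if (total > budgetMillis) {
                String worst = hopMillis.entrySet().stream()
                        .max(Map.Entry.comparingByValue())
                        .map(Map.Entry::getKey)
                        .orElse("unknown");
                System.out.printf("End-to-end took %d ms against a budget of %d ms; slowest hop: %s%n",
                        total, budgetMillis, worst);
            }
        }
    }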
Gardner: You mentioned in your presentation that the "Walls must come down" between IT operations and development-deployment-requirements-test functions. It sounds like you're also saying it needs to go from end-to-end, beyond just that wall, but also across the entire event-processing landscape.
Abbattista: In that respect, I view our function in running the applications and supplying the applications as a utility. It's our job to point back to the groups that deploy the stuff. If I let them deploy junk, I am as complicit in that junk being delivered as anybody else. That’s a responsibility we take seriously. If you're going to put it in the shop and expect us to run it, I won't take junk.
Gardner: Right. So there is the adage of, "Garbage in, garbage out." Now, if garbage appears anywhere in the context of a complex process, it's garbage out. That’s even more difficult.
Let's go to Joe McKendrick. Tell us about the concept of complex event processing (CEP). How do you get any handle on a process? Do you look for the description of the process from a modeling perspective, through what's been done on the ESB, all of the above?
McKendrick: Definitely all of the above. CEP is something that’s just coming into the SOA realm. It is said that that’s the next phase for SOA. As was pointed out this morning, real time is not enough for a business. Business needs to be able to react and predict.
Rourke and I were talking about that a little bit earlier. You need to be able to predict what's going to happen, not only in the business, but in the systems. TIBCO is making some progress in this area in terms of being able to predict when the system may go down or when there will be spikes in demand. Predictive analytics, which is a subset of business intelligence (BI), is now moving into the systems management space.
Gardner: We're actually moving above the systems management space by an abstraction level or two. Let's go back to Rourke. You had a couple of product enhancement announcements today here at the TIBCO conference. You are getting out in front of service performance management, and your interest is in accomplishing some of the things we have been describing -- providing what the market is demanding in order for SOA to be trusted.
Tell us about CEP and why that is an important part of this predictive solution.
McNamara: One of our customers said it best last night over dinner, when I introduced the concept of the product I am going to mention in just a second. They saw immediately what problem it solved for them.
They said that their biggest fear is that their SOA initiative will be a victim of its own success. A service will be reused so many times so rapidly that the hardware it's deployed on, the manner in which it was deployed, won't be able to handle the load. That service, which is now used in a dozen different business applications, or exposed in a dozen different business applications, will go down or will degrade in its performance level.
That could make SOA a victim of its own success. They will have successfully sold the service, had it reused over and over and over and over again. But, then, because of that reuse, because they were successful in achieving the SOA dream, they now are going to suffer. All that business users will see from that is that "SOA is bad," it makes my applications more fragile, it makes my applications slow down because so many people are using the same stuff.
Gardner: What is it about CEP that gives us more visibility at the right abstraction, so that we can predict among all of these different complex components and assets where a problem is developing?
McNamara: The key is that we can't simply wait for the problem to develop or the problem to happen, because it will happen very quickly. We won't have a week's warning, a month's warning, or even necessarily a few hours' warning. And we won't understand, when we deploy that service, all the places or all the manners in which it will be used. So, we need to be able to predict these problems before they occur and do something to prevent those problems from occurring.
TIBCO is taking our CEP technology, the business events technology that we have, and applying it to our own internal software, our infrastructure, the same way our customers apply it to their business problems.
We are using business events to monitor what's going on with service load and performance -- what the load profiles look like in a given organization -- allowing it to understand some of the programs and marketing efforts that are going on within that company. Then, when it sees that a service's load is approaching a dangerous level -- when it sees, based on the events that are occurring, that the service will become overloaded and will violate its SLAs -- it's able to tell other parts of the infrastructure to take action to prevent that problem.
Gardner: Let me see if I understand this. This sounds like a schematic about a business process and, by reverse engineering from that process down to the constituent ingredients to support it, you can predict where the loads will be building or will become erratic. Therefore, you can also detect what's going on within that system, put the two together, and come up with a heads-up?
McNamara: That’s exactly right. You need to understand what the interdependencies are between your services and what the load characteristics of the different component parts in that dependency graph in that environment are. Then, based on that, you need to understand what sort of events in your business or in your IT infrastructure will cause performance problems or overload conditions.
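A rough sketch of that prediction step -- the dependency graph, capacities, and forecast numbers below are invented, not the behavior of the actual CEP engine -- propagates expected front-end load through the graph and warns before a shared service hits its limit:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Projects forecast front-end load onto downstream services via the
    // dependency graph, and warns when a shared service would exceed capacity.
    public class LoadForecast {

        // Hypothetical graph: each consuming application fans out to the services it calls.
        static final Map<String, List<String>> CALLS = Map.of(
                "quote-app", List.of("rating-service", "customer-lookup"),
                "claims-app", List.of("customer-lookup", "document-lookup"));

        // Requests per second each service can sustain as currently deployed.
        static final Map<String, Integer> CAPACITY = Map.of(
                "rating-service", 200,
                "customer-lookup", 150,
                "document-lookup", 100);

        static void forecast(Map<String, Integer> expectedFrontEndLoad) {
            // Accumulate the load each downstream service would see if the forecast holds.
            Map<String, Integer> projected = new HashMap<>();
            expectedFrontEndLoad.forEach((app, load) ->
                    CALLS.getOrDefault(app, List.of()).forEach(svc ->
                            projected.merge(svc, load, Integer::sum)));

            projected.forEach((svc, reqPerSec) -> {
                int capacity = CAPACITY.getOrDefault(svc, Integer.MAX_VALUE);
                if (reqPerSec > capacity) {
                    System.out.printf("%s projected at %d req/s against a capacity of %d -- act before it degrades%n",
                            svc, reqPerSec, capacity);
                }
            });
        }

        public static void main(String[] args) {
            // Hypothetical forecast, say from an event signaling a marketing campaign.
            forecast(Map.of("quote-app", 120, "claims-app", 60));
        }
    }

Here the shared customer-lookup service would be flagged at a projected 180 requests per second, even though neither consuming application looks alarming on its own.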
Gardner: Let's go back to Sandy. You spoke earlier about how to automate toward these goals. It sounds like it's going to be a bit of a journey to get to full automation. On the other hand, having 40 people on a conference call to try to manually bear-wrestle these problems down doesn't work either. How do we find a balance between too much automation, automation that can't be attained, and purely manual, after-the-fact approaches?
Rogers: Everyone has to walk before they run with any type of new technology implementation. But, we are finding that most organizations are keying in on those services that are most important, making sure that they are instrumented appropriately with the technologies that support management, and being able to define what those thresholds are.
Being able to correlate those thresholds to real business needs and business value -- that's one of the interesting things about working at a service level. We can start to associate which services are most relevant and where they are going to have the most impact.
We can make sure that information is contained either in the payload or provided from the service itself. So, you have that insight. I think organizations are starting to realize that, in order to prove the value of the services, and to prove the value of having this level of coordination around management, they need to be able to make that association.
From an eventing point of view, what's interesting is that there is a lot of parallel processing going on in this environment. Rather than wait until something happens in some linear, straight-through process, we're seeing the ability to watch and correlate some of those events vis-à-vis the thresholds, understand which thresholds are the most important, and start automating how to define the behavior -- how the system is going to react to those conditions -- and do it from a cost-benefit perspective going forward.
Gardner: Okay, so companies can take this approach, use a moderate pace, learn as they go, and use complex event processing to offer insights into the context of what’s going on. But, if human nature is any indication, people usually react to whatever the rules are about their job, and for IT this is going to be the view from the SLAs.
It strikes me that what's going to happen is a lot of these organizations are going to reverse-engineer from the SLA, and that the rules and the models in the SLA become extremely important. Am I going out on a limb here, Anthony, or do you think it will pan out that the SLAs will be the rules that service performance management then needs to line up around?
Abbattista: That's right. Again, it's back to what do you expect, and are you living up to it? You talk about failure not coming from the SOA itself, but we could have a case where a service got deployed, people learned about it, and, before you know it, we are taking 100,000 hits a day on a service that nobody ever gave any design thought to.
I would have to reach in there and get some agent information once in a while. And all of a sudden, the supplier of the service -- who did us a favor by putting this on the bus instead of doing a point-to-point interface -- calls up and says, "Help!"
Someone might publish this thing and it had no modeling, because they thought it was some low-volume thing and it wasn’t important. All of a sudden, it becomes important because everybody found it. So, as we get to composite services, SOA performance is about the service expectation.
Gardner: And the governance?
Abbattista: The governance, and do you let them do it? Do you have governors? Do you have a cost model that burdens the caller, rather than the supplier? These are real questions we'll get into, and they are why I was talking about breaking down the walls. If it's truly a valuable service, then it's my job to figure out and pay for upgrades -- or to help you redesign it.
We take very much an advocate approach to, "Okay, if you come on the bus, we will help you with being successful." And the SLA is the baseline for that. But it also sets up that, "Hey, did you do a good enough job? And what if you are wildly successful?"
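One minimal sketch of such a governor -- the caller names and limits here are hypothetical, and nothing below is vendor-specific -- throttles each consumer against the call rate it signed up for in its SLA:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    // A per-caller governor: each consumer gets the call rate it agreed to, so an
    // unexpectedly popular consumer bears the cost of its own load.
    public class CallerGovernor {

        private final Map<String, Long> allowedPerMinute;
        private final Map<String, AtomicLong> usedThisMinute = new ConcurrentHashMap<>();

        CallerGovernor(Map<String, Long> allowedPerMinute) {
            this.allowedPerMinute = allowedPerMinute;
        }

        // Returns true if the call may proceed; false means the caller is over its agreed rate.
        boolean tryAcquire(String caller) {
            long limit = allowedPerMinute.getOrDefault(caller, 0L);
            long used = usedThisMinute
                    .computeIfAbsent(caller, c -> new AtomicLong())
                    .incrementAndGet();
            return used <= limit;
        }

        // Would be invoked by a scheduler at the top of every minute.
        void resetWindow() {
            usedThisMinute.clear();
        }

        public static void main(String[] args) {
            CallerGovernor governor = new CallerGovernor(Map.of("quote-app", 2L));
            for (int i = 0; i < 3; i++) {
                System.out.println("quote-app allowed: " + governor.tryAcquire("quote-app"));
            }
        }
    }

Whether the right response to that third call is to reject it, queue it, or bill the caller for an upgrade is exactly the cost-model question being raised here.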
Gardner: Right. Let's throw this back to TIBCO. There's clearly a need in the market for a full lifecycle approach, feedback loops, many moving parts. What is it that you can do from a product perspective that helps get to that level of automation? That, in a sense, fills in the cracks about whom and what performs some of these necessary communications between the operations side and those associated with the ongoing requirements?
McNamara: Taking a step back, TIBCO offers a single user interface from the business analyst all the way through to the operational administrators who are running our applications. The idea is that, when you sit down to build out your services, when you sit down to build out your business processes, you use one tool to define what the business processes look like, what the touch points are between folks. Then, that diagram gets handed off from the business analyst to the implementer, who sits down and actually builds the services or builds the business process management (BPM) process that meets those requirements.
There is a direct link between the two. There is a round-tripping built into that tool, largely because it's a single data model and a single user interface with different views for people with different roles in your enterprise. That’s one major thing we do to help facilitate that communication, and that’s part of what we call the TIBCO ONE initiative. The product in question is the TIBCO Business Studio product, which forms that single user interface.
Gardner: And you’ve got hooks in a lot of the other parts of the SOA infrastructure for service enablement and delivery. How do you pull these parts together in a concerted effort?
McNamara: The other side of things is that, even once you've built things out and deployed them to production, you need to make sure you can keep track of exactly what's going on, as a number of the folks on this panel have said. Ideally, you want to identify early on, as Sandy and Anthony said, which services are important to your enterprise and which services will have heavy load.
Unfortunately, you can't always do that. Sometimes a little service, as Anthony said, that you think is just helpful turns out to be used in 60 percent of the applications you are deploying. All of a sudden, you've got an issue.
You need to understand what the usage characteristics are on your services, not just the designed usage characteristics on your services. We’ve embedded both policy and performance management capabilities in our underlying service infrastructure. All the TIBCO ActiveMatrix products, all of our SOA enablement products, will transparently monitor for performance and usage of the services deployed in that environment.
Anything that you build in TIBCO ActiveMatrix BusinessWorks, ActiveMatrix Service Bus, ActiveMatrix Service Grid, and so on, is automatically monitored. And, you can automatically do some things around policy and access and control and rules.
So, even if you build that little service and you don't think it's important, and you don't want to go to the extra trouble to build some governance into it, it's there. It's already been embedded in that infrastructure. When you need it, you can just turn it on and make use of it, and you will automatically have some information about how people are using it, with a fairly nice visual dashboard.
The key here is not just the ability to see some numbers in a report, because people miss that. You can have a report on, as Anthony said, more than 750 services running. If you are going through the performance numbers on each of those services on a regular basis, things get lost when it's just numbers. You need to have a very good visualization tool, so you can see in "living color" what's going on with those services and how that relates to the SLAs and the rules you've set -- the expectations you've set for those services.
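As a small illustration of tying the numbers back to the SLAs -- the services, latencies, and limits below are hypothetical -- a watchlist can surface only the services already close to their latency budget, rather than a wall of raw figures:

    import java.util.Map;

    // Reduces per-service numbers to the handful worth looking at: services whose
    // 95th-percentile latency is already near the limit set in their SLA.
    public class SlaWatchlist {

        record ServiceStats(double p95Millis, double slaMillis) {}

        static void printWatchlist(Map<String, ServiceStats> stats) {
            stats.forEach((service, s) -> {
                double usedFraction = s.p95Millis() / s.slaMillis();
                if (usedFraction >= 0.8) {     // within 20 percent of the limit: warn early
                    System.out.printf("%s p95=%.0f ms against an SLA of %.0f ms (%.0f%% of budget)%n",
                            service, s.p95Millis(), s.slaMillis(), usedFraction * 100);
                }
            });
        }

        public static void main(String[] args) {
            // Hypothetical figures for three of the several hundred deployed services.
            printWatchlist(Map.of(
                    "document-lookup", new ServiceStats(450, 500),
                    "policy-quote", new ServiceStats(120, 800),
                    "customer-lookup", new ServiceStats(700, 750)));
        }
    }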
Gardner: All right, let's go back to Allstate. Anthony, you've heard the announcements today, you've understood this vision, and you understand the need very well. Do you think that we are getting very close to realizing more of an automated approach to service performance management in an SOA environment?
Abbattista: Yes, we are getting closer and making rapid strides. We need to be careful though. We are being careful to manage the service deployment, the service bus grid, and the parts about how to operate it. What makes me a little nervous or restless is the idea that we start taking all that back into the system parameters and the Java environments and Oracle databases, and that sort of thing. I would hate to see us not solve this first.
I really don’t think we’re at a stage where I want to automatically be adjusting heap sizes in Java virtual machines, or Oracle database parameters, which could be a next logical extension. I did see a little twinkle in people's eyes today, when they looked at products like the BMC Suite and Matrix. I don’t know that I want to have system programmer types around, trying to debug the debugging environment. I think it could become very complicated, very quickly.
Gardner: So we need to keep this at that higher abstraction in order to appreciate the whole and not get down into the weeds?
Abbattista: That’s my belief. I would say that if this service is not performing, then maybe we get the three people on the phone, the database administrator, the platform person, and the network person -- and we take a look at it. But I don’t think we should drill too far into that, until we solve the other layer.
Gardner: I suppose the good news and bad news about all of this is that the metrics for success or failure will be quite evident. You are not going to be able to cover this up across a service-support environment and the business processes that those contribute to, if it doesn’t work. Any failures are going to be readily apparent, not just to a systems administrator, but also to the entire organization that’s affected.
Joe, let's go to you on this whole notion of metrics of success. We have seen some caution, but we also see great promise around SOA. If we got into an economic environment where the pressure becomes higher for better productivity -- of doing more with less -- it's likely we are going to see more companies look to virtualization, outsourced services, software as a service. When do you think the switch on wider SOA use will get thrown, and to what degree does service performance management contribute to that?
McKendrick: Wow, that’s the $64-billion question. It's interesting, I was speaking with the enterprise architect for a major distribution company a little bit earlier. She pointed out to me that, when they started out their service enablement years ago, even before Web services came on the scene and evolved over the past 10 years, they built their infrastructure to be service-enabled from the get-go.
There was no effort to identify what could be service-enabled, build a service around it, and then try to get acceptance of it. And I asked her, "Well, what do you consider to be success in terms of adoption of the SOA, and in terms of reuse -- and do you even measure reuse success?"
Basically, to that company, if a service gets reused, fine. If it doesn’t get reused at all, that’s fine too. It doesn’t matter. The reason I'm bringing that up is because reuse is often brought up as the ultimate metric for a SOA success, as the most tangible metric, I should say. But, I think the best approach is to design applications or pieces of applications from the initial start to be service-enabled and employing the latest standards.
Gardner: Okay, so the risks are high, the rewards are high. It sounds like we are getting closer to a less manual, more automated approach, something that has visibility and hooks up and down, deep and wide. Let's wrap up with some last thoughts on this subject.
Sandy, if you are a CIO, a decision maker in the enterprise, and you are listening to this, what do you think that you want to hear that’s going to make you confident, given that you’ve already made a lot of investments in services-enablement? You have to recognize that this is the way for the future, but what are you going to want to put in place in order to start protecting yourself when it comes to your performance management?
Rogers: What we are seeing with IT executives today is a real interest in leveraging what you have, of being able to have speed for deployment, not having to worry about all of the issues, and to have people on board that understand all of the technical dynamics of how everything needs to be implemented from an infrastructure point of view.
So, they need to be able to support fast time to market, and not worry about throwing something out there. When you are deploying it, you have to step back and make sure all of the resources that you need are lined up to make that happen. You want to have an automated way to handle deployment, to handle governance, to handle all of these different issues.
There is also the self-service nature that’s starting to happen -- the ability to create services and allow anyone in the enterprise to be able to get at the information they need as quickly as possible, not have to have a whole army of developers out there. That means you need to feel comfortable that you are creating an infrastructure that could be consumed by multiple parties.
Setting up that infrastructure is really going to save cost. It's going to save time to market, and you need that level of assurance, so that you don't need to babysit every single service. There is also an issue of being able to outsource to different parties. You want to be able to leverage that, cost-effectively.
You need to set up an infrastructure, all the processes and rules that everyone needs to follow. And by doing that, you can now leverage whatever resources you want to develop and create what's necessary, and not have to worry about everyone falling in line and having their own infrastructure and having all of that reference architecture put together at each different resource.
It’s really that whole concept of creating a centralized type of platform and a framework to consume all these services. It’s going to be very, very important going forward. Everyone is talking about the issues of the economy, and it’s really the trade-offs of what do you need to do in order to move forward and think about things in more of a total cost of ownership (TCO) manner versus that of direct return on investment (ROI) -- that immediate cost-per-service type of measurement.
Gardner: It sounds like we are describing what could be thought of as insurance. You’ve already gone on the journey of SOA. It’s like going on a plane ride. Are you going to spend the extra few dollars and get insurance? And wouldn't you want to do that before you get into the plane, rather than afterward? Is that how you look at this? Is service performance management insurance for SOA? I am throwing that out to Anthony at Allstate.
Abbattista: It’s interesting to think of it as insurance. I think it’s a necessary operational device, for lack of better words.
Gardner: Service performance management -- not an option?
Abbattista: I don't think it's an option, because how much it will hurt if you fall down has been proven over and over again. As the guy who has to run an SOA now -- that's our insurance -- it's not an option not to do it.
Gardner: Last words from you, Rourke? Do you view this as an insurance policy? I guess you have the choice of different insurers, right?
McNamara: I do. I actually do look at service performance management as insurance -- but along the lines of medical insurance. Anthony said people fall down and people get hurt. You want to have medical insurance. It shouldn't be something that is optional. It shouldn't be something you consider optional.
It’s something that you need to have, and something that people should look at from the beginning when they go on this SOA journey. But it is insurance, Dana. That’s exactly what it does. It prevents you from running into problems. You could theoretically go down this SOA path, build out your services, deploy them, and just get lucky. Nothing will ever happen. But how many go through life without ever needing to see a doctor?
Gardner: Okay, now we are going to take some questions from the audience.
Tony Baer: This is Tony Baer with OnStrategies. I want to seize on something that Anthony Abbattista from Allstate had mentioned before, which is that you hope that service performance management doesn’t degrade into getting down to "Java heap sizes." I surely don’t blame you on that one, but what I am wondering is, at what point does this become an IT service management issue?
Abbattista: Because we have gathered that responsibility together, I guess it all falls under one roof in our particular organization. Where I would think it differs is with external services. One thing we are doing is measuring some of our external providers outside the organization. I guess it's sort of the same phone call. You are calling yourself or you are calling the person who is responsible and holding him accountable. So, I don't know that it changes much.
McNamara: I would like to add something to that. With something like a Tivoli or a BMC solution, something like a business service management technology, your operational administrators are monitoring your infrastructure.
They are monitoring the application at the application layer, and they understand, based on those things, when something is wrong. The problem is that's the wrong level of granularity to automatically fix problems. And it's the wrong level of granularity to know where to point the finger, to know whom to call to resolve the problem.
It’s right, if what's wrong is a piece of your infrastructure or an entire application. But if it’s a service that’s causing the problem, you need to understand which service -- and those products and that sort of technology won’t do that for you. So, the level of granularity required is at the service level. That’s really where you need to look.
Rogers: What I find is that it’s inevitable that we are going to go down that path, but standards between the systems that do IT management traditionally and this level of detail really haven’t been fleshed out. Most organizations are looking for a single, unified type of dashboard on some of the key indicators. They might want to have that for the operations team that has traditionally run IT service management.
A lot of the initiatives around ITIL Version 3.0 are starting to get some of those teams thinking in terms of how to associate the business requirements for how services are being supported by the infrastructure, and how they are supported by the utility of the team itself. But, we're a long way away from having everything all lined up, and then having it automatically amend itself. People are very nervous about relinquishing control to an automated system.
So, it is going to be step-by-step, and the first step is getting that familiarity, getting those integrations starting to happen, and then starting to let loose. What's interesting is in some of the areas of virtualization technologies, where you might have some level of management that's abstracted from the physical infrastructure, and then you have this level of abstracted management of services and how they come together. That hasn't really been defined in the industry, but down the road -- two, three, four, five years from now -- I think you will be seeing a lot more around that.
McKendrick: Let me add that we're still in the very early stages of SOA. In fact, a lot of companies out there think they have SOA, when they actually have just a bunch of Web services, JBoss architecture, and point-to-point types of interfaces and implementations. A lot of companies are just starting to get their arms around exactly what SOA is and what it isn't.
Gardner: Very good. We have been discussing the issues around service performance management for SOA environments. We are talking with a panel of industry analysts and practitioners. I want to thank our panelists, Joe McKendrick, Sandy Rogers, Anthony Abbattista, and Rourke McNamara. Thanks.
This is Dana Gardner, principal analyst at Interarbor Solutions, and you have been listening to a sponsored BriefingsDirect podcast. Thanks and come back next time.
Listen to the podcast here. Sponsor: TIBCO Software.
Transcript of BriefingsDirect podcast on service performance management recorded live at TUCON 2008 in San Francisco on April 30, 2008. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
It very much hinges on IT and performance management. We're actually talking about two levels here, governance and performance management. They are integrated, and they need each other. But governance deals more with how the business addresses SOA. Performance management is an IT challenge and is rightly put into the IT "sphere of influence."
Gardner: I was struck, when I heard Anthony’s presentation this morning, by your example of what things used to be like, where you would get 40 people on a conference call when something went wrong, and you would be yelling out URLs in order to find the right server to either shut down or replace.
What's the issue from your perspective now on solving this issue when things go wrong? Is this something that we can rely on people to solve, or do we need to move more toward a systems-based approach?
Abbattista: First, I'd like to wind through an earlier question you asked. When we went to SOA, when we put in our enterprise service bus (ESB), and when we chose TIBCO for our bus, a lot of people thought of SOA as, "Well, I am just going to construct some WSDL and call some SOAP or HTTP, and that’s SOA."
But the first thing we did is talk through the governance part of why we want to "get on the toll road" and "pay a toll for the bus," and really that became the consistency in measurement and governance, and lets us operate the things once we have created them.
So the first thing we had to do was get through the whole idea of that. It was worth it, and it wasn’t a matter of whether the bus would work or not. For the first year-and-a-half that we put our ESB in and we started to market services on it, we would hear the words, "TIBCO is down."
It didn’t matter whether the back-end service is down. It didn’t matter whether the mainframe was broken, they would say "TIBCO is down." We finally started to get the root cause, saying, "No, so and so service is down." The basis for us having good measurement of performance is helping to "pay the toll," of getting on the bus and actually having measurement points that are well understood.
I also don’t agree exactly that governance is a business-unit thing. Governance for us is also a lot about the SLAs around the services, of having good expectations up front about how they will behave and how they will be called. That way, we have a benchmark or baseline to compare ourselves to on this. All of sudden, if we get a 100,000 calls a day to something that is designed for, or is expected to have, 1,000 -- we at least understand what to be looking for.
Gardner: Let's provide a level-set for our listeners. You are representing Allstate, which is a very large organization, with 17 million customers, $156 billion in assets. Give us a sense of the scale that we are talking about in terms of your IT organization?
Abbattista: Our claims organization, for example, has an IT shop of about 400 people that are employees, and we are not counting offshore or other people to support that. Each of our business units is a substantial IT shop in of itself, each with 500 to 1,000 people.
Then, what we choose to federate becomes an issue, because they need to talk to each other. They need to talk to themselves. They need to talk to the outside world. So what we layer then in my area is an infrastructure of components on how to do those tasks.
The massiveness of it is how do you measure and monitor that to get end-to-end composite services that we really can monitor and supply a good customer experience from? The massiveness is amazing. We have about 5,000 servers -- UNIX, Windows, mainframes, AS400s -- we have them all at this point.
Gardner: How many services do you have that have to "pay their toll" on the service bus, so to speak?
Abbattista: About 750.
Gardner: Wow! That’s pretty good.
Abbattista: We actually front our document management services and collapse all that into Oracle, but we fronted that with TIBCO. We did that so that we would have the measurement from day one, and it’s worked amazingly well.
People argued it would be just as easy to shove the document to the database and make an HTTP-SOAP call, but this governed ESB approach has paid off a 1,000 times over, because we now predicatively know when something is going awry.
Gardner: All right, now let's go to Rourke. We understand that enterprises are hesitant about going toward SOA on a holistic basis, if they haven’t got performance backstops in place. We are a little bit weary of finger pointing, because there is such a complex stew of components and services that makes it very difficult after the fact to point out and say who is responsible.
And, we're dealing with organizations like Allstate, which have massive size and scale, with 750 services. What do people need to be considering, as we moving into to yet more complexity with virtualization, cloud computing, utility grids? Give us a little bit of level-set about what's important to consider when moving toward a solution before the fact?
McNamara: SOA, virtualization, and governance -- all of these technologies have pluses and minuses. And, on the whole, when you finish computing out the equation, you are definitely on the plus side, you are definitely on the positive side.
But, you need to make sure that, as you move from the older ways of doing things -- from the siloed applications, the siloed business unit way of doing things -- to the SOA, services-based way of doing things, you don’t ignore the new complexities you are introducing.
Don’t ignore the new problems that you are introducing. Have a strategy in place to mitigate those issues. Make sure you address that, so that you really do get the advantage, the benefits of SOA.
What I mean by that is with SOA you are reusing services. You are making services available, so that that functionality, that code, doesn’t need to be rewritten time and time again. In doing so you reduce the amount of work, you reduce the cost of building new applications, of building new functionality for your business organization.
You increase agility, because you have reduced the amount of time it takes to build new functionality for your business organization. But, in so doing, you have taken what was one large application, or three large applications, and you have broken them down into dozens or tens of separate smaller units that all need to intercommunicate, play nice with each other, and talk the same language.
Even once you have that in production, you now have a greater possibility for finger-pointing, because, if the business functionality goes down, you can’t say that that application that we just put on is down.
The big question now is what part of that application is down? Whose service is it? Your service, or someone else’s service? Is it the actual servers that support that? Is it the infrastructure that supports that? If you are using virtualization technology, is it the hardware that’s down, or is it the virtualization layer? Is it the software that runs on top of that?
You have this added complexity, and you need to make sure that doesn’t prevent you from seeing the real benefit of doing SOA.
Gardner: So after the fact of failures, in trying to do forensics and root cause analysis and putting more agents and agent-less systems in place, if it's all telling you what's wrong after the fact that it’s wrong, it’s probably too late.
McNamara: Absolutely.
Gardner: How do we get to this vision of proactive, anticipatory systems awareness via service performance management? Let me first take this to Sandy. How important is it for us to get to this sense that something isn't quite right, in advance of it failing?
Rogers: Obviously, there are different use cases and different companies that are really interested in that dynamic, autonomic type of environment, where you can adjust to the demands of the environment, but we are also becoming much more Web-based.
What we are seeing is that, as services are exposed externally to customers, partners, and other systems, it affects the ability to fail-over, to have redundant services deployed out, to be able to track the trends, and be able to plan, going forward, what needs to be supported in the infrastructure, and to even go back to issues of funding. How are you going to prove what's being used by whom to understand what's happening?
So, first, yes, it is visibility. But, from there, it has to be about receiving the information as it is happening, and to be able to adjust the behavior of the services and the behavior of the infrastructure that is supporting. It starts to become very important. There are levels of importance in criticality with different services in the infrastructure that’s supporting it right now.
But, the way that we want to move to being able to deploy anywhere and leverage virtualization technologies is to break away from the static configuration of the hardware, to the databases, to where all this is being stored now, and to have more of that dynamic resourcing. To leverage services that are deployed external to an organization you need to have more real-time communication.
Gardner: So, the proposition remains, how do you do that? It’s clear that you want to get out in front of these problems, but with so many interdependencies, the large scale in number of services, different environments, probably inside and outside the organization, it raises questions. How do we move up in abstraction toward understanding the context of an entire business process, in order to go back and look for the signals that will tell us when something is approaching a breakdown, or when we need to provision more hardware and software resources?
Let me take this to you, Anthony. Where do you think that abstraction needs to be in order to forecast appropriately issues of SOA integrity?
Abbattista: I'll go back to the point on having some expectations or benchmarks of how the service should run when it’s designed and deployed in the first place. Then, you can understand if your baseline is correct and then, over time, you can look for fragmented behavior. But, I do think you need some level of end-to-end view of the process and of who is the customer on the end.
Ultimately, where these things show up en masse is at the end-points, and typically that’s in the consumer space, as we are frustrating an employee or someone on a website with a bad client experience. Those are unforgivable.
So, starting with the customer at the end-point of that business process and looking at some of those interactions, is part and parcel of deploying the service in the first place. If you don’t do that, you will be chasing your tail for the rest of your life in operations, until you go back and do that mapping. So I think it pays to do it upfront.
Gardner: You mentioned in your presentation that the "Walls must come down" between IT operations and development-deployment-requirements-test functions. It sounds like you're also saying it needs to go from end-to-end, beyond just that wall, but also across the entire event-processing landscape.
Abbattista: In that respect, I view our function in running the applications and supplying the applications as a utility. It's our job to point back to the groups that deploy the stuff. If I let them deploy junk, I am as complicit in that junk being delivered as anybody else. That’s a responsibility we take seriously. If you're going to put it in the shop and expect us to run it, I won't take junk.
Gardner: Right. So there is the adage of, "Garbage in, garbage out." Now, if garbage appears anywhere in the context of a complex process, it's garbage out. That’s even more difficult.
Let's go to Joe McKendrick. Tell us about the concept of complex event processing (CEP). How do you get any handle on a process? Do you look for the description of the process from a modeling perspective, through what's been done on the ESB, all of the above?
McKendrick: Definitely all of the above. CEP is something that’s just coming into the SOA realm. It is said that that’s the next phase for SOA. As was pointed out this morning, real time is not enough for a business. Business needs to be able to react and predict.
Rourke and I were talking about that a little bit earlier. You need to be able to predict what's going to happen, not only in the business, but in the systems. TIBCO is making some progress in this area in terms of being able to predict when the system may go down or when there will be spikes in demand. Predictive analytics, which is a subset of business intelligence (BI), is now moving into the systems management space.
Gardner: We're actually moving above the systems management space by an abstraction level or two. Let's go back to Rourke. You had a couple of product enhancement announcements today here at the TIBCO conference. You are getting out in front of service performance management, and your interest is to accomplish some of the things we have been describing, provide what the market is demanding for SOA in order to be trusted.
Tell us about CEP and why that is an important part of this predictive solution.
McNamara: One of our customers said it best last night over dinner, when I introduced the concept of the product I am going to mention in just a second. They saw immediately what problem it solved for them.
They said that their biggest fear is that their SOA initiative will be a victim of its own success. A service will be reused so many times so rapidly that the hardware it's deployed on, the manner in which it was deployed, won't be able to handle the load. That service, which is now used in a dozen different business applications, or exposed in a dozen different business applications, will go down or will degrade in its performance level.
That could make SOA a victim of its own success. They will have successfully sold the service, had it reused over and over and over and over again. But, then, because of that reuse, because they were successful in achieving the SOA dream, they now are going to suffer. All that business users will see from that is that "SOA is bad," it makes my applications more fragile, it makes my applications slow down because so many people are using the same stuff.
Gardner: What is it about CEP that gives us more visibility at the right abstraction, so that we can predict among all of these different complex components and assets where a problem is developing?
McNamara: The key is that we just can’t simply wait for the problem to develop or the problem to happen, because it will happen very quickly. We won't have a week’s warning, a month’s warning, or even necessarily a few hours’ warning. And we won't understand, when we deploy that service, all the places or all the manners in which it will used. So, we need to be able to predict these problems before they occur and do something to prevent those problems from occurring.
TIBCO is taking our CEP technology, the business events technology that we have, and applying the problem to our internal software, our infrastructure, the same way our customers apply it to their business problems.
We are using business events to monitor what's going on with service load and performance -- what the load profiles look like in a given organization, allowing it to understand some of the programs and marketing efforts that are going on within that company. Then, when it sees that a service load is approaching a dangerous level; when it sees that based on the events that are occurring that the service will become overloaded and will violate its SLAs, it’s able to tell other parts of the infrastructure to take action to prevent that problem.
Gardner: Let me see if I understand this. This sounds like a schematic about a business process and, by reverse engineering from that process down to the constituent ingredients to support it, you can predict where the loads will be building or will become erratic. Therefore, you can also detect what's going on within that system, put the two together, and come up with a heads-up?
McNamara: That’s exactly right. You need to understand what the interdependencies are between your services and what the load characteristics of the different component parts in that dependency graph in that environment are. Then, based on that, you need to understand what sort of events in your business or in your IT infrastructure will cause performance problems or overload conditions.
Gardner: Let's go back to Sandy. You mentioned earlier about how to automate toward these goals. It sounds like it’s going to be a bit of journey to get to full automation. On the other hand, having 40 people on a conference call to try to manually bear-wrestle these problems down doesn’t work either. How do we find a balance between too much automation, automation that can’t be attained, and purely manual, after-the-fact approaches?
Rogers: Everyone has to walk before they run with any type of new technology implementation. But, we are finding that most organizations are keying in on those services that are most important, and making sure that they are instrumented appropriately regarding the technologies that support management as being able to define what those thresholds are.
Being able to correlate those thresholds to real business needs and business value -- that’s one of the interesting things about what we were doing in a service level. We can start to associate the services that are most relevant and what there are going to have the most impact for.
We can make sure that the information contained either in the payload or coming from the service itself is provided, so you have that insight. I think organizations are starting to realize that, in order to prove the value of the services, and to prove the value of having this level of coordination around management, they need to be able to make that association.
From an eventing point of view, what's interesting is that there is a lot of parallel processing going on in this environment. Rather than wait until something happens in some linear, straight-through process, we're seeing the ability to watch and correlate some of those events vis-à-vis the thresholds, understand which thresholds are the most important, and start automating how the system is going to react to those conditions -- and do it from a cost-benefit perspective moving forward.
Gardner: Okay, so companies can take this approach, use a moderate pace, learn as they go, and use complex event processing to offer insights into the context of what’s going on. But, if human nature is any indication, people usually react to whatever the rules are about their job, and for IT this is going to be the view from the SLAs.
It strikes me that what's going to happen is that a lot of these organizations are going to reverse-engineer from the SLA, and that the rules and the models in the SLA become extremely important. Am I going out on a limb here, Anthony, or do you think it will pan out that the SLAs will be the rules that service performance management then needs to line up around?
Abbattista: That's right. Again, it's back to what do you expect, and are you living up to it? You talk about failure not coming from the SOA itself, but we could have a case where a service got deployed, people learned about it, and, before you know it, we are taking 100,000 hits a day on a service that nobody ever gave any design thought to.
I would have to reach in there and get some agent information once in a while. And all of a sudden, the supplier of the service -- who did us a favor, put this on the bus, and then did a point-to-point interface -- calls up and says, "Help!"
Someone might publish this thing with no modeling, because they thought it was some low-volume thing that wasn't important. All of a sudden, it becomes important because everybody found it. So, as we get to composite services, SOA performance is about the service expectation.
Gardner: And the governance?
Abbattista: The governance, and do you let them do it? Do you have governors? Do you have a cost model that burdens the caller, rather than the supplier? These are real questions we'll get into, and they are why I was talking about breaking down the walls. If it's truly a valuable service, then it's my job to figure out and pay for upgrades -- or to help you redesign it.
We take very much an advocate approach to, "Okay, if you come on the bus, we will help you with being successful." And the SLA is the baseline for that. But it also sets up that, "Hey, did you do a good enough job? And what if you are wildly successful?"
Gardner: Right. Let's throw this back to TIBCO. There's clearly a need in the market for a full lifecycle approach, feedback loops, many moving parts. What is it that you can do from a product perspective that helps get to that level of automation -- that, in a sense, fills in the cracks about who and what perform some of these necessary communications between the operations side and those associated with the ongoing requirements?
McNamara: Taking a step back, TIBCO offers a single user interface from the business analyst all the way through to the operational administrators who are running our applications. The idea is that, when you sit down to build out your services, when you sit down to build out your business processes, you use one tool to define what the business processes look like, what the touch points are between folks. Then, that diagram gets handed off from the business analyst to the implementer, who sits down and actually builds the services or builds the business process management (BPM) process that meets those requirements.
There is a direct link between the two. There is a round-tripping built into that tool, largely because it's a single data model and a single user interface with different views for people with different roles in your enterprise. That’s one major thing we do to help facilitate that communication, and that’s part of what we call the TIBCO ONE initiative. The product in question is the TIBCO Business Studio product, which forms that single user interface.
Gardner: And you’ve got hooks in a lot of the other parts of the SOA infrastructure for service enablement and delivery. How do you pull these parts together in a concerted effort?
McNamara: The other side of things is, even once you've built things out and deployed them to production, you need to make sure you can keep track of exactly what's going on, as a number of the folks on this panel have said. Ideally, you want to identify early on, as Sandy and Anthony said, which services are important to your enterprise and which services will have heavy load.
Unfortunately, you can't always do that. Sometimes, as Anthony said, a little service that you think is just helpful turns out to be used in 60 percent of the applications you are deploying. All of a sudden, you've got an issue.
You need to understand what the usage characteristics are on your services, not just the designed usage characteristics on your services. We’ve embedded both policy and performance management capabilities in our underlying service infrastructure. All the TIBCO ActiveMatrix products, all of our SOA enablement products, will transparently monitor for performance and usage of the services deployed in that environment.
Anything that you build in TIBCO ActiveMatrix BusinessWorks, ActiveMatrix Service Bus, ActiveMatrix Service Grid, and so on, is automatically monitored. And, you can automatically do some things around policy and access and control and rules.
So, even if you build that little service and you don't think it's important, and you don't want to go to the extra trouble to build some governance into it, it's there. It's already been embedded in that infrastructure. When you need it, you can just turn it on and make use of it, and you will automatically have some information about how people are using it, with a fairly nice visual dashboard.
The key here is not just the ability to see some numbers in a report, because people miss that. You can have a report on, as Anthony said, more than 750 running services, but if you are just going through the performance numbers on each of those services on a regular basis, things get lost. You need a very good visualization tool, so you can see in "living color" what's going on with those services and how that relates to the SLAs and the rules you've set -- the expectations you've set for those services.
Gardner: All right, let's go back to Allstate. Anthony, you've heard the announcements today, you understand this vision, and you understand the need very well. Do you think that we are getting very close to realizing more of an automated approach to service performance management in an SOA environment?
Abbattista: Yes, we are getting closer and making rapid strides. We need to be careful though. We are being careful to manage the service deployment, the service bus grid, and the parts about how to operate it. What makes me a little nervous or restless is the idea that we start taking all that back into the system parameters and the Java environments and Oracle databases, and that sort of thing. I would hate to see us not solve this first.
I really don’t think we’re at a stage where I want to automatically be adjusting heap sizes in Java virtual machines, or Oracle database parameters, which could be a next logical extension. I did see a little twinkle in people's eyes today, when they looked at products like the BMC Suite and Matrix. I don’t know that I want to have system programmer types around, trying to debug the debugging environment. I think it could become very complicated, very quickly.
Gardner: So we need to keep this at that higher abstraction in order to appreciate the whole and not get down into the weeds?
Abbattista: That’s my belief. I would say that if this service is not performing, then maybe we get the three people on the phone, the database administrator, the platform person, and the network person -- and we take a look at it. But I don’t think we should drill too far into that, until we solve the other layer.
Gardner: I suppose the good news and bad news about all of this is that the metrics for success or failure will be quite evident. You are not going to be able to cover this up across a service-support environment and the business processes that those contribute to, if it doesn’t work. Any failures are going to be readily apparent, not just to a systems administrator, but also to the entire organization that’s affected.
Joe, let's go to you on this whole notion of metrics of success. We have seen some caution, but we also see great promise around SOA. If we got into an economic environment where the pressure becomes higher for better productivity -- of doing more with less -- it's likely we are going to see more companies look to virtualization, outsourced services, software as a service. When do you think the switch on wider SOA use will get thrown, and to what degree does service performance management contribute to that?
McKendrick: Wow, that's the $64-billion question. It's interesting. I was speaking with the enterprise architect for a major distribution company a little bit earlier. She pointed out to me that, when they started their service enablement years ago -- even before Web services came on the scene and evolved over the past 10 years -- they built their infrastructure to be service-enabled from the get-go.
There was no separate effort to identify what could be service-enabled, build a service around it, and try to get acceptance of it. And I asked her, "Well, what do you consider to be success in terms of adoption of the SOA and in terms of reuse -- and do you even measure reuse success?"
Basically, to that company, if a service gets reused, fine. If it doesn't get reused at all, that's fine too. It doesn't matter. The reason I'm bringing that up is that reuse is often brought up as the ultimate metric for SOA success -- as the most tangible metric, I should say. But I think the best approach is to design applications, or pieces of applications, from the initial start to be service-enabled and employing the latest standards.
Gardner: Okay, so the risks are high, the rewards are high. It sounds like we are getting closer to a less manual, more automated approach, something that has visibility and hooks up and down, deep and wide. Let's wrap up with some last thoughts on this subject.
Sandy, if you are a CIO, a decision maker in the enterprise, and you are listening to this, what do you want to hear that's going to make you confident, given that you've already made a lot of investments in service enablement? You have to recognize that this is the way of the future, but what are you going to want to put in place in order to start protecting yourself when it comes to your performance management?
Rogers: What we are seeing with IT executives today is a real interest in leveraging what you have, in being able to deploy quickly, in not having to worry about all of the issues, and in having people on board who understand all of the technical dynamics of how everything needs to be implemented from an infrastructure point of view.
So, they need to be able to support fast time to market, but not just throw something out there. When you are deploying, you have to step back and make sure all of the resources you need are lined up to make that happen. You want an automated way to handle deployment, governance, and all of these different issues.
There is also the self-service nature that’s starting to happen -- the ability to create services and allow anyone in the enterprise to be able to get at the information they need as quickly as possible, not have to have a whole army of developers out there. That means you need to feel comfortable that you are creating an infrastructure that could be consumed by multiple parties.
Setting up that infrastructure is really going to save cost. It's going to save time to market, and you need that level of assurance, so that you don't need to babysit every single service. There is also an issue of being able to outsource to different parties. You want to be able to leverage that cost-effectively.
You need to set up an infrastructure, all the processes and rules that everyone needs to follow. And by doing that, you can now leverage whatever resources you want to develop and create what's necessary, and not have to worry about everyone falling in line and having their own infrastructure and having all of that reference architecture put together at each different resource.
It’s really that whole concept of creating a centralized type of platform and a framework to consume all these services. It’s going to be very, very important going forward. Everyone is talking about the issues of the economy, and it’s really the trade-offs of what do you need to do in order to move forward and think about things in more of a total cost of ownership (TCO) manner versus that of direct return on investment (ROI) -- that immediate cost-per-service type of measurement.
Gardner: It sounds like we are describing what could be thought of as insurance. You’ve already gone on the journey of SOA. It’s like going on a plane ride. Are you going to spend the extra few dollars and get insurance? And wouldn't you want to do that before you get into the plane, rather than afterward? Is that how you look at this? Is service performance management insurance for SOA? I am throwing that out to Anthony at Allstate.
Abbattista: It’s interesting to think of it as insurance. I think it’s a necessary operational device, for lack of better words.
Gardner: Service performance management -- not an option?
Abbattista: I don't think it's an option, because how much it hurts when you fall down has been proven over and over again. As the guy who has to run an SOA now, in the insurance business, it's not an option not to do it.
Gardner: Last words from you, Rourke? Do you view this as an insurance policy? I guess you have the choice of different insurers, right?
McNamara: I do. I actually do look at service performance management as insurance -- but along the lines of medical insurance. Anthony said people fall down and people get hurt. You want to have medical insurance. It shouldn't be something that is optional. It shouldn't be something you consider optional.
It's something that you need to have, and something that people should look at from the beginning when they go on this SOA journey. But it is insurance, Dana. That's exactly what it does. It prevents you from running into problems. You could theoretically go down this SOA path, build out your services, deploy them, and just get lucky. Nothing will ever happen. But how many people go through life without ever needing to see a doctor?
Gardner: Okay, now we are going to take some questions from the audience.
Tony Baer: This is Tony Baer with OnStrategies. I want to seize on something that Anthony Abbattista from Allstate had mentioned before, which is that you hope that service performance management doesn’t degrade into getting down to "Java heap sizes." I surely don’t blame you on that one, but what I am wondering is, at what point does this become an IT service management issue?
Abbattista: Because we have gathered that responsibility together, I guess it all falls under one roof in our particular organization. Where it might differ, I would think, is with external services. One thing we are doing is measuring some of our external providers outside the organization. I guess it's sort of the same phone call. You are either calling yourself or calling the person who is responsible and holding him accountable. So, I don't know that it changes much.
McNamara: I would like to add something to that. With something like a Tivoli or a BMC solution, something like a business service management technology, your operational administrators are monitoring your infrastructure.
They are monitoring the application at the application layer, and they understand, based on those things, when something is wrong. The problem is that's the wrong level of granularity to automatically fix problems. And it's the wrong level of granularity to know where to point that finger, to know whom to call to resolve the problem.
That's fine if what's wrong is a piece of your infrastructure or an entire application. But, if it's a service that's causing the problem, you need to understand which service -- and those products and that sort of technology won't do that for you. So, the level of granularity required is at the service level. That's really where you need to look.
Rogers: What I find is that it’s inevitable that we are going to go down that path, but standards between the systems that do IT management traditionally and this level of detail really haven’t been fleshed out. Most organizations are looking for a single, unified type of dashboard on some of the key indicators. They might want to have that for the operations team that has traditionally run IT service management.
A lot of the initiatives around ITIL Version 3.0 are starting to get some of those teams thinking in terms of how to associate the business requirements for how services are being supported by the infrastructure, and how they are supported by the utility of the team itself. But, we're a long way away from having everything all lined up, and then having it automatically amend itself. People are very nervous about relinquishing control to an automated system.
So, it is going to be step-by-step, and the first step is getting that familiarity, getting those integrations starting to happen, and then starting to let loose. What's interesting is some of the areas of virtualization technology, where you might have some level of management that's abstracted from the physical infrastructure, and then this level of abstracted management of the services and how they come together. It hasn't really been defined in the industry, but down the road -- two, three, four, five years from now -- I think you will be seeing a lot more around that.
McKendrick: Let me add that we're still in the very early stages of SOA. In fact, a lot of companies out there think they have SOA, when they actually have just a bunch of Web services, a JBoss architecture, and point-to-point types of interfaces and implementations. A lot of companies are just starting to get their arms around exactly what SOA is and what it isn't.
Gardner: Very good. We have been discussing the issues around service performance management for SOA environments. We are talking with a panel of industry analysts and practitioners. I want to thank our panelists, Joe McKendrick, Sandy Rogers, Anthony Abbattista, and Rourke McNamara. Thanks.
This is Dana Gardner, principal analyst at Interarbor Solutions, and you have been listening to a sponsored BriefingsDirect podcast. Thanks and come back next time.
Listen to the podcast here. Sponsor: TIBCO Software.
Transcript of BriefingsDirect podcast on service performance management recorded live at TUCON 2008 in San Francisco on April 30, 2008. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Thursday, June 05, 2008
Apache CXF: Where it's Been and What the Future Holds for Web Services Frameworks
Transcript of BriefingsDirect podcast on IONA Apache CXF and open-source Web services frameworks.
Listen to the podcast. Sponsor: IONA Technologies
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.
Today, a sponsored podcast discussion about Apache CXF, an open-source Web services framework that recently emerged from incubation into a full project. We are going to be discussing where CXF is, what are the next steps, how it is being used, what the market is accepting from open-source Web services and service-oriented architecture (SOA) infrastructure, and then, lastly, a road map of where CXF might be headed next.
Joining us to help us understand more about CXF, is Dan Kulp, a principal engineer who has been deeply involved with CXF for a number of years. He works at IONA Technologies. Welcome back to the show, Dan.
Dan Kulp: Thank you, it's good to be here.
Gardner: We are also joined by Raven Zachary, the open-source research director at The 451 Group. Welcome to the show, Raven.
Raven Zachary: Thank you.
Gardner: And we are joined by Benson Margulies, the CTO of Basis Technology. Welcome, Benson.
Benson Margulies: Thank you, good day.
Gardner: Let's start with you, Benson. Tell us a little bit about Basis Technology. I want to hear more about your company, because I understand you are a CXF user.
Margulies: Basis is about a 50-person company in what we call linguistic technologies. We build software components that do things like make high-quality, full-text search possible in languages such as Arabic and Chinese -- or do things like tag names and text, which is part of information retrieval.
We have customers in the commercial and government spaces and we wound up getting interested in CXF for two different reasons. One is that some of our customers have been asking us over time to provide some of our components for integration into a SOA, rather than through a direct application programming interface (API), or some sort of chewing gum and baling wire approach. So, we were looking for a friendly framework for this purpose, and CXF proved to be such.
The other reason is that, for our own internal purposes, we had developed a code generator that could read a Web-service description (WSDL) file and produce a client for it in JavaScript that could be loaded into a browser and tied back to a Web service. Having built it, we suddenly felt that we would like some help maintaining it. We went looking for an open-source framework to which we could contribute it, and CXF proved to be a friendly place for that too.
Over a period of time, to make a long story short, I wound up as a CXF committer. So, Basis is now both a corporate user of CXF as a delivery vehicle for our product, and also I am a committer focused on this JavaScript stuff.
Gardner: Great. You used the word "friendly" a couple of times. Let's go to Raven Zachary. Raven, why do people who go to open-source code and projects view it as friendly? What's this "friendly" business?
Zachary: Well, there are different motivations for participating in an open-source community. Generally, when you look at why businesses participate, they have a common problem among a peer set. It could be an underlying technology that they don't consider strategic. There are benefits and strength in numbers here, where companies pool together resources to work together on a common problem.
I think that, for individual developers, they see it as a chance to do creative problem-solving in off hours, being involved in a team project. Maybe they want to build up their expertise in another area.
In the case of a CXF, it certainly has been driven heavily by IONA and its acquisition of LogicBlaze, but you had other individuals and companies involved -- Red Hat, BEA, folks from Amazon and IBM, and Benson from Basis, who is here talking about his participation. The value of this opportunity for many different commercial entities is coming together to solve a common set of problems.
Gardner: Let's go to Dan Kulp. Dan, tell us a little bit about CXF and its current iteration. You emerged from incubation not that long ago. Why don't you give our listeners, for those who are not familiar with it, a little bit of the lineage, the history of how CXF came together, and a little bit about the current state of affairs in terms of its Apache condition or position?
Kulp: CXF was basically a merger of the Celtix project that we had at ObjectWeb, which was IONA sponsored. We had a lot of IONA engineers producing a framework there. There was also the XFire project that was at Codehaus. Both of these projects were thinking about doing a 2.0 version, and there was a lot of overlap between the two. So, there was a decision between the two communities to pool the resources and produce a better 2.0 version of both XFire and Celtix.
As part of that whole process of merging the communities, we decided to take it to Apache and work with the Apache communities as a well-respected open-source community.
So that's the long-term history of CXF. We spent about 20 months in the incubator at Apache. The incubator is where all the new projects come in. There are a couple of main points there, and one is the legal vetting of the code. Apache has very strong requirements about making sure all of the code is properly licensed and compatible with the Apache license, and that the people contributing it have met all of the legal requirements. That's to protect the users of the Apache projects, which, from a company and user standpoint, is very important.
A lot of other projects don't do that type of legal vetting, so there are always iffy statements around that. That was one important thing. Another very important part of the Apache incubator is building the community. One of the things they like to make sure of is that any project that goes out of the incubator has a very diverse community.
There are people representing a wide range of companies with a wide range of requirements, and the idea is to make sure that the community is stable over the long term. If one company should suddenly be acquired by another company, or just go bankrupt and out of business, or whatever, the community is going to still be there in a healthy state. This is so you can know that the Apache project is a long-term thing, not a short-term one.
Gardner: Could I pause there, and could you tell us who are the major contributors involved with CXF at this point?
Kulp: IONA is still heavily involved, as is Basis Technology, a couple of IBMers, as was mentioned earlier, and a couple of Red Hat people. There is one person who is now working for eBay who is contributing things, and there are a few people who I don't even know what company they work for. And that's a good thing. I don't really need to know. They have a lot of very good ideas, they are doing a lot of great work, and that's what's important about the community. It's not really that important, as long as the people are there participating.
Gardner: Okay. Things move quickly in this business. I wonder if any of our panelists recognize any shifts in the marketplace that have changed what may have been the optimum requirement set for a fully open-source Web-services framework from, say, two or three years ago, when these projects came together. What has shifted in the market? Does anyone have some thoughts on that?
Margulies: Well, Dan and Glen, who is another one of our contributors, and I were having lunch today, and we were discussing the shift in direction from the old JAX-RPC framework to JAX-WS/JAXB, the current generation of SOA standards. That has very much become the driving factor behind the kits.
CXF gets a lot of attention because it is a full open-source framework that is completely committed to those standards and gives relatively easy-to-use support for them. As in many other areas, it focuses on what people in the outside world seem to want to use the kit for, as opposed to some particular theoretical idea of ours about what they ought to want to use it for.
Gardner: Thank you, Benson. Anyone else?
Kulp: Yes, one of the big things that comes to mind when this question comes up is the whole "code-first" mentality. Several years ago, in order to do Web services, you had to know a lot about WSDL and extensible markup language (XML) schema. You had to know a lot of XMLisms. When you started talking about interoperability with other Web-services stacks, it was really a big deal, because these toolkits exposed all of this raw stuff to you.
Apache CXF takes a fairly different approach, making the code-first aspect the primary thing you think about. So, a lot of more junior-level developers can pick it up and start working with Web services very quickly and very easily, without having to learn a lot of these more technical details.
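As a minimal sketch of that code-first style -- assuming Apache CXF (or another JAX-WS provider) on the classpath, and with the class name and address invented for illustration -- a service can start as a plain annotated Java class:

    import javax.jws.WebService;
    import javax.xml.ws.Endpoint;

    // Code-first: start from a plain annotated Java class, not from a WSDL document.
    // The class name and address are invented for illustration.
    @WebService
    public class GreetingService {

        public String greet(String name) {
            return "Hello, " + name;
        }

        public static void main(String[] args) {
            // With Apache CXF on the classpath, this standard JAX-WS call publishes
            // the service through CXF and generates the WSDL and schema on the fly.
            Endpoint.publish("http://localhost:9000/greeting", new GreetingService());
        }
    }

The WSDL and schema that older toolkits made you write by hand are generated from the class and served at the endpoint's ?wsdl address.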
Gardner: Now, SOA is a concept, a methodology, and an approach to computing, but there are a number of different infrastructure components that come together in various flexible ways, depending on the end user's concepts and direction. Tell us a little bit about how CXF fits into this, Dan, within other SOA infrastructure projects, like ServiceMix, Camel, ActiveMQ. Give us the larger SOA view, the role CXF plays in that. Then, I am going to ask you how that relates to IONA and FUSE?
Kulp: Obviously, nowadays, if you are doing any type of SOA stuff, you really need some sort of Web-service stack. There are applications written for ServiceMix and JBI that don't do any type of SOAP calls or anything like that, but those are becoming fewer and farther between. Part of what our Web services bring is the ability to go outside of your little container and talk to other services that are available, or even within your company or maybe with a business partner or something like that.
A lot of these projects, like Camel and ServiceMix, require some sort of Web-services stack, and they've basically come to CXF as a very easy-to-use and very embeddable service stack that they are using to meet their Web-services needs.
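For a sense of why that embedding is lightweight, here is a hedged client-side sketch. It assumes CXF's JaxWsProxyFactoryBean and a greeting interface mirroring the earlier sketch, and it would look much the same whether it runs inside Camel, ServiceMix, or a plain Java application:

    import javax.jws.WebService;

    import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;

    // Hypothetical service endpoint interface mirroring the greeting sketch above.
    @WebService
    interface Greeting {
        String greet(String name);
    }

    public class GreetingClient {

        public static void main(String[] args) {
            JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
            factory.setServiceClass(Greeting.class);
            factory.setAddress("http://localhost:9000/greeting");

            // The proxy hides the SOAP and WSDL plumbing behind an ordinary Java call.
            Greeting client = (Greeting) factory.create();
            System.out.println(client.greet("world"));
        }
    }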
Gardner: Alright, so it fits into other Apache projects and code infrastructure bases, but, as you say, it's "plug-in-able," which probably makes it quite relevant and interesting for a lot of other users where a Web-services stack is required. Can you name a couple of likely scenarios for that?
Kulp: It's actually kind of fascinating, and one of the neatest things about working in an open-source project is seeing where it pops up. Obviously, with open-source people, anybody can just kind of grab it and start using it without really telling you, "Hey, I'm using this," until suddenly they come to you one day saying, "Hey, isn't this neat?"
One of the examples of that is Groovy Web service. Groovy is another dynamic language built in Java that allows you to do dynamic things. I'm not a big Groovy user, but they actually had some requirements to be able to use Groovy to talk to some Web services, and they immediately started working with CXF.
They liked what they saw. They hit a few bugs, which was expected, but they contributed fixes back to the CXF community. I kept getting bug reports from people and was wondering what they were doing. It turns out that Groovy's Web-services stack is now based on CXF. That type of thing is very fascinating from my standpoint, just to see that kind of use develop.
Margulies: I should point out that there has been a tendency in some of the older Web-service platforms to make the platform into a rather heavy, monolithic item. There's a presumption that what you do for a living with a Web service is stand up a service on the Web in one place. One of CXF's advantages comes when what you want to do is deliver to some third party a stack that they stand up, containing your stuff, that interacts with all of their existing stuff in a nice, lightweight fashion. CXF is unintrusive in that regard.
Gardner: And, just as a level-set reality check, over to Raven. Tell me a little bit about how this mix-and-match thing is working among and between the third parties, but also among and between commercial and open source, the infrastructure components.
Zachary: The whole Apache model is mix and match -- not only the licensing scheme, although the Apache license is a little easier for commercial vendors to digest, modify, and add to, compared to the GPL, but also, I think, the inherent nature of the underlying infrastructure technologies.
When you deploy an application, especially using open source, it tends to be several dozen distinct components that are being deployed. This is especially true in Java apps, where you have a lot of components or frameworks that are bundled into an application. So, you would certainly see CXF being deployed alongside other technologies to make that work. Things like ServiceMix or Camel, as you mentioned, ActiveMQ, Tomcat, certainly Apache Web Server -- these sorts of technologies are the instruments through which these services are exposed.
Gardner: Now, let's juxtapose this to the FUSE set. This is a commercially supported, certified, and tested SOA and Web-services component set. The FUSE services framework is derived from CXF. Dan, tell us a little bit about what is going on with FUSE and how has that now benefited from CXF moving from incubation into full Apache?
Kulp: As you mentioned, the FUSE services framework is basically a re-branded version of Apache CXF. If you go into a lot of these big customers, like banks or any of the other major types of customers, when they deploy an application, they want to have some level of support agreement with somebody, so that if a bug is found or a problem crops up, they can get somebody on the phone and get it fixed relatively quickly.
That's what the FUSE product line is basically all about. It's all open-source, and anybody can download and use the stuff, but you may not get the same level of support from the Apache community, as you do with the FUSE product.
The Apache communities are pretty much all volunteers. Everybody is working on their own agenda and has their own expertise. So, they may not have time, or they may be out on leave or on vacation, or something like that. Getting a commercial level of support from the Apache community can sometimes be a hard sell for a lot of these corporations, and that's why what FUSE really brings is a support agreement. You know that there is somebody there to call when there is a problem.
It's a two-way relationship. Obviously, if any of those customers come back with bugs and such, the IONA people will fix them and get the fixes pushed into both Apache and FUSE. So, the bugs get fixed, but the other thing that IONA gets from this is that there are a lot of ideas in the Apache communities that we may not have thought of ourselves.
One good example of this is that JavaScript thing that Benson mentioned earlier. That's not something IONA really would have thought of at the beginning, but this is something that we can give back to our customers saying, "Hey, isn't this a neat idea?" So, there are a lot of benefits coming from the other people that aren't IONA in these communities actually providing new features and new ideas for the IONA customers.
Gardner: Okay, you came off incubation in April, is that correct?
Kulp: Yes.
Gardner: Tell us about what's going on now. What's the next step, now that it's out of incubation? Is this sort of a maintenance period, and when will we start to see additional requirements and functionality coming in?
Kulp: There are two parts to that question. Right as we graduated, we were ready to push out 2.1. Apache CXF 2.1 was released about a week after we graduated, and it brought forth a whole bunch of new functionality. The JavaScript support was one piece. Whole new tooling was another, also a CORBA binding, and a whole bunch of other new stuff, including some REST-based APIs. So, 2.1 was a major step forward, compared to the 2.0 version that was ready last August, I believe.
Right now, there are basically two tracks of stuff going on. There are obviously a lot of bug fixes. One of the things about graduating is that there are a lot of people who don't really understand what the incubator is about, and so they weren't looking in the incubator.
The incubator has nothing to do with the quality of the code. It has more to do with the state of the community, but people see the word "incubator" and just say, "No, I'm not going to touch that." But, now that we've graduated, there are a lot more people looking at it, which is good. We're getting a lot more input from users. There are a lot of people submitting other ideas. So, there is a certain track of people just trying to get some bug fixes in and getting some support in place for those other people.
Gardner: I am impressed that you say "bug fixes" and not "refinement." That's very upfront of you.
Kulp: Well, a lot of it is refinement too, and, to be honest, there is a bit of documentation refinement that is going on as well, because with new people using it, there are new expectations. Their old toolkits may have done things one way, and the documentation may not reflect well enough, "Okay, if you did it this way in the old toolkit, this is how you do the same thing in CXF."
Margulies: If I could pipe up with a sociological issue here with open source: it's a lot easier to motivate someone to run in, diagnose a defect or a missing feature in the code, and make the fix than it is to find the additional motivation to go over to the "doc" side and think through, "How the heck are we going to explain this, and who needs to have it explained to them?" We're really lucky, in fact. We have at least one person in the community who focuses almost entirely on improving the doc as opposed to the code.
Gardner: Okay. So, we're into this maturity move. We've got a lot more people poking at it and using it. We're going to benefit from that community involvement. We've mentioned a couple of things that struck me a little earlier -- the Groovy experience and JavaScript. I guess there's this perception by many whom I've talked to that Web services are interesting, but there's a certain interest level, too, in moving into more dynamic languages, using RESTful interfaces for getting out to clients, and thinking about Web services in a broader context.
So, first let's go to Benson. Tell us why this JavaScript element was important to you, and where you think the mindset is in the field around Web services and the traditional WS-* specifications and standards.
Margulies: We went here originally because, while we build these components to go into the middle of things, we have to show them off to people who just want to see the naked functionality. So, we built a certain amount of demo functionality as Web applications, with things driven from Web pages. And the whole staff was saying, "Oh gosh, first we have to write a JSP page, and then we have to write some Beans, and then we have to package it all up, and then we have to deploy it."
It got really tiresome. So we went looking for a much thinner technology for taking our core functionality and making it visible. It dawned on us that perhaps you could just call a Web service from a browser.
Historically, there's been a mentality in the broad community that you "couldn't possibly do that." "Those Web-service XML messages, they are so complicated." "Oh, we could never do that." And several of the dynamic-language SOAP or Web-service kits that have shown up from time to time in the community were really weak. They barely worked, because they targeted very old versions of the Web-service universe. As Web-service standards have moved into stronger XML, they got left behind.
So, not knowing any better, we went ahead and built a code generator for JavaScript that could actually talk to a JAX-WS Web service, and I think that's an important direction for things to go. REST is a great thing. It allows very simple clients to get some data in and out of Web services, but people are building really big, complicated applications in dynamic languages these days, things like Ruby. For Web services to succeed in that environment, we need more of what we just did with the JavaScript. We need first-class citizenship for dynamic languages as clients, and even servers, of Web services.
Gardner: Let's take it over to Raven. Tell us, from the analyst perspective, what you see going on mentality wise and mindshare wise with Web-services specs, and do you think that there's a sort of "match made in heaven" here between something like CXF and some of these dynamic languages?
Zachary: Well, look back on the history of CXF: the merging of two initiatives -- Celtix from IONA and XFire from Codehaus -- spending the last few years in the incubator, and now coming out of the incubator in April. Bringing together those two initiatives is very telling; you get a stronger project built on the basis of two existing open-source efforts.
I like the fact that in CXF they are looking at a variety of protocols. It's not just one implementation of Web services. There's SOAP, REST, CORBA, other technologies, and then a number of transports, not just HTTP. The fact is that when you talk to enterprises, there's not a one-size-fits-all implementation for Web services. You need to really look at services, exposing them through a variety of technologies.
I like that approach. It really matches the needs of a larger variety of enterprise organizations, rather than being just a specific technology implementation of Web services. I mean, that's the approach that you're going to see from open-source projects in the space. The ones that provide the greatest diversity of protocols and transports are going to do quite well.
Gardner: Dan, you've probably heard this. Some of the folks who are doing more development with dynamic languages and who are trying to move toward light-weight webby applications have kind of an attitude going on with Web-services specs. Have you noticed that and what do you think is up with that? Has that perhaps prevented some of them from looking at CXF in evaluating it?
Kulp: Yeah, in a way, it has, but Web services are pretty much everywhere now. So, even though they may not really agree with some of the Web-service ideas, for their own user base to be able to grow, they have to start thinking about how to solve that problem, because the fact is that those services are there.
Now, going forward, REST is obviously a big word. So, whatever toolkit you're looking at needs to be able to talk REST as well, and CXF is doing a bit there. Going back, there's also CORBA stuff that needs to be talked to. With CXF, you don't just get the SOAP part of SOA; you get some of these additional technologies that can help you solve a wider range of problems. That's very important to certain people, especially if you're trying to grow a user base.
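As a rough illustration of what "talking REST" looks like in code, here is a small sketch using standard JAX-RS annotations of the kind CXF's REST front end supports; the resource name, path, and stub value are invented for the example:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;

    // Rough sketch of a REST-style resource using standard JAX-RS annotations.
    // The resource name, path, and stub value are invented for the example.
    @Path("/quotes")
    public class QuoteResource {

        @GET
        @Path("/{symbol}")
        @Produces("text/plain")
        public String getQuote(@PathParam("symbol") String symbol) {
            return symbol + ": 42.0";   // stub value; a real resource would look it up
        }
    }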
Gardner: Alright, so you've obviously benefited, the community has benefited from Benson and Basis Technology offering in what they did with JavaScript. I assume you'll be interested in committers to further that across more languages and more technologies?
Kulp: Oh, definitely. One of the nicest things about working in Apache projects is that it's an ongoing effort to try to keep the community growing and getting new ideas. As you get more people in, they have different viewpoints, different experiences, and all that can contribute to producing new ideas and new technologies, and making it easier to solve a different set of problems.
I always encourage people that, if they're looking in the CXF code, and they hit a bug, it's great if we see them submit a patch for that, because that shows that they're actually digging in there. Eventually, they may say, "Okay, I kind of like how you did that, but wouldn't it be neat if you could just do this?" And then maybe they submit some ideas around that and become a committer. It's always a great thing to see that go forward.
Gardner: Let's go around the table one last time and try to predict the future when it comes to open-source Apache projects, this webby application environment, and the larger undertaking of SOA. Dan, any prophecies about what we might expect in the CXF community over, say, the next 12 months?
Kulp: Obviously, there's going to be this ongoing track of refinements and fixes. One of the nice things about the CXF community is that we're very committed to supporting our existing users and making sure that any bugs they encounter get fixed in a relatively timely manner. CXF has a very good history of doing very frequent patch releases to get fixes out there. So, that's an ongoing thing that should remain in place, and it's a benefit to the communities and to the users.
Beyond that, there's a whole bunch of other ideas that we're working on and fleshing out. The code first stuff that I mentioned earlier, we have a bunch of other ideas about how to make code-first even better.
With certain toolkits, you have to delve down into either configuration or WSDL documents to accomplish what you want. It would be nice if you could just embed some annotations in your code, or something like that, to accomplish some of that. We're going to be moving some of those ideas forward.
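To illustrate the kind of in-code tuning he is pointing at, here is a small sketch using the standard JAX-WS annotations that exist today; the service name, namespace, and stub value are invented for the example:

    import javax.jws.WebMethod;
    import javax.jws.WebParam;
    import javax.jws.WebResult;
    import javax.jws.WebService;

    // Standard JAX-WS annotations already let you shape the generated contract from
    // the code itself, instead of editing WSDL or configuration files by hand.
    // The service name, namespace, and stub value are invented for illustration.
    @WebService(serviceName = "QuoteService",
                targetNamespace = "http://example.com/quotes")
    public class QuoteService {

        @WebMethod(operationName = "getQuote")
        @WebResult(name = "price")
        public double getQuote(@WebParam(name = "symbol") String symbol) {
            return 42.0;    // stub value for the sketch
        }
    }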
There's also a whole bunch of Web-services standards such as WS-I and WS-SecureConversation that we don't support today, but we are going to be working on to make sure that they are supported. As customers or users start demanding other WS technologies, we'll start considering them, as well. Obviously, if new people come along, they'll have other great ideas, and we would welcome those as well.
Gardner: Alright. Raven Zachary, what do you see as some of the trends that we should expect in Open Source infrastructure particularly around SOA and Web services interoperability over, say, the next 12 months?
Zachary: We've had, for the last decade or so, a number of very successful open-source infrastructure initiatives. Certainly, Apache Web Server, Linux as an operating system, and the application middleware stack -- Tomcat, Geronimo, JBoss -- have done very well. Open source has been a great opportunity for these technologies to advance, and we're still going to see commercial innovation in the space. But I think the majority of the software infrastructure will be based on open standards and open source over time, and then you'll see commercialization occur around the services side of that.
We're just starting to see the emergence of open-source Web services to a large extent and I think you're going to see projects coming out of the Apache Software Foundation leading that charge as other areas of the software infrastructure have been filled out.
When you look at growth opportunities, back in 2001 the JBoss app server had single-digit market share, compared to the leading technologies at the time, WebSphere from IBM and WebLogic from BEA. In the course of four years, that technology went from single-digit market share to actually being the number-one deployed Java app server in the market. I don't think it will take much time for a technology like CXF to capture the market opportunity.
So, watch this space. I think this technology and other technologies like it, have a very bright future.
Gardner: I was impressed. I wrote a blog post recently about CXF emerging from incubation, and it got some really high numbers, which indicated some significant interest.
Last, I am going to Benson at Basis Technology as a user and a committer. How do you expect that you'll be using something like CXF in your implementations over the next 12 months?
Margulies: Well, we're looking at a particular problem, which is coming up with a high-performance Web-service interface to some of our functions, where you put a document in and you get some results out. That's quite challenging, because documents are rather heavyweight, large objects, and the toolkits have not been wildly helpful on this.
So, I've scratched the surface of the necessary services in CXF, and I expect to be digging deeper. The other thing I'd put in as a comment, as a committer, is that one of the most important things we're going to see is a user support community.
Long before you get to the point where someone is a possible committer on the project, the users help each other in using the package, and that's a critical success factor. That community of people who read the mailing list just pitch in and help the newbies find their way from one end to the other.
Gardner: Well, great. Thank you so much. I think we've caught up with CXF, and we have quite a bit to look forward to over the coming quarters and months. I want to thank our panel. We've been joined by Dan Kulp, principal engineer at IONA Technologies; Raven Zachary, open source research director for The 451 Group; and Benson Margulies, the CTO at Basis Technology. Thanks, everyone.
Kulp: You're very welcome.
Zachary: Thank you.
Margulies: Thank you.
Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect Podcast on Apache CXF. Thanks and come back next time.
Listen to the podcast. Sponsor: IONA Technologies.
Transcript of BriefingsDirect podcast on IONA Apache CXF and open-source frameworks. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.
Listen to the podcast. Sponsor: IONA Technologies
Dana Gardner: Hi, this is Dana Gardner, principal analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.
Today, a sponsored podcast discussion about Apache CXF, an open-source Web services framework that recently emerged from incubation into a full project. We are going to be discussing where CXF is, what are the next steps, how it is being used, what the market is accepting from open-source Web services and service-oriented architecture (SOA) infrastructure, and then, lastly, a road map of where CXF might be headed next.
Joining us to help us understand more about CXF, is Dan Kulp, a principal engineer who has been deeply involved with CXF for a number of years. He works at IONA Technologies. Welcome back to the show, Dan.
Dan Kulp: Thank you, it's good to be here.
Gardner: We are also joined by Raven Zachary, the open-source research director at The 451 Group. Welcome to the show, Raven.
Raven Zachary: Thank you.
Gardner: And we are joined by Benson Margulies, the CTO of Basis Technology. Welcome, Benson.
Benson Margulies: Thank you, good day.
Gardner: Let's start with you, Benson. Tell us a little bit about Basis Technology. I want to hear more about your company, because I understand you are a CXF user.
Margulies: Basis is about a 50-person company in what we call linguistic technologies. We build software components that do things like make high-quality, full-text search possible in languages such as Arabic and Chinese -- or do things like tag names and text, which is part of information retrieval.
We have customers in the commercial and government spaces and we wound up getting interested in CXF for two different reasons. One is that some of our customers have been asking us over time to provide some of our components for integration into a SOA, rather than through a direct application programming interface (API), or some sort of chewing gum and baling wire approach. So, we were looking for a friendly framework for this purpose, and CXF proved to be such.
The other reason is that, for our own internal purposes, we had developed a code generator that could read a Web-service description file WSDL and produce a client for that in JavaScript that could be loaded into a browser and tied back to a Web service. Having built it, we suddenly felt that we would like some help maintaining it. We went looking for an open-source framework to which we could contribute it, and CXF proved to be a friendly place for that too.
Over a period of time, to make a long story short, I wound up as a CXF committer. So, Basis is now both a corporate user of CXF as a delivery vehicle for our product, and also I am a committer focused on this JavaScript stuff.
Gardner: Great. You used the word "friendly" a couple of times. Let's go to Raven Zachary. Raven, why do people who go to open-source code and projects view it as friendly? What's this "friendly" business?
Zachary: Well, there are different motivations for participating in an open-source community. Generally, when you look at why businesses participate, they have a common problem among a peer set. It could be an underlying technology that they don't consider strategic. There are benefits and strength in numbers here, where companies pool together resources to work together on a common problem.
I think that for individual developers, they see it as a chance to do creative problem-solving in off hours, being involved in the team project. Maybe they want to build up their current opportunities of expertise in another area.
In the case of a CXF, it certainly has been driven heavily by IONA and its acquisition of LogicBlaze, but you had other individuals and companies involved -- Red Hat, BEA, folks from Amazon and IBM, and Benson from Basis, who is here talking about his participation. The value of this opportunity for many different commercial entities is coming together to solve a common set of problems.
Gardner: Let's go to Dan Kulp. Dan, tell us a little bit about CXF and its current iteration. You emerged from incubation not that long ago. Why don't you give our listeners, for those who are not familiar with it, a little bit of the lineage, the history of how CXF came together, and a little bit about the current state of affairs in terms of its Apache condition or position?
Kulp: CXF was basically a merger of the Celtix project that we had at ObjectWeb, which was IONA sponsored. We had lot of IONA engineers producing a framework there. There was also the XFire Project that was at Codehaus. Both of these projects were thinking about doing a 2.0 version, and there was a lot of overlap between the two. So, there was a decision between the two communities to pool the resources and produce a better 2.0 version of both XFire and Celtix
As part of that whole process of merging the communities, we decided to take it to Apache and work with the Apache communities as a well-respected open-source community.
So that's the long-term history of CXF. We spent about 20 months in the incubator at Apache. The incubator is where all the new projects come in. There are a couple of main points there, and one is the legal vetting of the code. Apache has very strong requirements about making sure all of the code is properly licensed, but is compatible with the Apache license, that the people that are contributing it to have done all of the legal requirements to make sure that the code meets those things. That's to protect the users of the Apache projects, which, from a company and user standpoint, is very important.
A lot of other projects don't do that type of legal requirement. So, there are always iffy statements around that. That was one important thing. Another very important part of the Apache incubator is building the community. One of the things they like to make sure is that any project that goes out of the incubator is in a very diverse community.
There are people representing a wide range of companies with a wide range of requirements, and the idea is to make sure that that community is long-term stable. If one company should suddenly be acquired by another company, just goes bankrupt and out of business, or whatever, the community is going to still be there in a healthy state. This is so that you can know that that the Apache project is a long-term thing not a short term.
Gardner: Could I pause there, and could you tell us who are the major contributors involved with CXF at this point?
Kulp: IONA is still heavily involved, as is Basis Technology, a couple of IBMers, as was mentioned earlier, and a couple of Red Hat people. There is one person who is now working for eBay who is contributing things, and there are a few people who I don't even know what company they work for. And that's a good thing. I don't really need to know. They have a lot of very good ideas, they are doing a lot of great work, and that's what's important about the community. It's not really that important, as long as the people are there participating.
Gardner: Okay. Things move quickly in this business. I wonder if any of our panelists recognize any shifts in the marketplace that have changed what may have been the optimum requirement set for a fully open-source Web-services framework from, say, two or three years ago, when these projects came together. What has shifted in the market? Does anyone have some thoughts on that?
Margulies: Well, Dan and Glen, who is another one of our contributors, and I were having lunch today, and we were discussing the shift in direction from the old JAX-RPC framework to JAX-WS/JAXB, the current generation of SOA standards. That has very much become the driving factor behind the kits.
CXF gets a lot of attention, because it is a full open-source framework that is completely committed to those standards and gives easy-to-use, relatively speaking, support for them. As in many other areas, it focuses on what people in the outside world seem to want to use the kit for, as opposed to some theoretical idea of ours about what they ought to want to use it for.
Gardner: Thank you, Benson. Anyone else?
Kulp: Yes, one of the big things that comes to mind when this question comes up is the whole "code first" mentality. Several years ago, in order to do Web services, you had to know a lot about WSDL and Extensible Markup Language (XML) Schema. You had to know a lot of XMLisms. When you started talking about interop with other Web-services stacks, it was really a big deal, because those toolkits exposed all of this raw stuff to you.
Apache CXF takes a fairly different approach, making the code-first aspect a primary thing that you can think about. So, a lot of these more junior-level developers can pick it up and start working with Web services very quickly and very easily, without having to learn a lot of those more technical details.
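To make the code-first idea concrete, here is a minimal, hypothetical sketch using the standard JAX-WS annotations that CXF implements; the service name, operation, and address are made up purely for illustration.

```java
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Code-first: a plain Java class becomes a SOAP Web service with one annotation.
// CXF, acting as the JAX-WS provider, generates the WSDL and XML plumbing.
@WebService
public class GreeterService {

    public String greet(String name) {
        return "Hello, " + name;
    }

    public static void main(String[] args) throws InterruptedException {
        // Publish on an embedded HTTP endpoint; the URL is illustrative.
        Endpoint.publish("http://localhost:9000/greeter", new GreeterService());
        System.out.println("WSDL available at http://localhost:9000/greeter?wsdl");

        // Keep the JVM alive while the endpoint serves requests.
        Thread.sleep(Long.MAX_VALUE);
    }
}
```

No hand-written WSDL, deployment descriptor, or container is needed up front; the contract is derived from the Java code itself, which is the point Dan is making about junior developers getting started quickly.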
Gardner: Now, SOA is a concept, a methodology, and an approach to computing, but there are a number of different infrastructure components that come together in various flexible ways, depending on the end user's concepts and direction. Tell us a little bit, Dan, about how CXF fits in with other SOA infrastructure projects, like ServiceMix, Camel, and ActiveMQ. Give us the larger SOA view, the role CXF plays in that. Then, I am going to ask you how that relates to IONA and FUSE.
Kulp: Obviously, nowadays, if you are doing any type of SOA work, you really need some sort of Web-services stack. There are applications written for ServiceMix and JBI that don't do any type of SOAP calls or anything like that, but those are becoming fewer and farther between. Part of what Web services bring is the ability to go outside of your little container and talk to other services that are available, whether within your company or maybe with a business partner.
A lot of these projects, like Camel and ServiceMix, require some sort of Web-services stack, and they've basically come to CXF as a very easy-to-use and very embeddable stack that they use to meet their Web-services needs.
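As a rough sketch of that embeddability on the consuming side, CXF's JaxWsProxyFactoryBean lets an application build a typed client proxy in a few lines, with no container or deployment descriptor; the Greeter interface and address below are hypothetical and assume a matching remote endpoint.

```java
import javax.jws.WebService;
import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;

// Hypothetical service interface mirroring the remote endpoint's contract.
@WebService
interface Greeter {
    String greet(String name);
}

public class EmbeddedClient {

    public static void main(String[] args) {
        // Build a dynamic proxy backed by CXF's runtime.
        JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
        factory.setServiceClass(Greeter.class);
        factory.setAddress("http://localhost:9000/greeter"); // illustrative URL
        Greeter client = (Greeter) factory.create();

        // The call below goes out as a SOAP request over HTTP.
        System.out.println(client.greet("world"));
    }
}
```

Because everything happens in plain Java code, this kind of client can be dropped into another framework or application without pulling in a heavyweight server stack.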
Gardner: Alright, so it fits into other Apache projects and code infrastructure bases, but, as you say, being "plug-in-able" probably makes it quite relevant and interesting for a lot of other uses where a Web-services stack is required. Can you name a couple of likely scenarios for that?
Kulp: It's actually kind of fascinating, and one of the neatest things about working in an open-source project is seeing where it pops up. Obviously, with open-source people, anybody can just kind of grab it and start using it without really telling you, "Hey, I'm using this," until suddenly they come to you one day saying, "Hey, isn't this neat?"
One example of that is Groovy's Web-services support. Groovy is a dynamic language built on Java. I'm not a big Groovy user, but they had some requirements to be able to use Groovy to talk to some Web services, and they immediately started working with CXF.
They liked what they saw, and they hit a few bugs, which was expected, but they contributed fixes back to the CXF community. I kept getting bug reports from people and wondering what they were doing. It turns out that Groovy's Web-services stack is now based on CXF. That type of thing is very fascinating from my standpoint, just to see that kind of use develop.
Margulies: I should point out that there has been a tendency in some of the older Web-services platforms to make the platform a rather heavy, monolithic item. There's a presumption that what you do for a living with a Web service is stand up a service on the Web in one place. One of CXF's advantages shows up when what you want to do is deliver to some third party a stack that they put up, containing your stuff, that interacts with all of their existing stuff in a nice, lightweight fashion. CXF is unintrusive in that regard.
Gardner: And, just as a level-set reality check, over to Raven. Tell me a little bit about how this mix-and-match thing is working among the third parties, and also between commercial and open-source infrastructure components.
Zachary: The whole Apache model is mix and match. That's true not only of the licensing scheme -- the Apache license is a little easier for commercial vendors to digest, modify, and add to than the GPL -- but also, I think, of the inherent nature of the underlying infrastructure technologies.
When you deploy an application, especially using open source, it tends to be several dozen distinct components that are being deployed. This is especially true in Java apps, where you have a lot of components or frameworks bundled into an application. So, you would certainly see CXF being deployed alongside other technologies to make that work. Things like ServiceMix or Camel, as you mentioned, ActiveMQ, Tomcat, and certainly the Apache Web Server are the instruments through which these services are exposed.
Gardner: Now, let's juxtapose this with FUSE. This is a commercially supported, certified, and tested SOA and Web-services component set, and the FUSE services framework is derived from CXF. Dan, tell us a little bit about what is going on with FUSE and how it has benefited from CXF moving from incubation into a full Apache project.
Kulp: As you mentioned, the FUSE services framework is basically a re-branded version of Apache CXF. A lot of these big customers, like banks, when they deploy an application, want to have some level of support agreement with somebody, so that if a bug is found or a problem crops up, they can get somebody on the phone and get it fixed relatively quickly.
That's what the FUSE product line is basically all about. It's all open source, and anybody can download and use the stuff, but you may not get the same level of support from the Apache community as you do with the FUSE product.
The Apache communities are pretty much all volunteers. Pretty much everybody is working on their own agenda, with their own expertise. So, they may not have time, or they may be out on leave or on vacation or something like that. Getting a commercial level of support from the Apache community can sometimes be a hard sell for a lot of these corporations, and that's why what FUSE really brings is a support agreement. You know that there is somebody there to call when there is a problem.
It's a two-way relationship. Obviously, if any of those customers come back with bugs, the IONA people will fix them and get the fixes pushed into both Apache and FUSE. So, the bugs get fixed, but the other thing that IONA gets from this is that there are a lot of ideas in the Apache community that we may not have thought of ourselves.
One good example of this is that JavaScript work that Benson mentioned earlier. That's not something IONA really would have thought of at the beginning, but it is something that we can give back to our customers, saying, "Hey, isn't this a neat idea?" So, there are a lot of benefits from the people in these communities who aren't from IONA providing new features and new ideas for IONA customers.
Gardner: Okay, you came off incubation in April, is that correct?
Kulp: Yes.
Gardner: Tell us about what's going on now. What's the next step, now that it's out of incubation? Is this sort of a maintenance period, and when will we start to see additional requirements and functionality coming in?
Kulp: There are two parts to that question. Right as we graduated, we were ready to push out 2.1. Apache CXF 2.1 was released about a week after we graduated, and it brought a whole bunch of new functionality. The JavaScript support was one piece. Whole new tooling was another, along with a CORBA binding and some REST-based APIs. So, 2.1 was a major step forward, compared with the 2.0 version that was released last August, I believe.
Right now, there are basically two tracks of work going on. There are obviously a lot of bug fixes. One of the things about graduating is that there are a lot of people who don't really understand what the incubator is about, and so they weren't looking in the incubator.
The incubator has nothing to do with the quality of the code. It has more to do with the state of the community, but people see the word "incubator" and just say, "No, I'm not going to touch that." Now that we've graduated, there are a lot more people looking at it, which is good. We're getting a lot more input from users, and a lot of people are submitting other ideas. So, one track is just getting bug fixes out and getting support in place for those new people.
Gardner: I am impressed that you say "bug fixes" and not "refinement." That's very upfront of you.
Kulp: Well, a lot of it is refinement too, and, to be honest, there is a bit of documentation refinement going on as well, because with new people using it, there are new expectations. Their old toolkits may have done things one way, and the documentation may not explain well enough, "Okay, if you did it this way in the old toolkit, this is how you do the same thing in CXF."
Margulies: If I could pipe up with a sociological issue here with open source: it's a lot easier to motivate someone to run in, diagnose a defect or a missing feature in the code, and make the fix than to summon the additional motivation to go over to the "doc" side and think through, "How the heck are we going to explain this, and who needs to have it explained to them?" We're really lucky, in fact. We have at least one person in the community who focuses almost entirely on improving the doc, as opposed to the code.
Gardner: Okay. So, we're into this maturity phase. We've got a lot more people poking at it and using it, and we're going to benefit from that community involvement. We've mentioned a couple of things that struck me a little earlier -- the Groovy experience and JavaScript. There's a perception among many whom I've talked to that Web services are interesting, but there's a certain interest level, too, in moving into more dynamic languages, in using RESTful interfaces for getting out to clients, and in thinking about Web services in a broader context.
So, first let's go to Benson. Tell us why this JavaScript element was important to you, and where you think the mindset is in the field around Web services and the traditional WS-* specifications and standards.
Margulies: We went there originally because, while we build these components to go into the middle of things, we have to show them off to people who just want to see the naked functionality. So, we built a certain amount of demo functionality as Web applications, with things driven from Web pages. And the whole staff was saying, "Oh gosh, first we have to write a JSP page, then we have to write some beans, then we have to package it all up, and then we have to deploy it."
It got really tiresome. So we went looking for a much thinner technology for taking our core functionality and making it visible. It dawned on us that perhaps you could just call a Web service from a browser.
Historically, there's been a mentality in the broad community that you "couldn't possibly do that." "Those Web-service XML messages are so complicated." "Oh, we could never do that." And several of the dynamic-language SOAP or Web-service kits that have shown up from time to time in the community were really weak. They barely worked, because they targeted very old versions of the Web-service universe. As Web-service standards moved into stronger XML, they got left behind.
So, not knowing any better, we went ahead and built a code generator for JavaScript that could actually talk to a JAX-WS Web service, and I think that's an important direction for things to go. REST is a great thing. It allows very simple clients to get some data in and out of Web services, but people are building really big, complicated applications in dynamic languages these days, things like Ruby. For Web services to succeed in that environment, we need more of what we just did with the JavaScript. We need first-class citizenship for dynamic languages as clients, and even servers, of Web services.
Gardner: Let's take it over to Raven. Tell us, from the analyst perspective, what you see going on, mentality-wise and mindshare-wise, with Web-services specs. Do you think there's a sort of "match made in heaven" here between something like CXF and some of these dynamic languages?
Zachary: Well, look back at the history of CXF: it's the merger of two initiatives -- Celtix from IONA and XFire from Codehaus -- that spent the last few years in the incubator and came out of the incubator in April. Bringing together those two initiatives is very telling; you end up with a stronger project, built on the basis of two existing open-source efforts.
I like the fact that in CXF they are looking at a variety of protocols. It's not just one implementation of Web services. There's SOAP, REST, CORBA, other technologies, and then a number of transports, not just HTTP. The fact is that when you talk to enterprises, there's not a one-size-fits-all implementation for Web services. You need to really look at services, exposing them through a variety of technologies.
I like that approach. It really matches the needs of a larger variety of enterprise organizations, rather than being just one specific technology implementation of Web services. That's the approach you're going to see from open-source projects in this space. The ones that provide the greatest diversity of protocols and transports are going to do quite well.
Gardner: Dan, you've probably heard this. Some of the folks who are doing more development with dynamic languages and who are trying to move toward lightweight, webby applications have kind of an attitude going on about Web-services specs. Have you noticed that, and what do you think is up with that? Has it perhaps prevented some of them from looking at CXF and evaluating it?
Kulp: Yeah, in a way, it has prevented them, but Web services are pretty much everywhere now. So, even though they may not really agree with some of the Web-services ideas, for their own user base to grow, they have to start thinking about how to solve that problem, because the fact is that Web services are there.
Now, going forward, REST is obviously a big word. So, whatever toolkit you're looking at needs to be able to talk REST as well, and CXF is doing a bit there. If you go back the other way, there's CORBA stuff that needs to be talked to. With CXF, you don't just get the SOAP part of SOA; you get some of these additional technologies that can help you solve a wider range of problems. That's very important to certain people, especially if you're trying to grow a user base.
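To illustrate the REST side, CXF's JAX-RS frontend exposes a plain class through HTTP verbs using the standard javax.ws.rs annotations. This is a minimal, hypothetical sketch (the resource, path, and port are made up), and the annotation names follow the JAX-RS 1.0 spec, so the very earliest CXF releases may differ slightly.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

import org.apache.cxf.jaxrs.JAXRSServerFactoryBean;

// Hypothetical resource: GET /orders/{id} returns a plain-text summary.
@Path("/orders")
public class OrderResource {

    @GET
    @Path("/{id}")
    @Produces("text/plain")
    public String getOrder(@PathParam("id") String id) {
        return "Order " + id + ": status=SHIPPED";
    }

    public static void main(String[] args) throws InterruptedException {
        // Stand the resource up on an embedded HTTP endpoint; URL is illustrative.
        JAXRSServerFactoryBean sf = new JAXRSServerFactoryBean();
        sf.setResourceClasses(OrderResource.class);
        sf.setAddress("http://localhost:9001/");
        sf.create();

        // Keep the JVM alive; e.g., try curl http://localhost:9001/orders/42
        Thread.sleep(Long.MAX_VALUE);
    }
}
```

The same runtime that serves SOAP endpoints can serve this kind of resource, which is the "more than just SOAP" point Dan is making.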
Gardner: Alright, so you've obviously benefited, and the community has benefited, from what Benson and Basis Technology contributed with JavaScript. I assume you'll be interested in committers furthering that across more languages and more technologies?
Kulp: Oh, definitely. One of the nicest things about working in Apache projects is that it's an ongoing effort to try to keep the community growing and getting new ideas. As you get more people in, they have different viewpoints, different experiences, and all that can contribute to producing new ideas and new technologies, and making it easier to solve a different set of problems.
I always tell people that if they're looking in the CXF code and they hit a bug, it's great if we see them submit a patch for that, because that shows that they're actually digging in there. Eventually, they may say, "Okay, I kind of like how you did that, but wouldn't it be neat if you could just do this?" Then maybe they submit some ideas around that and become a committer. It's always a great thing to see that go forward.
Gardner: Let's go around the table one last time and try to predict the future when it comes to open-source Apache projects, this webby application environment, and the larger undertaking of SOA. Dan, any prophecies about what we might expect in the CXF community over, say, the next 12 months?
Kulp: Obviously, there's going to be this ongoing track of refinements and fixes. One of the nice things about the CXF community is that we're very committed to supporting our existing users and making sure that any bugs they encounter get fixed in a relatively timely manner. CXF has a very good history of doing very frequent patch releases to get fixes out there. So, that's an ongoing thing that should remain in place, and it's a benefit to the community and to the users.
Beyond that, there's a whole bunch of other ideas that we're working on and fleshing out. On the code-first stuff that I mentioned earlier, we have a bunch of other ideas about how to make code-first even better.
With certain toolkits, you kind of have to delve down into either configuration or WSDL documents to accomplish what you want. It would be nice if you could just embed some annotations in your code, or something like that, to accomplish some of that. We're going to be moving some of those ideas forward.
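As a sketch of what annotation-driven control over the contract can look like today, the standard JAX-WS annotations already let you shape the generated WSDL from the code rather than from separate configuration files; the class, namespace, and names below are hypothetical.

```java
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebResult;
import javax.jws.WebService;

// Annotations on the code determine what the generated WSDL looks like:
// service name, namespace, operation name, and parameter/result element names.
@WebService(serviceName = "QuoteService",
            targetNamespace = "http://example.com/quotes") // illustrative namespace
public class QuoteService {

    @WebMethod(operationName = "getQuote")
    @WebResult(name = "price")
    public double getQuote(@WebParam(name = "symbol") String symbol) {
        // Placeholder value; a real implementation would look up the symbol.
        return 42.0;
    }
}
```

The ideas Dan describes would extend this style so that less of the remaining configuration or WSDL editing is needed at all.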
There's also a whole bunch of Web-services standards, such as WS-I and WS-SecureConversation, that we don't support today but that we're going to be working on. As customers or users start demanding other WS technologies, we'll consider those as well. Obviously, if new people come along, they'll have other great ideas, and we would welcome those, too.
Gardner: Alright. Raven Zachary, what do you see as some of the trends we should expect in open-source infrastructure, particularly around SOA and Web-services interoperability, over, say, the next 12 months?
Zachary: Over the last decade or so, we've had a number of very successful open-source infrastructure initiatives. Certainly, the Apache Web Server, Linux as an operating system, and the application middleware stack -- Tomcat, Geronimo, JBoss -- have done very well. Open source has been a great opportunity for these technologies to advance, and we're still going to see commercial innovation in the space. But I think the majority of the software infrastructure will be based on open standards and open source over time, and then you'll see commercialization occur around the services side of that.
We're just starting to see the emergence of open-source Web services to a large extent, and I think you're going to see projects coming out of the Apache Software Foundation leading that charge, as other areas of the software infrastructure have already been filled out.
When you look at growth opportunities, back in 2001 the JBoss app server had a single-digit market share, compared with the leading technologies at the time, WebSphere from IBM and WebLogic from BEA. In the course of four years, that technology went from single-digit market share to being the number one deployed Java app server in the market. I don't think it takes much time for a technology like CXF to capture the market opportunity.
So, watch this space. I think this technology and other technologies like it, have a very bright future.
Gardner: I was impressed, and I wrote a blog post recently about this emergence from incubation. I got some really high numbers, which indicated some significant interest.
Last, let's go to Benson at Basis Technology, as both a user and a committer. How do you expect you'll be using something like CXF in your implementations over the next 12 months?
Margulies: Well, we're looking at a particular problem, which is coming up with a high-performance Web-services interface to some of our functions, where you put a document in and you get some results out. That's quite challenging, because documents are sort of heavyweight, large objects, and the toolkits have not been wildly helpful on this.
So, I've scratched out some of the necessary services on CXF, and I expect to be digging deeper. The other thing I'll put in as a comment, as a committer, is that one of the most important things we're going to see is a user-support community.
Long before you get to the point where someone is a possible committer on the project, there is the fact that users help each other in using the package, and that's a critical success factor. That community of people who read the mailing list just pitches in and helps the newbies find their way from one end to the other.
Gardner: Well, great. Thank you so much. I think we've caught up with CXF, and we have quite a bit to look forward to over the coming quarters and months. I want to thank our panel. We've been joined by Dan Kulp, principal engineer at IONA Technologies; Raven Zachary, open source research director for The 451 Group; and Benson Margulies, the CTO at Basis Technology. Thanks, everyone.
Kulp: You're very welcome.
Zachary: Thank you.
Margulies: Thank you.
Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You have been listening to a sponsored BriefingsDirect Podcast on Apache CXF. Thanks and come back next time.
Listen to the podcast. Sponsor: IONA Technologies.
Transcript of BriefingsDirect podcast on IONA Apache CXF and open-source frameworks. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.