Transcript of BriefingsDirect podcast with iTKO's John Michelsen on SOA testing and virtualization market trends. Sponsor: iTKO.
Dana Gardner: Hi, this is
Dana Gardner, principal analyst at
Interarbor Solutions, and you’re listening to
BriefingsDirect. Today, a sponsored podcast discussion about integration, validation, and testing for service-oriented architecture (SOA) and middleware -- particularly for business process management and for extending business processes more efficiently.
We’re going to be looking at how integration nowadays spans multiple dimensions. We’re talking about integrating technologies, about various formats, and about extending frameworks, vendors, application sets, and specific application suites. There are also now
enterprise service buses (ESBs) that are creating multiple types of integration across services -- from different hosting locations and from different technologies.
Not only that, we’re also dealing with traditional
enterprise application integration (EAI) issues and middleware. And, of course, there’s more talk about
cloud computing and
software as a service (SaaS).
The whole notion of integration in the enterprise has exploded in terms of complexity -- but that puts more onus and importance on validation, testing and understanding what’s actually going on within these integration activities.
To help us understand more about integration, middleware and SOA validation and testing, we’re joined by
John Michelsen, chief architect and founder of
iTKO. Welcome to the show, John.
John Michelsen: Thanks, Dana, good to be here.
Gardner: We’ve talked several times in the past about the integration in SOA, and what’s been going on. How do you look at integration now among business applications and middleware? Is it, in fact, more onerous and complex than ever, and how would you characterize the current state of the market?
Michelsen: It really is, and it’s for a number of reasons. Most of us can surmise that, as soon as we look at it. We tend not to turn anything off. Existing systems don’t go away, and yet we bring in additional [IT] systems and new things all the time. We’re changing technologies, because we're always looking for the faster, cheaper, more effective way. That's great, and yet today, IT becomes legacy faster than before. In fact, you and I had a conversation a few weeks ago about that.
So, it gets more complex over time. And yet, to get real value out of IT you’ve got to think not from the perspective of these systems, but from the perspective of the business’s processes, as they need to function. We have to do whatever we can, even unreasonable gyrations in the systems, in order to make IT reflect the way the business operates.
So there is a real mismatch here, and in order for us to accomplish value for the business, we’ve got to solve for it.
Gardner: Of course, at the same time, IT organizations are under pressure to reduce their complexity, reduce their maintenance and
total cost of ownership (TCO). They’re dealing with long-term activities such as
datacenter consolidation and application modernization. What is it that brings testing and validation into this mixture, in terms of end-to-end visibility?
Michelsen: Let’s say three or four systems are already interoperating in some way, and now you’ve become part of a larger organization. You’ve merged into a large organization, or you’ve taken into your organization something you've acquired. You add another three or four end points, and now you’ve got this explosion of additional permutations. The interactions are so many that, without good testing and validation, there’s almost no hope of getting real visibility and predictability out of these systems.
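As a rough illustration of that explosion, counting only pairwise connections (real interaction paths multiply faster still), the numbers climb quickly as end points are added:

```python
# Pairwise connections among n systems grow as n * (n - 1) / 2.
for n in (4, 8):
    print(f"{n} systems -> {n * (n - 1) // 2} possible pairwise connections")
# 4 systems -> 6 possible pairwise connections
# 8 systems -> 28 possible pairwise connections
```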
When things do fail, which unfortunately happens, you’ll have an extremely long recovery time without this test and validation capability, because knowing that something broke somewhere is the best you can do.
Gardner: I suppose we’re also looking now more at the lifecycle of these applications, based on what’s going on at design time. Folks who are using agile development principles and faster iterations of development are throwing services up fairly quickly -- and then changing them on a fairly regular basis. That also throws a monkey wrench into the rest of the services that are being integrated.
Michelsen: That’s right, and we’re doing that on purpose. We like the fact that we’re changing systems more frequently. We’re not doing that because we want chaos. We’re doing it because it’s helping the businesses get to market faster, achieving regulatory compliance faster, and all of those good things. We like the fact that we’re changing, and that we have more tightly componentized the architecture. We’re not changing huge applications, but we’re just changing pieces of applications -- all good things.
Yet, if my application is dependent upon your application, Dana, and you change it out from under me, your lifecycle impacts mine, and we have a “testable event,” even though I’m not in a test mode at the moment. What are we going to do about this? We've got to rethink the way we do service lifecycles, and rethink the way we do integration and deployment.
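One minimal way to picture handling such a “testable event” -- a hypothetical sketch, not LISA’s actual mechanism -- is a registry that flags downstream consumers for revalidation whenever a provider they depend on changes:

```python
# Hypothetical registry mapping each application to the services it consumes.
DEPENDENCIES = {
    "billing-app": ["orders-svc", "customer-svc"],
    "shipping-app": ["orders-svc"],
}

def consumers_to_revalidate(changed_service: str) -> list[str]:
    """A provider change is a 'testable event' for every consumer of it."""
    return [app for app, deps in DEPENDENCIES.items()
            if changed_service in deps]

# A new release of orders-svc triggers revalidation of both dependent apps.
print(consumers_to_revalidate("orders-svc"))  # ['billing-app', 'shipping-app']
```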
Gardner: There is, of course, a very high penalty if you don’t do this properly. If you don’t have that visibility, you lose agility, and the business outcomes suffer.
Michelsen: That’s right. And too often, we see customers who are caught in this dynamic of highly interconnected systems. The frequency of change and the amount of failure occurring because of those changes have such a negative effect that they artificially reduce their pace of change -- which is, of course, not the goal for the business -- in order to try to accomplish some level of stability.
This means that we’ve gone through all this effort to provide this highly adaptable and agile platform and we’re doing all this work to get agile and integrated, but we have to then undo the benefit in order to accomplish stability.
Gardner: One of the basic principles of SOA is that you get benefit as a result of the “whole being greater than the sum of the parts,” but many of the parts come from specific vendors and/or open-source projects. They have management capabilities and insights built into them specifically. Yet when you rise up a bit more holistically, that’s where the issue comes in of how to get visibility across multiple systems.
Explain to us how you got started on this journey, and where your background and history comes in terms of addressing that higher abstraction of visibility.
Michelsen: Right, that’s a good point, because if the world were as simple as we wanted it to be, we could have one vendor produce that system that is completely self-contained, self-managed, very visible or very "monitorable," if you will. That’s great, but that becomes one box of the dozens on the white board. The challenge is that not every box comes from that same vendor.
So we end up in this challenge where we’ve got to get that same kind of visibility and monitoring management across all of the boxes. Yet that’s not something that you just buy and that you get out of the box.
This is exactly what pushed me into this space throughout the 1990s. I had a company, prior to founding this one, that built mission-critical applications for lots of large companies, including airlines and financial-services companies, logistics, even database engines, and things like this.
The great thing was that I was able to put my little team together to build really cool stuff and deploy it really fast into an organization. They loved it. The challenge was that I was doing this in a very disruptive way to the rest of the IT organization. I'd come, bring in this new capability, and integrate it into the rest of the applications.
Well, in doing so, I’m actually causing this very same dynamic that we’re talking about now -- where all of a sudden my new thing, my new technology, integrated into a bunch of legacy, is causing disruption across all kinds of systems. We just didn’t have a sense for how to do this.
So I had to learn how to do this, how to transform these organizations into integration-based thinking, and put in test-and-validation best practices. That’s what caused us to end up building
what we now call LISA.
Gardner: Unfortunately, when a lot of organizations face that disruption, their first instinct is probably just to put up a wall and say, “Okay, let’s sequester or isolate this set of issues.” But that, of course, aborts this business-process level of innovation and value.
Michelsen: Exactly, and here's a classic example. A number of the types of systems that we built in the late 1990s were the e-commerce applications that were customer facing. The companies said, “I just don’t want to hear that this system can’t talk to that system. I want a Web-based presence that’s brain-dead simple, and that does things the way a customer wants to be able to do them. You’re going to interconnect all those back ends in order to get that to work. … You just do it for me. And if you won’t do it, I’m going to go find a vendor outside that will.”
The challenge is, no matter how it ends up there, now we've got to reckon with it. Frankly, even though those are sometimes difficult conversations the business is having with IT, the business needs those things, because the company that does it gains market share and increases the scope of their growth cycle. That obviously is something that every IT organization wants, because that leads to a bigger budget and a better company, and the success that we want to see.
Gardner: Now, we've certainly established that there is a problem, and that’s been evident for some time. We’ve underscored the fact that we want to get visibility, and offer new elements into an integrated environment, to take advantage of the technologies that are coming online, but not be in disruptive mode, or we certainly want to reduce the risk.
So we know there’s a problem, we know what we want to do. Now, how do you approach this technically, when you’re dealing with so many different vendors, so many variables?
Michelsen: Well, I’m the founder of a product company, and yet you don’t start by going and buying some software, installing it, and thinking you’re done. Let’s start with thinking around a new set of best practices for what this needs to look like. We frequently leverage a framework we call "the 3 Cs" in order to accomplish this -- Complete, Collaborative, and Continuous.
In a nutshell, we’ve got to be able to touch, from the testing point of view, all these different technologies. We have to be able to create some collaboration across all these teams, and then we have to do continuous validation of these business processes over time, even when we are not in lifecycles.
It’s a very high-level, broad-strokes view of our solutions but, drilling down into the details with the customer, we can show them how these 3 Cs establish that predictable, highly efficient, high-visibility way to do these kinds of applications.
Gardner: There must be a secret sauce? There must be technology in addition to the vision and the methodological approach?
Michelsen: Right. Getting that testability across all these technologies, collaboration among all the teams and, of course, continuous validation takes tooling and technology. We provide that, which is great. And, from a professional point of view, I like the fact that the way we message to the market is: "These are the ways you’ve got to go about doing it." Once you see that that is an appropriate approach for you, then you become a great candidate for using our products.
But let’s talk about making sure that this is right for you. Then we’ll talk about our product being useful, because that really is the way the things should work. I can’t tell you how many times I’ve seen a customer who has said, “Well, we've run out and bought this ESB and now we’re trying to figure out how to use it.” I've said, “Whoa! You first should have figured out you needed it, and in what ways you would use it that would cause you to then buy it.”
It’s the other way around sometimes. That’s why we’ll start with the best practices, even though we’re not a large services firm. Then, we’ll come in with product, as we see the approach get defined.
Gardner: Are there any specific types of enterprise companies -- whether in a particular maturity around IT or suffering from certain ills or ailments -- that pique your interest to say, “Well, this is a perfect candidate for our solution and product set?” What are some of the indicators that a company is ready for this level of validation and testing?
Michelsen: There are a couple. First, the large-scale, top-down SOA initiatives clearly need this, because this is the perfect example of … interconnecting things, wrapping legacy systems in modernization, creating business-process modeling environments, increasing the pace of change, and distributed development across many different teams. SOA does all of those things for you, and certainly scratches every one of those itches that we’ve been talking about.
The other is when you go into a large integration initiative. There are a lot of partner solutions -- from vendors and platforms like
TIBCO,
WebMethods,
Oracle Fusion and
SAP NetWeaver -- and forgive me for not naming all of our friends. When you’re going down this kind of path, you’re going down a path to interconnect your systems in these same kinds of ways. Call it service orientation or call it a large integration effort; either way, the outcome from a systems point of view is the same.
Then, traditionally, by the time a business has been large for many years, they just have this enormous amount of technology. A classic example is a large financial institution that does fixed-asset trades. In order for one trade to be placed, it takes Web services and EJBs, from a Java Swing-based application into CORBA, into messaging, into C code, into two different databases, and out the other end of a Web application.
All of that technology, integrated together, is what the business thinks of as the app. Of course, that takes hundreds of people across many different teams -- U.S., Europe, and Asia -- from an IT point of view. But all of that technology together is the app. So that’s your reality. That’s where we really can help, and where these best practices really get to work.
Gardner: So, when you enter these organizations where there’s a pretty powerful need, what is it that they’re getting in terms of value and impact? How do they use these tools? Then, we’ll ask a little bit about some examples of what the outcomes have been.
Michelsen: What they’re doing is
adopting these best practices at a team level, so that each of these individual components gets its own tests and validation. That helps them establish some visibility and predictability. It’s just good, old-fashioned automated test coverage at the component level.
As these components start to orchestrate with each other in order to accomplish this higher-level objective -- where this component becomes a part of a larger solution -- then there’s a validation aspect to it. The application that is causing this component-to-component orchestration has a validation challenge to make sure that things continue to work over time, even in the face of change.
As these components come together, there’s a validation layer that’s put in place. At iTKO, we even have a
virtualization capability that allows you to do these kinds of things in a very agile way and without some of the constraints that you typically have. At the very end of the process, we are near the glass, if you will, of the user screen. Then you’ve got business-process level validation or testing across the whole thing. So think of it as, “Here’s a business process model that I’ve modeled in a
business process modeling (BPM) tool of choice."
The complement of that is one or more tests or validations of that particular business process, where I invoke the process and verify my technical outcomes. So, if placing an order means doing this, this, and this in these systems, you do that with a BPM tool. To validate that the business process functions as expected, you’ll invoke that business process with our product LISA and then make sure all of those expected outcomes occurred.
For example, the customer database is going to have an update in it, and the order management system is going to create a new order. The account activity system -- which might be completely independent -- the inventory system, and the shipping system: all of these are going to have their expected outcomes verified in order for us to know that the system works as expected.
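As a concrete illustration, here is a minimal sketch in plain Python of what such a business-process-level validation might look like, assuming the process is exposed over HTTP; the endpoints, payloads, and field names are hypothetical stand-ins, not LISA’s actual interface:

```python
import requests  # assumes an HTTP-accessible environment under test

BASE = "http://example.com"  # hypothetical system under test

def place_order(customer_id: str, sku: str, qty: int) -> str:
    """Invoke the business process the way a real consumer would."""
    resp = requests.post(f"{BASE}/orders",
                         json={"customer": customer_id, "sku": sku, "qty": qty})
    resp.raise_for_status()
    return resp.json()["orderId"]

def test_placing_an_order_updates_every_downstream_system():
    order_id = place_order("customer-42", "WIDGET-7", 1)

    # An order number on the screen is not enough: verify the expected
    # outcome in each downstream system, even the independent ones.
    assert requests.get(f"{BASE}/orders/{order_id}").json()["status"] == "CREATED"
    activity = requests.get(f"{BASE}/customers/customer-42/activity").json()
    assert order_id in activity["orders"]
    assert requests.get(f"{BASE}/inventory/WIDGET-7").json()["reserved"] >= 1
    assert order_id in requests.get(f"{BASE}/shipping/pending").json()["orderIds"]
```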
Gardner: This really sounds like a metaview of the integration, paths, occurrences, and requirements. It almost sounds as if you’re moving to what we used to refer to, and still do, as
application lifecycle management (ALM). But, it sounds like you’re really approaching this additionally as “integration lifecycle management.”
Michelsen: That’s a great point. In fact, we’ve heard people say, “Wow, it sounds a little bit like also
business activity monitoring (BAM), where you’re basically chasing all these transactions through the production system and making sure they are doing their thing.” Certainly, it's a valid point. But let’s be really clear. We must be capable of doing this as a part of our development cycles.
We can’t build stuff, throw it over the wall into the production system to see if it works, and then have a BAM-type tool tell us -- once it gets into the statistics -- "By the way, they’re not actually catching orders. You’re not actually updating inventory or your account. Your customer accounts aren’t actually seeing an increase in their credit balance when orders are being placed."
That’s not when you find out it doesn’t work, right? And the challenge is that’s what we do today. We largely complete these applications. We go into some user-acceptance test mode, where we have people see if they can find any problems with this enormous amount of software -- millions of lines of code. We give them a few weeks to see if they can find any bugs, and then we go to production.
We really can’t let that happen any more. These apps are too big, their connections are too many, and the number of possible testable items is way too great. And, of course, tomorrow we invalidate all the work we just did in that human labor, when something changes somewhere.
So this is why, as a part of lifecycles, we have to do this kind of activity. In doing so, we drive toward value; we get something for having done our work.
Gardner: Clearly, from my observations, there’s a struggle now under way in the market to find better ways of relating -- finding the relationships and dependencies between the design-time activities and the run-time activities -- and then creating more of a virtual set of feedback loops that allow this to continue without hand-offs, or waiting on a red-light/green-light verdict. Tell me how you think LISA provides a bridge, or maybe a catalyst, to increased feedback between design time and run time, particularly in an SOA environment.
Michelsen: Great question, and I’m glad that you’re seeing that as well, Dana, because we think it's an indication that things are maturing. When we see our customers asking us, “How do I essentially do that second C of yours, collaboration? How do I better collaborate?” … we know that they’re finally seeing the pain of a siloed lifecycle, where testing and operations are disjointed activities. Development and test don’t talk to each other, or with project management. And the business analysts don’t really even know the others.
We know that when we’re hearing questions around collaboration, people are becoming aware that they really need to accomplish it. This is great. One specific way our products can help is by being a test capability that every one of the teams I just mentioned can use to do its own part of the testing effort. Developers have a test responsibility. Certainly,
quality assurance (QA) has one. Operations even has one, from a functional monitoring point of view.
The business analysts have this whole "validate the business process" activity they need to accomplish. Everyone has their part to play, and if we can provide a tool that helps all of them do their part with the same product, there’s an enormous amount of efficiency. More important, there’s a much more highly automated back channel through this lifecycle.
If a business process is not functioning as expected, that failing test case is consumable all the way back to that individual developer who can see the context in which my component is being exercised. [And that comes from seeing] the input and output, seeing the expected outcome, and seeing the unexpected actual outcome. Then I get a really good awareness of what my component is supposed to do in the context of the business process.
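A minimal sketch of the kind of context such a failure artifact might carry -- hypothetical field names, not LISA’s actual format:

```python
from dataclasses import dataclass

@dataclass
class FailedStepReport:
    """Enough context for a developer to replay how a component was exercised."""
    business_process: str  # the higher-level process being validated
    component: str         # the component whose step failed
    request: dict          # the input the component actually received
    expected: dict         # the outcome the test expected
    actual: dict           # the unexpected outcome that was observed
```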
When we have this common tooling across the board -- instead of one way of doing it for development, one way of doing it for QA, one way for the business analyst and for operations and everything -- we get much greater collaboration.
One other important point here is that we also have an opportunity to introduce this continuous validation framework, where once we start these integration labs, those components are being delivered into that integration lab, and then into pre-production, performance labs and production. We need an infrastructure for all of this continuous validation that properly notifies whoever should be notified when failures occur.
So
our application has lots of good technology for being able to do this as well.
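The shape of such a continuous-validation infrastructure might look like the following minimal, hypothetical sketch -- a scheduled loop over registered checks that notifies the owning team on failure, standing in for what a real deployment would provide:

```python
import time
import traceback
from typing import Callable

# Hypothetical registry: check name -> (callable returning True on success,
# owner to notify when it fails).
CHECKS: dict[str, tuple[Callable[[], bool], str]] = {}

def notify(owner: str, check_name: str, detail: str) -> None:
    # Stand-in for a real email/pager/dashboard integration.
    print(f"ALERT to {owner}: continuous validation '{check_name}' failed: {detail}")

def run_continuous_validation(interval_seconds: int = 300) -> None:
    """Run every registered check on a schedule; notify owners on failure."""
    while True:
        for name, (check, owner) in CHECKS.items():
            try:
                if not check():
                    notify(owner, name, "validation returned failure")
            except Exception:
                notify(owner, name, traceback.format_exc())
        time.sleep(interval_seconds)
```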
Gardner: Well, of course, the proof of the pudding is in the eating. Can you give us some examples of organizations that have employed these methods, and then some of these tools? Start to think in terms of the 3 Cs that you’ve outlined. What sorts of results or paybacks are there in terms of
return on investment (ROI) and TCO? What validates this?
Michelsen: A great example of this would be
Lenovo, the ThinkPad guys, where they went through a major next generation of all of their customer- and partner-facing order management systems. This is
www.lenovo.com, and a number of the systems behind it. They went with a new vendor to bring in a new application and interconnected into all the existing back-end and legacy systems. It's a classic example, as I said a few minutes ago, of when this kind of activity becomes important.
Lenovo realized from their past experiences that they wanted to get better at this kind of activity, because they didn’t want what had happened to them in the past, where application failures underneath the screens would degrade the customer experience -- but you couldn’t even tell at the website.
They were not capturing the order, even though an order number was showing up on the Web page, and things like this. They realized this challenge was too great for them, and they brought our solution in, in order to validate all these individual components and then validate at the user’s business-process level.
They wanted to validate what it means to configure
ThinkPads, to price them, to do all of the bundling, to make sure that I can place orders, check orders, verify shipping, and do all these different things. That takes a pretty significant amount of visibility. Of course, our product has some capability to give you that visibility, because you’re going to need it.
So you have this kind of capability, and Lenovo was able to move away from, "I hope this thing continues to run." What was very possible in the past was that the customer update occurred, but the order placement didn’t -- a partial commit.
Instead of that, they now have that assurance on a literally continuous basis. From seven different places all over the world, we’re continuously validating the performance and functional integrity of the entire system -- both at the component level and at what I call the orchestration level.
In doing so, they have a whole lot more confidence that the thing actually performs the way they expect it to.
Gardner: There’s no question, John, that the organizations that are advancing, that are deeply into integration issues, are looking for this business process management value, at the orchestration level.
They've moved up an abstraction in terms of the approaches, and the accomplishments of what their IT departments and systems can deliver. But, of course, any time we move up an abstraction technologically in the functions of IT, that requires that the company go up a level in validation, testing, and quality.
It makes sense now that you’re going to see a growing market. Is there any sense that you can give us from your business as to how these things are growing now? Are people really getting to that level where they want to bring together a lifecycle approach?
Michelsen: Well, hopefully the Lenovo example means yes. By the way, a partner company of ours named i2 -- they see this, too. We all know there’s an amazing amount of effort in doing large-scale implementations of either packaged applications or large-scale custom applications. I think we’ve done this long enough to realize that this has to be part of the way to do it.
I’m seeing that more and more. As a consequence, we are able to provide value to many customers. It’s just been thrilling. We brought our product to market in early 2003 with a single customer or two. If our growth rate is an indication, the market has finally realized that -- as an IT discipline -- we have to get this right, which is terrific. If you think about it, the evangelist in all of us wants to get this right, or wants to do the right thing. I’m seeing it more and more, and that’s certainly terrific.
Gardner: Great. Well, we've been discussing the issues around integration, middleware, and SOA, as well as the need to abstract value up to the integrations and into the business processes. We have talked about how these elements relate to one another and, of course, explained the need for greater visibility, validation, and testing in these environments.
We’ve been talking about LISA and iTKO with John Michelsen, the chief architect and founder of iTKO. I appreciate your input, and we look forward to learning more about how this market evolves. It is an exciting time.
Michelsen: Thanks a lot, Dana. I appreciate the time.
Gardner: This is Dana Gardner, principal analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.
Transcript of BriefingsDirect podcast with iTKO's John Michelsen on validation and testing in application integrations. Copyright Interarbor Solutions, LLC, 2005-2008. All rights reserved.