Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 38 on how businesses should react to the current economic realities and prepare themselves to emerge stronger.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.
Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.
Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 38. I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions.
This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of ActiveVOS, a visual orchestration system. We also come to you through the support of TIBCO Software.
Our topic this week of March 9, 2009 centers on the economics of IT. It's clear that the financial crisis has spawned a yawning global recession on a scale and at a velocity unlike anything seen since the 1930s. Yet, our businesses and our economy function much differently than they did in the 1930s. The large and intrinsic role of information technology (IT) is but one of the major differences. In fact, we haven't had a downturn like this since the advent of widespread IT.
So, how does IT adapt and adjust to the downturn? This is all virgin territory. Is IT to play a defensive role in helping to slash costs and reduce its own financial burden on the enterprise, as well as to play a role in propelling productivity forward despite these wrenching contractions?
Or, does IT help most on the offensive, in transforming businesses, or playing a larger role in support of business goals, with the larger IT budget and responsibility to go along with that? Does IT lead the way on how companies remake themselves and reinvent themselves during and after such an economic tumult?
We're asking our panel today to list the top five ways that IT can help reduce costs, while retaining full -- or perhaps even additional -- business functionality. These are the top five best ways that IT can help play economic defense.
After we talk about defense, we're going to talk about offense. How does IT play the agent of change in how businesses operate and how they provide high value with high productivity to their entirely new customer base?
Join me in welcoming our analyst guests this week. Joe McKendrick, independent IT analyst and prolific blogger on service-oriented architecture (SOA), business intelligence (BI), and other major IT topics. Welcome back, Joe.
Joe McKendrick: Thanks, Dana. Glad to be here.
Gardner: We're also joined by Brad Shimmin, principal analyst at Current Analysis.
Brad Shimmin: Hello, Dana.
Gardner: Also, JP Morgenthal, independent analyst and IT consultant. Hi, JP.
JP Morgenthal: Hi. Thanks.
Gardner: We're also joined by Dave Kelly, founder and president of Upside Research, who joins us for the first time. Welcome, Dave.
Dave Kelly: Hey, Dana. Thanks for having me. It's great to be here.
Gardner: Let's go first to Joe McKendrick at the top of the list. Joe, let's hear your five ways that IT can help cut costs in enterprises during our tough times.
Previous downturns
McKendrick: First of all, I just want to comment. You said this is virgin territory for IT in terms of managing through downturns. We've seen some fairly significant downturns in our economy in the past -- the 1981-82 period, the 1990-91 period, and notably 2001-2002. Those were all major turning points for IT, and we can get into that later. I'll give you my five recommendations, and they're all things that have been buzzing around the industry.
First, SOA is a solution, and I think SOA is alive and well and thriving. SOA promotes reuse and developer productivity. SOA also provides a way to avoid major upgrades or the requirement for major initiatives in enterprise systems such as enterprise resource planning (ERP).
Second, virtualize all you can. Virtualization offers a method of consolidation. You can take all those large server rooms -- and some companies have thousands of servers -- and consolidate into more centralized systems. Virtualization paves the path to do that.
Third, cloud computing, of course. Cloud offers a way to tap into new sources of IT processing, applications, or IT data and allows you to pay for those new capabilities incrementally rather than making large capital investments.
The fourth is open source -- look to open-source solutions. There are open-source solutions all the way up the IT stack, from the operating system to middleware to applications. Open source provides a way to, if not replace your more commercial proprietary systems, then at least to implement new initiatives and move to new initiatives under the budget radar, so to speak. You don't need to get budget approval to establish or begin new initiatives.
Lastly, look at the Enterprise 2.0 space. Enterprise 2.0 offers an incredible way to collaborate and to tap into the intellectual capital throughout your organization. It offers a way to bring a lot of thinking and a lot of brainpower together to tackle problems.
Gardner: It sounds like you feel that IT has a lot of the tools necessary and a lot of the process change necessary. It's simply a matter of execution at this point.
McKendrick: Absolutely. All the ingredients are there. I've said before in this podcast that I know of startup companies that have invested less than $100 in IT infrastructure, thanks to initiatives such as cloud computing and open source. Other methodologies weigh in there as well.
Gardner: All right. Let's go to bachelor number two, Brad Shimmin. If you're dating IT efficiency, how are you going to get them off the mark?
Provide a wide pasture
Shimmin: Thanks, Dana. It's funny. Everything I have in my little list here really riffs off of the excellent underlying fundamentals Joe was talking about there. I hope what I'm going to give you guys are some not-too-obvious uses of the stuff that Joe's been talking about.
My first recommendation is to give your users a really wide pasture. There is an old saying that if you want to mend fewer fences, have a bigger field for your cattle to live in. I really believe that's true for IT.
You can see that in some experiments that have been going on with the whole BYOC -- Bring Your Own Computer -- programs that folks like Citrix and Microsoft have been engaging in. They give users a stipend to pick up their own notebook computer, bring that to work, and use a virtualized instance of their work environment on top of that computer.
That means IT no longer has to manage the device itself. They now just manage the virtual image that resides on that machine. So, the trend we've been seeing with mobile devices, where users buy their own and use them inside IT, will extend to desktops and laptops.
I'd just like to add that IT should forget about transparency and strive for IT participation. The days of the ivory tower with top-down knowledge held within secret golden keys behind locked doors within IT are gone. You have to have some faith in your users to manage their own environments and to take care of their own equipment, something they're more likely to do when it's their own and not the company's.
Gardner: So, a bit more like the bazaar, when it comes to how IT implements and operates?
Shimmin: Absolutely. You can't run IT as a top-down autocracy anymore. That doesn't encourage efficiency.
The second thing I'd suggest is don't build large software anymore. Buy small software. As Joe mentioned, SOA is well entrenched now within both the enterprise and IT. Right now, you can buy either software as a service (SaaS) or on-premise software that is open enough to connect with and work with other software packages. No longer do you need to build an entire monolithic application from the ground up.
A perfect example of that is something like PayPal. This is a service, but there are on-premise renditions of this kind of idea that allow you to build up a monolithic application without having to build the whole thing yourself. Using pre-built, smaller packages that are point solutions, like PayPal, lets you take advantage of their economies of scale and trade on the credibility they've developed, something that's especially good for consumer-facing apps.
The third thing I'd suggest -- and this is in addition to that -- build inside but host outside. You shouldn't be afraid to build your own software, but you should be looking to host that software elsewhere.
A game changer
We've all seen both enterprises and enterprise IT vendors -- independent software vendors (ISVs) themselves like IBM, Oracle, and Microsoft, in particular -- leaping toward putting their software platforms on top of third-party cloud providers like Amazon EC2. That is the biggest game changer in everything we've been talking about here to date.
There's a vendor -- I can't say who it is, because they didn't say I could talk about it -- that offers both cloud and on-premise collaboration software. They have their own data centers, and they've been moving toward shutting those down and moving into Amazon's EC2 environment. They went from multi-thousand-dollar bills every month to literally the kind of bill you would get for cellphone service from Verizon or AT&T. The savings they saw were staggering.
Gardner: A couple of hundred bucks a month?
Shimmin: Exactly. It's all because of the economies of scale in that shared environment.
The fourth thing I would want to say is "kill your email." Remember the "Kill your TV" bumper stickers we saw in the '90s? That should apply to email. It's seen its day and it really needs to go away. For every gigabyte you store, I think it's almost $500 per user per year, which is a lot of money.
If you're able to, cut that back by encouraging people to use alternatives to email, such as social-networking tools. We're talking about IM, chat, and project group-sharing spaces, using tools like Yammer inside the enterprise, SharePoint obviously, Clearspace -- which has just been renamed SBS, for some strange reason -- and Google Apps. That kind of stuff cuts down on email.
I don't know if you guys saw this, but in January, IBM fixed Lotus Notes so it no longer stores duplicate emails. That cut down the amount of storage their users required by something like 70 percent, which is staggering.
Gardner: So what was that, eliminating the multiple versions of any email, right?
Shimmin: It was the attachments, yes. If there was a duplicate attachment, they stored one copy for each note, instead of saying, "Hey, it's the same file. Let's just store one instance of it in a database." Fixing stuff like that is great, but it points to how big a problem it is to have everything running around in email.
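The single-instance idea Brad describes can be sketched simply: key each attachment by a hash of its contents and store the bytes only once, with each message keeping just the key. This is an illustrative toy, not IBM's actual implementation; the class and figures are hypothetical.

```python
import hashlib


class AttachmentStore:
    """Toy single-instance store: identical attachments share one stored copy."""

    def __init__(self):
        self._blobs = {}  # content hash -> attachment bytes
        self._refs = {}   # content hash -> how many messages reference it

    def add(self, data: bytes) -> str:
        key = hashlib.sha256(data).hexdigest()
        if key not in self._blobs:
            self._blobs[key] = data  # store the bytes only the first time
        self._refs[key] = self._refs.get(key, 0) + 1
        return key  # each message keeps just this key, not the bytes

    def bytes_stored(self) -> int:
        return sum(len(b) for b in self._blobs.values())


store = AttachmentStore()
report = b"Q1 results..." * 1000  # a 13 KB attachment
for _ in range(50):               # the same file mailed to 50 people
    store.add(report)

print(store.bytes_stored())       # one copy's worth of bytes, not fifty
```

Real mail stores layer reference counting, garbage collection, and database transactions on top of this, but the core saving is exactly this de-duplication.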
Gardner: You might as well just be throwing coal up into the sky, right?
Shimmin: Exactly. To add to that, we should really turn off our printers. By employing software like wikis, blogs, and online collaboration tools from companies like Google and Zoho, we can get away from the notion of having to print everything. As we know, a typical organization kills 143 trees a year -- I think that was the number I heard -- which is a staggering amount of waste, and there's a lot of cost to that.
Gardner: Perhaps the new bumper sticker should be "Email killed."
Open, but not severe
Shimmin: Printing and email killed, right. My last suggestion would be, as Joe was saying, to really go open, but we don't have to be severe about it. We don't have to junk Windows to leverage some cost savings. The biggest place you can see savings right now is by getting off of software with a heavy license burden. I'm going to pick on Office right now.
Gardner: How many others do you have to pick from?
Shimmin: It's the big, fat cow that needs to be sacrificed. Paying $500-800 a year per user for that stuff is quite a bit, and the hardware cost is staggering as well, especially if you are upgrading everyone to Vista. If you leave everyone on XP and adopt open-source solutions like OpenOffice and StarOffice, that will go a long, long way toward saving money.
The reason I'm down on printing is that the time is gone when we needed really professional, beautiful-looking documents that required a tremendous amount of formatting, with everything perfect within Microsoft Word, for example. What counts now is the information. It's the same for the 4,000-odd features in Excel. I'm sure none of us here has ever explored even a tenth of those.
Gardner: Maybe we should combine some of the things you and Joe have said. We should go to users and say, "You can use any word processor you want, but we're not going to give you any money," and see what they come up with.
Shimmin: You're going to find some users who require those 4,000 features and you are going to need to pay for that software, but giving everyone a mallet to crack a walnut is insane.
Gardner: I want to go back quickly to your email thing. Are you saying that we should stop using email for communication, or that we should just bring email out to a cloud provider and do away with the on-premises client-server email -- or both?
Shimmin: Thanks for asking that. Look at software or services like Microsoft Business Productivity Online Suite (BPOS). You can get Exchange Online now for something like $5 per month per user. That's pretty affordable. So, if you're going to use email, that's the way to go. From a company like Microsoft, with the 99.9 percent uptime they're offering, you're talking about the same, or probably better, uptime than you're getting internally. It's not five 9s, but it's probably a lot better than what we have internally.
So, yeah. You should definitely explore that, if you're going to use email. In addition to that, if you can cut down on the importance of email within the organization by adopting software that allows users to move away from it as their central point of communication, that is going to save a lot of money as well.
Gardner: Or, they could just Twitter to each other and then put all the onus on the cost of maintaining all those Twitter servers.
Shimmin: Nobody wants to pay for that, though.
Gardner: Let's go to JP Morgenthal. I'm expecting "shock and awe" from you, JP. What's your top five?
Morgenthal: Shock and awe, with regard to my compadres' answers?
Gardner: Oh, yeah. Usually you have a good contrarian streak.
The devastation of open source
Morgenthal: I was biting my tongue, especially on the open source. I just went through an analysis where the answer was to go JBoss on Linux with Apache. Even in that, I gave my alternative viewpoint that, from a cost perspective, you can't compare that stack to running WebSphere or WebLogic on Windows. Economically, if you compare the two, it doesn't make sense. I'm still irked by the devastation that open source has created upon the software industry as a whole.
Gardner: All right. We can't just let that go. What do you mean, quickly?
Morgenthal: Actually, I blogged on this. Here's my analogy. Imagine if, tomorrow, Habitat for Humanity all of a sudden decided that it's going to build houses for wealthy people and then make money by charging for maintenance and upkeep on the house. That's open source. The industry has been sacrificed for the ego and needs of a few, against the whole of the industry and what it was creating.
Gardner: Okay. This is worth an entire episode. So, we're going to come back to this issue about open source. Is it good? Is it bad? Does it save money or not? But, for this show, let's stick to the top five ways to save IT, and we'll come back and do a whole show on open source.
Morgenthal: I'd like to, but I've got to give credit. I can't deny the point that as a whole, for businesses, again, those wealthy homeowners who are getting that Habitat for Humanity home, hey, it's a great deal. If somebody wants to dedicate their time to build you a free home, go for it, and then you can hire anybody you like to maintain that home. It's a gift from the gods.
Gardner: What are your top five?
Morgenthal: Vendor management is first. One thing I've been seeing a lot is how badly companies mismanage their vendor relationships. There is a lot of money in there, especially on the IT side -- telecom, software, and hardware. There's a lot of play, especially in an industry like this.
Get control over your vendor relationships. Stop letting these vendors run around, convincing end-users throughout your business that they should move in a particular direction or use a particular product. Force them to go through a set of gatekeepers and manage the access and the information they're bringing into the business. Make sure that it goes through an enterprise architecture group.
Gardner: It's a buyers market. You can negotiate. In fact, you can call them in and just say, "We want to scrap the old license and start new." Right?
Morgenthal: Well, there are legal boundaries to that, but certainly if they expect to have a long-term relationship with you through this downturn, they've got to play some ball.
With regard to outsourcing noncritical functions, I'll give you a great example where we combined an outsourced noncritical function with vendor management in telecom. Many companies have negotiated and managed their own Internet and telco communications facilities and capability. Today, there are so many more options for that.
It's a very complex area to navigate. You should either hire a consultant who is an expert in the area to help you negotiate it, or look at a scenario where you take only as much fixed bandwidth as you use on an average basis and, when you need excess bandwidth, go to the cloud for it.
Gardner: Okay, number three.
Analyze utilization
Morgenthal: Utilization analysis. Many organizations don't have a good grasp on how much of their CPU and network bandwidth is actually utilized. There's a lot of open space in that utilization, which allows for compression. By compressing that utilization, you get back some of the overhead associated with it. That's a direct cost savings.
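JP's point can be illustrated with a toy consolidation calculation: measure average utilization per host, then see how few hosts the same load would fit on at a sensible ceiling. The server names, percentages, and the 60 percent target below are entirely hypothetical.

```python
# Hypothetical monthly average CPU utilization, percent, per physical server.
samples = {
    "app01": 12, "app02": 8, "db01": 35, "web01": 15,
    "web02": 11, "batch01": 22, "file01": 6, "mail01": 18,
}

target = 60  # consolidate until hosts run ~60% CPU, leaving headroom for spikes

# Total demand, measured in "percent of one host" units.
total_load = sum(samples.values())
avg_util = total_load / len(samples)
hosts_needed = -(-total_load // target)  # ceiling division

print(f"Average utilization: {avg_util:.1f}%")
print(f"{len(samples)} hosts could compress to roughly {hosts_needed}")
```

Eight hosts idling in the teens compress to about three at the target ceiling; the retired hardware, power, and maintenance are the direct savings JP describes. Real capacity planning would use percentile peaks, not averages, but the shape of the math is the same.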
Another area that has been a big one for me is data quality. I've been trying to tell corporations for years that this is coming. When things are good, they've been able to push off the poor data quality issue, because they can rectify the situation by throwing bodies at it. But now they can't afford those bodies anymore. So, now they have bad data and they don't have the bodies to fix up the data on the front end.
They're caught between a rock and a hard place. If I were them, I'd get my house in order, invest the money, set it aside, get the data quality up, and allow myself to operate more effectively, without requiring extra labor on the front end to clean up the data on the back end.
Finally, it's a great time to explore desktop alternatives, because Windows and the desktop have been a de-facto standard and a great way to go -- when things are good. When you're trying to cut another half million, a million, or two million out of your budget, all those licenses and all that desktop support start to add up. They're the nickels and dimes that add up.
By looking at desktop alternatives, you may be able to find some solutions. A significant part of your workforce doesn't need all that capability and power. You can then look at different solutions, like lightweight Linux or Ubuntu-type environments that provide just Web browsing and email, and maybe OpenOffice for some lightweight word processing. For a portion of your user base, that's all they need.
Gardner: Okay. Was that four or five?
Morgenthal: That's five -- vendor management, outsourcing, utilization analysis, data quality, and desktop alternatives.
Gardner: Excellent. Okay. Now, going to you, Dave Kelly, what's your top five?
Optimize, optimize, optimize
Kelly: Thanks, Dana, and it's great to come at the end. I don't always agree with JP, but I liked a lot of the points that he just made and they complement some of the ones that I am going to make, as well as the comments that Brad and Joe made.
My first point would be, optimize, optimize, optimize. There's no doubt that all the organizations, both on the business side and the IT side, are going to be doing more with less. I think we're going to be doing more with less than we have ever seen before, but that makes it a great opportunity to step back and look at specific systems and business processes.
You can start at the high level and go through business process management (BPM) type optimization and look at the business processes, but you can also just step it down a level. This addresses what some of the other analysts have said here. If you look at things like data-center optimization, there are tremendous opportunities for organizations to go into their existing data centers and IT processes to save money and defer capital investment.
You're talking about things like increasing the utilization of your storage systems. Many organizations run anywhere from 40 to 50 percent of storage utilization. If you can increase that and push off new investments in additional storage, you've got savings right there. The growth rate in storage over the past three to five years has been tremendous. This is a great opportunity for organizations to save money.
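The deferral math behind Dave's storage point is simple enough to sketch. All of the figures below are hypothetical placeholders: 100 TB of raw capacity, 45 percent utilized today, 15 TB of annual growth, and an 80 percent safe-utilization ceiling.

```python
# Hypothetical figures: how raising storage utilization defers new purchases.
raw_tb = 100           # raw capacity already installed
utilization = 0.45     # fraction actually holding data today (the 40-50% range)
growth_tb_per_yr = 15  # annual data growth
ceiling = 0.80         # run arrays up to a safer 80% before buying more

used_tb = raw_tb * utilization
headroom_tb = raw_tb * ceiling - used_tb
years_deferred = headroom_tb / growth_tb_per_yr

print(f"{headroom_tb:.0f} TB of headroom defers new spend for ~{years_deferred:.1f} years")
```

In this sketch, reclaiming the gap between 45 and 80 percent buys roughly two years before the next storage purchase, which is capital expenditure pushed entirely out of the current budget.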
It also references what Brad said. You've got the same opportunity on the email side. If you look at your infrastructure on the data-center side or the storage side, you've got all this redundant data out there.
You can use applications. There are products from Symantec and other vendors that allow you to de-duplicate email systems and existing data. There are ways to reduce your backup footprint, so that you have fewer backup tapes required. Your processes will run quicker, with less maintenance and management. You can do single-instance archiving and data compression.
Gardner: Dave, it sounds like you're looking at some process re-engineering in the way that IT operates.
Kelly: You can certainly do that, but you don't even have to get to that process re-engineering aspect. You can just look at the existing processes and ask, "How can I do individual components more efficiently?" I guess it is process re-engineering, but I think a lot of people associate process re-engineering with a large front-to-back analysis of the process. You can just look at specific automated tasks and see how you can do more with less in those tasks.
There are a lot of opportunities there, in data-center optimization as well as in other processes.
The next point is that, while it's important to increase IT efficiency and reduce cost, don't forget about the people. Think about people power here. The most effective way to have an efficient IT organization is to have effective people in it.
Empower your people
There's a lot of stress going on in most companies these days. There are a lot of questions about where organizations and businesses are going. As an IT manager, one thing you need to do is make sure your people are empowered and feel good about where they are. They shouldn't hunker down into a siege mentality during these difficult times, even if budgets are getting cut and there's less opportunity for new systems or new technology challenges. They need to redirect that stress toward discovering how the IT organization can benefit the business and deal with these bad times.
You want to help motivate them through the crisis and work on a roadmap for better days, and map out, "Okay, after we get through this crisis, where are we going to be going from here?" There's an important opportunity in not forgetting about the people and trying to motivate them and provide a positive direction to use their energy and resources in.
Gardner: They don't want to get laid off these days, do they?
Kelly: No, they don't. Robert Half Technology recently surveyed 1,400 CIOs. It's pretty good news. About 80 percent of the CIOs expect to maintain current staffing levels through the first half of this year. That's not a very long lead-time at this point, but it's something. About 8 or 9 percent expected to actually hire. So everyone is cutting budgets, reducing capital expenditures, traveling less, trying to squeeze the money out of the budget, but maybe things will stay status quo for a while.
The third point echoes a little bit of what JP said on the vendor-management side, as well as on using commercial software. Organizations use what they have or what they can get. Maybe it's a good time to step back and re-evaluate the vendors -- that speaks to JP's vendor-management idea -- and the infrastructure they provide.
So, you may have investments in Oracle, IBM, or other platforms, and there may be opportunities to use free products that are bundled as part of those platforms, but that you may not be using.
For example, Oracle bundles Application Express, which is a rapid application development tool, as part of the database. I know organizations are using that to develop new applications. Instead of hiring consultants or staffing up, they're using existing people to use this free rapid application development tool to develop departmental applications or enterprise applications with this free platform that's provided as part of their infrastructure.
Of course, open source fits in here as well. I have a little question about the ability to absorb open source. At the OpenOffice level, I think that's a great idea. At the infrastructure level and at the desktop level, it can be a little bit more difficult.
The fourth point, and we've heard this before, is go green. Now is a great time to look at sustainability programs and analyze them in the context of your IT organization. Going green not only helps the environment, but it has a big impact as you look at power usage in your data center, with its cooling and air-conditioning costs. You can save money right there in the IT budget, and in other budgets, by going to virtualization and consolidating servers. Cutting any of those costs can also prevent future capital expenditures.
Again, as JP said about utilization, this is a great opportunity to look at how you're utilizing the different resources and how you can potentially cut your server cost.
Go to lunch
Last but not least, go to lunch. It's good to escape stressful environments, and it may be a good opportunity for IT to take the business stakeholders out to lunch, take a step back, and re-evaluate priorities. So, clear the decks and realign priorities to the new economic landscape. Given changes in the business and in the way that services and products are selling, this may be a time to re-evaluate the priorities of IT projects, look at those projects, and determine which ones are most critical.
You may be able to reprioritize projects, slow some down, delay deployments, or reduce service levels. The end effect here is allowing you to focus on the most business critical operations and applications and services. That gives a business the most opportunity to pull out of this economic dive, as well as a chance to slow down and push off projects that may have had longer-term benefits.
For example, you may be able to reduce service levels or reduce the amount of time the help desk has to respond to a request. Take it from two hours to four hours and give them more time. You can potentially reduce your staffing levels, while still serving the business in a reasonable way. Or, lengthen the time that IT has after a disaster to get systems back up and operating. Of course, you've got to check that with business leaders and see if it's all right with them. So, those are my top five.
Gardner: Excellent, thank you. I agree that we're in a unique opportunity, because, for a number of companies, their load in the IT department is down, perhaps for the first time. We've been on a hockey-stick curve in many regards in the growth of data and the number of users, seats, and applications supported.
Companies aren't merging or acquiring right now. They're in a kind of stasis. So, if your load is down in terms of headcount, data load, and newer applications, now is an excellent time to make substantial strategic shifts in IT practices, as we've been describing, before that demand curve picks up again on the other side, which it's bound to do. We just don't know when.
As the last panelist to go, of course, I'm going to have some redundancy with what's been said before, but my first point is, now is the time for harsh triage. It is time to go in and kill the waste by selectively dumping the old that doesn't work. It's easiest to do triage now, when you've got a great economic rationale for it. People will actually listen to you, and not have too much ability to whine, cry, and get their way.
IT really needs to find where it's carrying its weight. It needs to identify the apps that aren't in vigorous use or aren't adding value, and either kill them outright or modernize them. Extract the logic and use it in a process, but not at the cost of supporting the entire stack or a Unix server below it.
IT needs to identify the energy hogs and the maintenance black holes inside their infrastructure and all the inventory that they are supporting. That means ripping out the outdated hardware. Outdated hardware robs from the future in order to pay for a diminishing return in the past. So, it's a double whammy in terms of being nonproductive and expensive.
You don't really need to spend big money to conduct these purges. It's really looking for the low-hanging fruit and the obvious wasteful expenditures and practices. As others have said today, look for the obvious things that you're doing and never really gave much thought to. They're costing you money that you need for the new things that will help you grow. It's really applying a harsh cost-benefit analysis to what you are doing.
It would also make sense to reduce the number of development environments. If you're supporting 14 different tools and 5 major frameworks, it's really time to look at something like Eclipse, Microsoft, or OSGi and say, "Hey, we're going to work toward more standardization around a handful of major development environments. We're going to look for more scripting and do down-and-dirty Web development when we can." That just makes more sense.
It's going to be harder to justify paying for small tribes of very highly qualified and important, but nonetheless not fully utilized, developers.
Look outside
It's also time to replace costly IT with outside services and alternatives that we have discussed. That would include, as Brad said, your email, your calendar, word processing, and some baseline productivity applications and consider where you can do them cheaper.
I do like the idea of saying to people, "You still need to do email and you still need to do word processing, but we're no longer going to support it. Go find an alternative and see how that works." It might be an interesting experiment, at least at a small department level at first.
That means an emphasis on self-help, and in many aspects of IT it is possible. Empower the users. They want that power. They want to make choices. We don't need to just walk them down a blind path, tell them how to do mundane IT chores, and then pay an awful lot of money to have them doing it that way. Let's open up, as Brad said, the bazaar and stop being so much of a cathedral.
I suppose that means more use of SaaS and on-demand applications. They make particular sense in customer relationship management (CRM) and sales-force automation, and in human resources, procurement, and payroll. It's really looking to outsource baseline functionality that's not differentiating your organization. It's the same for everybody. Find the outsourcers that have done it well and efficiently and get it outside of your own company. Kill it, if you're doing it internally.
It's really like acting as a startup. You want to have low capital expenditures. You want to have low recurring costs. You want to be flexible. You want to empower your users. A lot of organizations need to think more like a startup, even if they are an older, established multinational corporation.
My second point is to create a parallel IT function that leverages cloud attributes. This focuses again on what Joe mentioned, on the value of virtualization and focusing on the process and workflows -- not getting caught up in how you do it, but what it ends up doing for you.
The constituent parts aren't as important as the end result. That means looking to standardize hardware, even if it's on-premises, and using grid, cloud, and modernized and consolidated data center utility best practices. Again, it's leveraging a lot of virtualization on standard low-cost hardware, and then focusing the value at a higher abstraction, at the process level.
It's standardizing more use of appliances and looking at open-source software. I also have to be a little bit of a contrarian to JP. I do think there's a role for open source in these operations, but we are going to save that for another day. That's a good topic.
This is another way of saying: do SOA, do it on-premises, use cloud and compute-fabric alternatives, and look outside for where other people have created cloud environments that are also very efficient for those baseline functions that don't differentiate. That creates a parallel function in IT, but also looks outside.
I agree wholeheartedly with what's been said earlier about the client. It's time to cheapen, simplify, and mobilize the client tier. That means you can use mobile devices, netbooks, and smart phones to do more activities, to connect to back-end data and application sets and Web applications.
Focus on the server
It's time to stop spending money on the client. Spend it more on the server and get a higher return on that investment. That includes the use of virtual desktop infrastructure (VDI) and desktop-as-a-service (DaaS) types of activities. It means exploring Linux as an operating environment on the desktop, where that makes sense, and looking at what the end users are doing with these clients.
If they're at a help desk and they're all using three or four applications in a browser, they don't need to have the equivalent of a supercomputer that's got the latest and greatest of everything. It's time to leverage browser-only workers. Find workers that can exist using only browsers and give them either low-cost hardware that's maybe three or four years old and can support a browser well or deliver that browser as an application through VDI. That's very possible as well.
It means centralizing more IT support, security, and governance at the data center. It even means reducing the number of data centers, because given the way networks are operating, we can do this across a wide area network (WAN). We can use acceleration, remote branch technologies, and virtual private networks (VPNs). We can deliver these applications to workers across continents and even across the globe, because we're not dealing with B2C, we're dealing with B2E -- that is, business-to-employee.
You can support the scale with fewer data centers and lower cost clients. It's a way to save a lot of money. Again, you're going to act like a modern startup. You're going to build the company based on what your needs are, not on what IT was 15 years ago.
My fourth point is business intelligence (BI) everywhere. Mine the value of the data that you've got already and the data that you're going to create. Put in the means to be able to assess where your IT spend makes sense. This is BI internal to IT, so BI for IT, but also IT enabling BI across more aspects of the business at large.
Know what the world is doing around you and what your supply chain is up to. It's time to join more types of data into your BI activities, not just your internal data. You might be able to actually rent data from a supplier, a partner or a third-party, bring that third-party data in, do a join, do your analysis, and then walk away. Then, maybe do it again in six months.
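The rent-the-data pattern described here (bring in third-party data, join it against your own, analyze, and walk away) can be sketched with an in-memory database. The table names, columns, and figures below are illustrative assumptions, not a real vendor feed:

```python
import sqlite3

# In-memory database standing in for the BI environment. Table names,
# columns, and figures are illustrative assumptions, not a real feed.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE internal_sales (region TEXT, revenue REAL)")
db.execute("CREATE TABLE rented_market (region TEXT, market_size REAL)")
db.executemany("INSERT INTO internal_sales VALUES (?, ?)",
               [("east", 120.0), ("west", 80.0)])
db.executemany("INSERT INTO rented_market VALUES (?, ?)",
               [("east", 1000.0), ("west", 400.0)])

# Join the rented third-party table against internal data, derive a
# metric (market share), then drop the rented table: analyze and walk away.
rows = db.execute("""
    SELECT s.region, s.revenue / m.market_size AS share
    FROM internal_sales s JOIN rented_market m ON s.region = m.region
    ORDER BY s.region
""").fetchall()
db.execute("DROP TABLE rented_market")
print(rows)  # → [('east', 0.12), ('west', 0.2)]
```

The rented table exists only for the duration of the analysis, which matches the "do a join, do your analysis, and then walk away" idea; six months later you would rent fresh data and repeat.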
It's time to think about BI as leveraging IT to gain the analysis and insights, but looking in all directions -- internal, external, and within IT, but also across extended enterprise processes.
It's also good to start considering tapping social networks for their data, user graph data, and metadata, and using that as well for analysis. There are more and more people putting more and more information about themselves, their activities, and their preferences into these social networks.
That's a business asset, as far as I'm concerned. Your business should start leveraging the BI that's available at some of these social networks and join that with how you are looking at data from your internal business activities.
Take IT to the board level
Last, but not least, it's time for IT to be elevated to the board level. It means that the IT executive should be at the highest level of the business in terms of decision and strategy. The best way for IT to help companies is to know what those companies are facing strategically as soon as they're facing it, and to bring IT-based solutions knowledge to the rest of the board. IT can be used much more strategically at that level.
IT should be used for transformation and problem solving at the innovation and business-strategy level, not as an afterthought, not as a means to an end, but actually as part of what ends should be accomplished, and then focusing on the means.
That is, again, acting like a startup. If you talk to any startup company, they see IT as an important aspect of how they are going to create value, go to market cheaply, and behave as an agile entity.
That's the end of my five. Let's take the discussion for our last 10 minutes to how IT can work on the offense. I'll go first on this one. I think it's time to go green field. It's time to look at software as a differentiator.
The reason I bring this up is Marc Andreessen, who is starting a venture capital fund with Ben Horowitz. They were both at Opsware together and then at HP, after they sold. Andreessen told Charlie Rose recently that there is a tragic opportunity in our current economic environment. A number of companies are going to go under or be severely challenged. Let's take a bank, for example.
A bank is going to perhaps be in a situation where its assets are outstripped by its liabilities and there is no way out. But, using software, startups, and third-party services, as Andreessen said, you can start an Internet bank. It's not that difficult.
You want to be able to collect money, lend it out with low risk at a sufficient return, and, at the end of the day, have a balance sheet that stands on its own two feet. Creating an Internet bank, using software and using services combined from someone like PayPal and others makes a tremendous amount of sense, but that's just an example.
There are many other industries, where, if the old way of doing it is defunct, then it's time to come in and create an alternative. Internet software-based organizations can go out and find new business where the old companies have gone under. It doesn't necessarily mean it's all the software, but the business value is in how you coordinate buyers and sellers and efficiencies using software.
Take something like Zipcar. They're not in the automotive business, but they certainly allow people to gain the use of automobiles at a low price point.
I'd like to throw out to the crowd this idea of going software, going green field, creating Internet alternatives to traditional older companies. Who has any thoughts about that?
Morgenthal: On the surface there are some really good concepts there. What we need is for state and federal governance and laws to catch up to these opportunities. A lot of people are unaware of the potential downside risks of letting the data out of your hands into a third party's hands. It's questionable whether it's protected under the Fourth Amendment, once you do that.
There are still some security risks that have yet to be addressed appropriately. So, we see some potential there for the future. I don't know what the future would look like. I just think that there is some definite required maturity that needs to occur.
Gardner: So, it's okay to act like a startup, but you still need to act like a grownup.
Morgenthal: Right.
Gardner: Any other thoughts on this notion of opportunity from tragedy in the business, and that IT is an important aspect of doing that?
Evolving enterprises
McKendrick: I agree with what you're saying entirely. You mentioned on a couple of occasions that large enterprises need to act like small businesses. About 20 years ago, the writer John Naisbitt was dead-on with the prediction that large enterprises are evolving into what he called confederations of entrepreneurs. Large companies need to think more entrepreneurially.
A part of that thinking will be not the splitting up, but the breaking down, of large enterprises into more entrepreneurial units. IT will facilitate that with the Enterprise 2.0 and Web 2.0 paradigm, where end users can kind of shape their own destiny. You can build a business in the cloud. There is a need for architecture, and I preach that a lot, but smaller departments of large corporations can kind of set their own IT direction as well with what's now available.
Gardner: We're almost out of time. Any other thoughts about how IT is on the offensive, rather than just the defensive in terms of helping companies weather the downturn?
Shimmin: I agree with what you guys have been saying about how companies can behave like startups. I'd like to turn it around a little bit and suggest that a small company can behave like a large company. If you have a data center investment already established, you shouldn't be bulldozing it tomorrow to save money. Perhaps there's money in "them thar hills" that can be had.
Look at the technologies we have today, the cloud-enablement companies that are springing up left and right, and the ability to federate information and loosely couple access methods to transact between applications. There's no reason the whole idea we saw with SETI@home and the protein-folding projects can't be leveraged within a company's firewalls and data centers and then externalized. Maybe it's storage, maybe it's services, maybe it's an application or service that the company has created, that can be leveraged to make money. It's like a house putting in a windmill and then selling electricity back to the power grid.
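The SETI@home-style idea Shimmin raises, harvesting spare capacity inside the firewall, reduces to a scatter-gather sketch: hand work units to whatever workers are idle and collect the results. In this toy version threads stand in for spare machines, and the compute kernel is a placeholder, not a real folding algorithm:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of farming work units out to idle capacity behind the
# firewall, SETI@home-style. Threads stand in for spare machines; a
# real deployment would run an agent on each spare desktop or server.
def fold(work_unit):
    return sum(i * i for i in range(work_unit))  # placeholder for real work

work_units = [10, 100, 1000]
with ThreadPoolExecutor(max_workers=3) as pool:  # three "spare machines"
    results = list(pool.map(fold, work_units))   # scatter, then gather in order
print(results)  # → [285, 328350, 332833500]
```

The same scatter-gather shape applies whether the "workers" are desktops after hours, underused servers, or externalized capacity sold to others.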
Gardner: Last thoughts?
Kelly: I would add one or two quick points here. Going on the offense, one opportunity is to take advantage of the slowdown and look at those business processes that you haven't gotten to in a long time, because things have been so hectic over the past couple of years. It may be a great time to reengineer those using some of the new technologies that are out there, going to the cloud, doing some of the things we've already talked about.
The other option here is that it may be a good time to accelerate new technology adoption. Move to YouTube for video-based training, or use Amazon's Kindle for distributing repair manuals electronically. Look at what the options are out there that might allow you to remake some of these processes using new technologies and allow you to profit and perhaps even grow the business during these tough economic times.
Gardner: So economic pain becomes the mother of all invention.
Kelly: Exactly.
McKendrick: We've seen it happen before. Back in 1981-1982 was when we saw the PC revolution. The economy was in just as bad a shape, if not worse, than it is now. Unemployment was running close to 10 percent. The PC revolution just took off and boomed during that time. A whole new paradigm had evolved.
Gardner: Very good. Well, I would like to thank our panelists this week. We've been joined by Joe McKendrick, independent IT analyst and prolific blogger. Also, Brad Shimmin, principal analyst at Current Analysis; JP Morgenthal, independent analyst and IT consultant; and Dave Kelly, founder and president of Upside Research. Thanks to all. I think we've come up with a lot of very important and quite valuable insights and suggestions.
I'd also like to thank our charter sponsor for the BriefingsDirect Analyst Insights Edition podcast series, Active Endpoints, maker of the ActiveVOS visual orchestration system, as well as the support of TIBCO Software.
This is Dana Gardner, principal analyst at Interarbor Solutions. Thanks for listening, and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Charter Sponsor: Active Endpoints. Also sponsored by TIBCO Software.
Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.
Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 38 on how businesses should react to the current economic realities and prepare themselves to emerge stronger. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.
Sunday, March 22, 2009
Webinar: Modernization Pulls New Value From Legacy and Client-Server Enterprise Applications
Transcript of a BriefingsDirect webinar with David McFarlane and Adam Markey on the economic and productivity advantages from application modernization.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Nexaweb Technologies.
Announcer: Hello, and welcome to a special BriefingsDirect presentation, a podcast created from a recent Nexaweb Technologies Webinar on application modernization.
The webinar examines how enterprises are gaining economic and productivity advantages from modernizing legacy and older client-server applications. The value of the logic, data, and integration patterns within these older applications can be effectively extracted and repurposed using tools and methods, including those from Nexaweb. That means the IT and business value of these assets can be reestablished as Web applications on highly efficient platforms.
We'll learn how Nexaweb has worked with a number of companies to attain new value from legacy and client-server applications, while making those assets more easily deployed as rich, agile Web applications and services. Those services can then be better extended across modern and flexible business processes.
On this podcast, we'll hear from Dana Gardner, principal analyst at Interarbor Solutions, as well as David McFarlane, COO at Nexaweb, and then Adam Markey, solution architect at Nexaweb.
First, welcome our initial presenter, BriefingsDirect's Dana Gardner.
Dana Gardner: We're dealing with an awful lot of applications out there in the IT world. It's always astonishing to me, when I go into enterprises and ask them how many applications they have in production, that in many cases they don't know. In the cases where they do know, they're usually off by about 20 or 30 percent, when they go in and do an audit.
In many cases, we're looking at companies that have been around for a while with 10 or 20 years worth of applications. These can be on mainframe. They can be written in COBOL. They could be still running on Unix platforms. In a perfect world we'd have an opportunity to go in and audit these, sunset some, and re-factor others.
Today, however, many organizations are faced with manpower and labor issues. They've got skill sets that they can't bring in, even if they wanted to, for some of these older applications. There is, of course, a whole new set of applications that might not be considered legacy, but that are several years old now. These are N-tier and Java, distributed applications, .NET, COM, DCOM, a whole stew in many organizations.
What I am asking folks to do, now that we're into a situation where economics are probably more prominent than ever -- not that that's not usually the case in IT -- is to take a look at what applications are constraining their business. Not so much to worry about what the technology is that they are running on or what the skill sets are, but to start factoring what new initiatives they need to do and how can they get their top line and bottom line as robust as possible? How do they get IT to be an enabler and not viewed as a cost center?
This is really where we should start thinking about modernizing and transforming IT -- getting application functionality that is essential, but is in someway handicapping what businesses want to do.
We want to exploit new architectures and bring more applications into association with them. It's not just architectures in terms of technology, but approaches and methodologies like service-oriented architecture (SOA), or what some people call Web-oriented architecture (WOA), looking to take advantage of interfaces and speed of innovation so that organizations can start to improve productivity for their internal constituents, in this case usually employees or partners.
Then, increasingly because of the difficulty in bringing about new business during a period of economic downturn, they're reaching out through the Internet, reaching out through the channels that are more productive, less costly and utilizing applications to focus on new business in new ways.
SOA and mobile devices
Increasingly, as I mentioned, this involves SOA, but it also increasingly involves mobile. We need to go out and reach people through their mobile Internet devices, through their iPhone and their BlackBerry, and a host of other devices at the edge. You need to be able to do that with applications and you need to be able to do it fast.
So, the goal is flexibility in terms of which applications and services need to reach new and older constituencies at less cost and, over time, reduce the number of platforms that you are supporting, sunset some apps, bring them into a new age, a new paradigm, and reduce your operating costs as a result.
Information really is the goal here, even though we are, with a handful of applications, starting to focus on the ones that are going to give us the biggest bang for the buck, recognizing that we need to go in and thoughtfully approach these applications, bring them into use with other Web services and Web applications, and think about mashups and Enterprise 2.0 types of activities. That involves expanding the use of these new methodologies.
One of the things that's interesting about companies that are aggressively using SOA is they also happen to be usually aggressive in using newer development platforms and tools. They're using dynamic languages, Web interfaces, and rich Internet application (RIA) interfaces. This is what's allowing them to take their newer applications and bring them into a services orientation reuse. Some of those services can be flexible and agile.
That's not to say you can't do some of those things with the older applications as well. In many cases, tools are being brought about and third-party inputs, in terms of professional services and guidance, are coming around. I'm recommending to people to respond more quickly, to save operational costs, to get agile and reach out through these new edge devices and/or the Internet, and do it in a fairly short order.
It's amazing to me that for those companies that have moved in this direction, they can get applications out the door in weeks rather than months, and in many cases, you can transform and modernize older applications on aging platforms just as quickly.
We want to move faster. We want to recognize that we need a higher payoff, because we also recognize that the line-of-business people, those folks that are tasked with developing new business or maintaining older business, are in a rush, because things are changing so quickly in the world around us. They often need to go at fast-break or breakneck speed with their business activities. They're going to look at IT to be there for them, and not be a handicap or to tell them that they have to wait in line or that this project is going to be six to eight months.
So, we need to get that higher agility and productivity, not just for IT, but for the business goals. Application modernization is an important aspect of doing this.
How does modernization fit in? It's not something that's going to happen on its own, obviously. There are many other activities, approaches, and priorities that IT folks are dealing with. Modernizing, however, fits in quite well. It can be used as a way to rationalize any expenditure around modernization, when you factor in that you can often cut your operating costs significantly over time.
You can also become greener. You can use less electricity, because you're leveraging newer systems and hardware that are multi core and designed to run with better performance in terms of heat reduction. There are more options around cloud computing and accessing some services or, perhaps, just experimenting with application development and testing on someone else's infrastructure.
By moving towards modernization you also set yourself up to be much more ready for SOA or to exploit those investments you have already made in SOA.
Compliance benefits
There are also compliance benefits for those organizations that are facing payment-card industry (PCI) standards in financial or other regulatory environments, freeing up applications in such a way that you can develop reports, share the data, and integrate the data. These are all benefits to your compliance issues as well.
As I mentioned earlier, by moving into a modernization for older applications, you've got the ability to mash up and take advantage of these newer interfaces, reuse, and extended application.
There is a whole host of rationalizations and reasons to do this from an IT perspective. The benefits are much more involved with these business issues and developer satisfaction, recognizing that if you are going to hire developers, you are going to be limited in the skill sets. You want to find ones that are able to work with the tools and present these applications and services in the interfaces that you have chosen.
Keeping operations at a lower cost, again, is an incentive, and that's something they can take out to their operating and financial officers and get that backing for these investments to move forward on application modernization and transformation.
One of the questions I get is, "How do we get started? We've identified applications. We recognized the business agility benefits. Where do we look among those applications to start getting that bang for the buck, where to get modern first?"
Well, you want to look at applications that are orphans in some respect. They're monolithic. They're on their own -- dedicated server, dedicated hardware, and dedicated stack and runtime environment, just for a single application.
Those are good candidates to say, "How can we take that into a virtualized environment?" Are there stacks that can support that same runtime environment on a virtualized server, reduce your hardware and operating costs as a result? Are they brittle?
Are there applications that people have put a literal and figurative wall around saying, "Don't go near that application. If we do anything to it, it might tank and we don't have the documentation or the people around to get it back into operating condition. It's risky and it's dangerous."
Conventional wisdom will say don't go near it. It's better to say, "Listen, if that's important to our business, if it's holding our business back, that's a great target for going in and finding a way to extract the logic, extract the data and present it as something that's much more flexible and easy to work with."
You can also look for labor issues. As I said, if skills have disappeared, why wait for the proverbial crash and then deal with it? It's better to be a little bit proactive.
We also need to look at what functional areas are going to be supporting agility as these business requirements change. If you're an organization where you've got supply chain issues, you need to find redundancy. You need to find new partners quickly. Perhaps some have gone out of business or no longer able to manufacture or supply certain parts. You need to be fleet and agile.
If there are applications that are holding you back from being able to pick and choose in a marketplace more readily, that's a functional area that's a priority for getting out to a Web interface.
Faster, better, cheaper
People are going to be looking to do things faster, better, cheaper. In many cases those innovative companies that are coming to market now are doing it all through the Web, because they are green-field organizations themselves. They are of, for, and by the Web. If you're going to interact with them and take advantage of the cost, innovation, and productivity benefits they offer, your applications need to interrelate, operate, and take advantage of standards and Web services to play with them.
You also need to take a look at where maintenance costs are high. We've certainly seen a number of cases where by modernizing applications you have reduced your cost on maintenance by 20 or 30 percent, sometimes even more. Again, if this is done in the context of some of these larger initiatives around green and virtualization, the savings can be even more dramatic.
I also want to emphasize -- and I can't say it enough -- those SOA activities shouldn't be there for just the newer apps. The more older apps you bring in, the more return on investment you get for your platform modernization investments, as well as saving on the older platform costs, not to mention those productivity and agility benefits.
We also need to think about the data. In some cases, I have seen organizations where they have applications running and aren't really using the application for other than as an application repository for the data. They have a hard time thinking about what to do with the data. The application is being supported at high cost, and it's a glorified proprietary database, taking up app server and rack space.
If you're looking at applications that are more data centric in their usage, why not extract that data, find what bits of the logic might still be relevant or useful, put that into service orientation, and reduce your cost, while extending that data into new processes and new benefits.
It's also important to look at where the technical quality of an app is low. Many companies are working with applications that were never built very well and never performed particularly well, using old kludgy interfaces. People are not as productive and sometimes resist working with them. These are candidates for where to put your wood behind your arrow when it comes to application modernization.
In beginning the process, we need to look at the architecture targets. We need to think about where you're going to put these applications if you are refactoring them and bringing them into the Web standards process.
It's important to have capacity. We want to have enough architecture, systems, and runtime in place. We should think about hosting or collocation, where you can decrease your cost and the risk of capital expenditure, but at the same time, still have a home for these new apps.
You certainly don't want to overextend and build out platforms without the applications being ready. It's a bit of a balancing act -- making sure you have enough capacity, but at the same time performing these modernization transformation tasks. You certainly don't want to transform apps and not have a good home for them.
Also important is an inventory of these critical apps, based on some of the criteria, we have gone through.
Crawl, walk, run
The nice thing about creating the categorization is that once you've got some processes in place on how to go about this, with one application you can extend that to others. The crawl-walk-run approach makes a great deal of sense, but when you've learned to crawl well, extend and reuse that to walk well, and then scale it from there.
This construction, deconstruction, rationalization process should also be vetted and audited in the sense that you can demonstrate paybacks. We don't want to demonstrate cost centers becoming larger cost centers. We want to show, at each step of the way, how this is beneficial in cost as well as productivity. Then, we need to focus continually on these business requirements, to make a difference and enhance these business processes.
There are some traps. It's easier said than done. It's complicated. You need to extract data carefully. If you start losing logic and access to data that are part of important business processes, then you're going to lose the trust and confidence, and some of your future important cost benefit activities might be in jeopardy.
It's important to understand the code. You don't want to go and start monkeying around with and extracting code, unless you really know what you're doing. If you don't, it's important to get outside help.
There are people who are not doing this for the first time. They've done it many times. They're familiar with certain application types and platforms. It's better to let them come in, than for you to be a guinea pig yourself or take trials and tests as your first step. That's not a good idea when you're starting to deal with critical and important applications.
Stick to processes and methods that do work. Don't recreate the wheel, unless you need to, and once you have got a good wheel creation method, repeat and verify.
You need to be rigorous, be systematic, and verify results, as we have said. That's what's going to get you those transformational benefits, rather than piecemeal benefits. You're going to see how application modernization fits into the context of these other activities. You're going to be well on the way to satisfying your constituencies, getting the funding you need, and then seeing more of your budget going to innovation and productivity, and not to maintenance and upkeep.
There are a lot of great reasons to modernize, and we have mentioned a number of them. There are backwards and forwards compatibility issues. There are big payoffs in cost and agility, and now it's time to look at some of the examples of how this has been put into place.
Announcer: Thanks Dana. Now, we'll hear from David McFarlane, COO at Nexaweb, on some use-case scenarios for adopting and benefiting from these application modernization opportunities. Here is David McFarlane.
Understanding value
David McFarlane: We're going to go a little bit deeper and actually take a look at a case study of one of our clients, one of our successful implementations, and see the value that came out of it.
To really understand what value is, we have to understand how we're going to quantify it in the first place. We're probably all in IT here, and we're probably all IT heads, but we have to take a step back, take a top-down approach, and understand how we define that value in the business.
As Dana said earlier, application modernization impacts all areas of your business, and the three areas that it really impacts are business, operations, and IT. So, you have to step outside your role. You have to see what value the business would see out of it, what operations would see out of it, and also for yourself in IT, what gains and benefits you would get out of that. When you add them all together, you get the overall value for that application modernization.
Let's take a look at a real case study as an example. Just to set some background, we have a legacy system, a customer relationship management (CRM) call center application for one of our clients. They have about five call centers, with around 50 employees, and they're on a C++ client-server application.
The important thing to note about this is that, in legacy systems, there are usually multiple instances of this application. Since it's a client-server app, we have to remember that it's also deployed and managed on each individual desktop. Each individual employee has their own installation on their desktop, which is sometimes a nightmare to manage for most IT shops.
We took that system and built a modernized system from it, on a J2EE architecture with a desktop look and feel in the browser, as Dana talked about earlier. You get the real performance of an installed client-server application, but it's delivered over the Web with zero client install.
You don't have to do anything besides update your Web server, and everybody automatically has the new application, the new look and feel, the new business logic, and access to whatever data you've hooked it up to on the backend.
Also important is that the system we modernized is built on open standards. We used a J2EE architecture, which means we're able to integrate with anything you have on your back end via open Java application programming interfaces (APIs).
There is a vast array of open source products out there waiting to be used, to be integrated, and to modernize systems. There's also a large workforce that will be able to understand a Java application as opposed to a custom C++ application or even a COBOL application. We also consolidated it to one distributed instance, since we can now manage it centrally from one data center.
ROI analysis
When you're doing a modernization, you're probably going to have to do some sort of return on investment (ROI) analysis to understand exactly what you're going to get out of this application, and that's going to take some time.
If you're coming from an IT perspective, you might have most of the benefits already in your head: "I'll have people using Java instead of COBOL. I'll have all the developers focused on one development language, instead of multiple development languages. I'm going to be able to decrease my deployment time, etc."
But, when justifying something like this, you need to take a step back and, as we said before, look at the factors in the three areas that are most affected by application modernization. As Dana pointed out, they're business, operations, and IT. So, we go ahead and look at the business first.
We have to ask a few questions here: "Who are my users? How long does each transaction take?" Say I'm running a call center and it takes a few minutes for a user to get through a transaction. If I can cut that to one-and-a-half minutes or even one minute, I'm able to increase productivity significantly.
The next part is operations. Not only is productivity increased, but what does it mean to have a modern application infrastructure? If previously I had to come in to work and sit down at my desktop, because that's the only place the application was installed, maybe I don't need to come in to work anymore. Maybe I can work from home. Maybe I can work from India, if I want to, as long as I have VPN access to the application. You can begin to see the operational flexibility that you get out of that.
Then, as we look into the IT benefits, how long did it take to make a change to my legacy system? Some of the biggest benefits we're seeing come when moving from legacy C++ or PowerBuilder applications, where you really have to code each and every aspect of the user interface (UI), the business logic, and the specific data interaction, because there's no SOA to leverage and no hooks into services that we've built or are planning to build into the application.
Also, we have to think of what the developer actually had to do to make that change. In older technologies, they might not have a way to prototype the UI and show the business users feedback before they are able to get sign off on what they're going to build. They might have to program each and every element of the user interface, all the way down to writing SQL stored procedures that are application-specific to a database.
Going to a modern architecture, you're going to have services, and you're going to have object-relational mapping capabilities. You're going to have some great middle-tier frameworks like Spring and Struts to enhance development. Obviously, with Nexaweb technologies, you have the ability to create declarative user interfaces, which speeds up UI development time significantly.
We also have to ask what hardware and software the application runs on and what licenses we're paying for. As Dana pointed out earlier, you'll have a significant opportunity for maintenance savings when you go to a modern architecture.
Productivity gains
We asked all these questions, and we found some significant areas of value in our CRM modernization case. In the business, we actually saw a 15 percent gain in end-user productivity, which impacted our client by about $1.5 million a year. In times like these, you're actually able to trim your workforce if you have a more productive application. In this case, that productivity means agents are able to handle more calls and service customers more quickly. Ultimately, that ends up in end-user satisfaction and dollars saved as well.
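As a rough back-of-the-envelope check on that 15 percent figure, the arithmetic can be sketched in a few lines. Note the inputs here are assumptions, not figures stated outright in the case study: 250 agents is implied by the 250-desktop deployment mentioned later, and the $40K loaded annual cost per agent is a hypothetical value chosen to illustrate the calculation.

```python
# Hypothetical back-of-the-envelope check of the 15 percent figure.
agents = 250          # 5 call centers x ~50 employees each (assumption)
loaded_cost = 40_000  # assumed loaded annual cost per agent, in USD
gain = 0.15           # end-user productivity gain from the case study

annual_impact = agents * loaded_cost * gain
print(f"${annual_impact:,.0f} per year")  # → $1,500,000 per year
```

Under those assumptions, the math lands on the $1.5 million yearly impact quoted above; with a different loaded cost per agent, the dollar figure scales linearly.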
Next, you have the operational value. What we had here was a decrease in audit time. We found that their auditors were going around to each individual desktop and seeing exactly which applications were installed on their computer. They had to look at each of the five instances in each call center for auditing, instead of looking at one consolidated instance, with just one database and book of record for all the operation there. So, that saved a lot of auditing time for them, which is really great.
Another thing was improved help-desk performance. The application itself was a help desk for customer support, but the internal IT help desk also saw huge improvement. Because the application was centrally managed, all people had to do was go to a Website or click a link to access it, instead of having software installed. As you know, when you install software, a ton of things can happen, and you have to do a lot of testing of that software as well. All of that has been reduced, and we're saving about $15K there.
When you look at the IT benefits, we have the IT developer productivity gain that we talked about. We eliminated some hardware and software for those five instances and some of that maintenance cost. So, that's an $85K impact. There are the deployment benefits of an RIA, when you're going from deploying applications on 250 computers to zero computers. You're going to see an immediate impact there, and that was around $250K for the time to do that, the software it took to push that out, and the support it needed to run.
Because of the change management benefits from RIAs, the development productivity, and the ability to go from requirements to design, to testing, to production much more quickly than a client-server application, we're able to see a 90 percent gain there, which had a $200K impact.
When you look at it in total, the yearly bottom-line improvement was about $2.23 million for this one instance, with a one-time improvement of $85K for the hardware and software that we got rid of. It took only a one-time investment of about $800K.
I say "only," but if you look at the business, operational, and the IT impacts together, you get payback in the first full year. If you were only coming from that IT perspective, you would have seen that the payback is actually a little bit longer than a year.
If you add up just those IT numbers, you get something a little less than $800K, about $700K, I believe. That works out to about a 14- or 15-month payback, instead of about a 5- or 6-month payback. When you're trying to make a case for modernization, this is exactly what your CFO or your CEO needs to know -- how it affects your bottom line from all areas of the business, not just IT.
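The comparison of full-value versus IT-only payback boils down to a simple, undiscounted calculation. A minimal sketch, using the dollar figures quoted in this case study (the one-time $85K hardware savings is left out to keep it simple):

```python
def payback_months(investment, annual_benefit):
    """Simple payback period in months, ignoring discounting."""
    return investment / annual_benefit * 12

# Figures quoted in the case study.
investment = 800_000   # one-time modernization investment
all_areas = 2_230_000  # yearly benefit across business, ops, and IT
it_only = 700_000      # yearly benefit counting IT gains alone

print(f"All areas: {payback_months(investment, all_areas):.1f} months")
print(f"IT only:   {payback_months(investment, it_only):.1f} months")
```

With these inputs, counting all areas gives a payback of a little over four months (in the ballpark of the quoted 5- or 6-month figure, which presumably rounds in costs not itemized here), while counting IT alone gives roughly 14 months.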
Let's not forget the intangibles that come with application modernization. It's not always about the bottom line. There are some great things that you get out of a modern application infrastructure, and the first thing you get, when you look at the business, is improved response time.
Happier CSRs
The number one thing I could think of is that your customer service representatives (CSRs) are going to be happier, because once they click a button, it's not going to take two seconds to respond like the old application. It's going to be fast. It's going to be rich. You're not going to have any installation issues when you do an upgrade. It's going to be smooth.
You're going to have happier CSRs, and happier CSRs mean improved customer service and higher customer satisfaction, because calls get through quicker and people talk to happy customer service representatives.
Also, when you're doing application modernization, you have a good opportunity to automate manual portions of the business process. You can go in and say, "This person is cutting and pasting something into an Excel spreadsheet, and emailing this to somebody else as a report after they're done." Maybe there's an opportunity there to have that done automatically. So, it saves them time again. That's where you can really find your increased productivity.
When we look at operations, we actually enabled real estate consolidation. I didn't put those numbers in the ROI, because they were probably going to do that anyway, but it was an enabler. Having a technology to go from five call centers to one call center with distributed agents across the country and across the world saves the business a lot of money on the real estate, the power, and the infrastructure needed to have five call centers up and running.
Again, you get the workforce flexibility, because I can work from home, work from India, or come and work from the office. I could do this job from anywhere, if I have access to this application. Obviously, we're able to bring outsourced call centers online on-demand with that.
Then, we move on to IT. As I said before, it's short release cycles with more functionality. When release cycles are shorter, you can incrementally build in more features to your application, make people more productive, and make the application do more in less time, which is obviously what we want to do.
We have a standardized J2EE architecture, which means the people that you're going to look for to maintain the application are going to be out there. There is a huge number of Java developers out there waiting and ready to come in to maintain your application.
We're built on open standards to ensure that the application is ready for the future. There are a lot of RIA technologies that try to lock you in to one runtime and one development methodology. We use open standards as much as we can to push your application out the door as fast as possible, and be as maintainable as possible, and as ready for the future as possible.
Announcer: Thanks, David. Now, we'll hear from Adam Markey, solution architect at Nexaweb, on specific deployment examples of application modernization projects. Here, then, is Adam.
Enterprise-wide value
Adam Markey: As we look at these different customer examples, we really want to see how they've had an impact of value across the enterprise, and see, from a business point of view, the ability to increase market reach, improve user productivity, decrease time to market, increase customer engagement and loyalty, and sustain, if not build upon, competitive advantage.
We also want to look at the operations as well and understand how this new architecture has actually realized benefits in terms of a reduced real estate, greater utilization of global workforce, reduction in energy, moving to green architectures, and improving the overall vendor management.
For those closely responsible for the organization and who deliver this capability, we want to look at IT and how this process helps deal with the rapidly changing demographics in the IT skills market. As the baby boomers move out of the job market, many of the legacy skills that we relied on so heavily through the years are becoming very rare and hard to find within the organization.
We'll take a look at that process efficiency, and generally how we can improve the overall efficiency and cost in terms of licenses and the use of open source. So, let's take a closer look at a few examples to help illustrate that. This first example is the modernization of a Japanese foreign exchange trading platform.
In this case, this was a trading platform built by Enfour for the Bank of Tokyo-Mitsubishi (BTM). The challenge BTM had was that, although they were able to satisfy their large corporate customers with their on-premises foreign exchange trading platforms, the small- and medium-sized enterprises (SMEs) were quite different in terms of what they required.
They needed a UI and an application that was much simpler for them to adopt. They didn't have the necessary IT infrastructure to be able to establish the complex on-premises systems. They needed something that had no IT barriers to adoption.
What we did for BTM with our partner Hitachi was to help modernize and transform the entire trading platform to the Web. Just to stress, this isn't simply an information portal; this is a fully functioning trading platform. There are over 500 screens, and it's integrated with 120 different data sources, with very stringent service-level requirements on the deployment of the application.
We needed to be able to display any fluctuation in exchange rates from the Reuters feed in 200 milliseconds or less, and we needed to be able to complete a closed-loop transaction in two seconds or less.
So, this is a fully functioning trading platform. What it's meant for BTM is that they've been able to dramatically increase adoption and penetration in the SME market. Fundamentally, these SME or institutional traders don't need any infrastructure whatsoever, just a browser. There is no client installation. They're able to self-serve, which means they can simply enter the URL, log in, and get started. This has meant a tremendous cost reduction and also revenue growth for this product line in penetrating this new market segment.
In the same field of foreign exchange trading, we were able to help a number of Japanese banks take their products and services global. Traditionally, the market had been very service-intensive through a call center. You dialed in and placed your trade with the trader over the phone. By being able to move this entire platform to the Web, we allowed them to go global and go 24/7.
Now, we have over 30,000 institutional traders using this trading platform and application to self-serve through operations, not just in Tokyo, but in Singapore, London, New York, Frankfurt, literally around the world.
New capabilities
Not only has it extended the product line with very little additional operational cost to the banks, but it's also allowed them to provide new capabilities to those customers. One, for example, is the ability to run a continuous global book.
In traditional implementations of trading platforms, each one would be an on-premises installation, which meant that each region would have to close its books and close out its operations at the end of the working day. Because the platform is now managed and provisioned centrally, it can run globally, which allows the banks to maintain those books and maintain common alerts across entities that themselves have a global footprint.
Not only were we getting them to a new market, but we were also allowing them to introduce new functionality. It allowed them to interact more closely with the customers, providing real-time chat facilities, and allowing the traders in Japan to interact directly with a trader as they exhibited certain behavior. It allowed them to offer custom contracts and has significantly increased the close rate of those applications.
So, a big impact in terms of market reach for the banks in Japan is one example. Let's take a look here at how we've been able to dramatically improve user productivity and dramatically reduce the business process time for large organizations.
This example comes from one of the largest telecommunications groups in Europe. The challenge they were facing was a request for proposal (RFP) process that was very complicated. They had to be able to provide quotations for countrywide mobile platforms, a very large, complex design process, which was performed through one legacy application acting as a product configurator.
Then, they would go to another application for doing the parts costing and bill of material assessment, another application for the pricing, and finally, an overall RFP approval process for these large $100 million-plus projects running over 10 years.
The whole process was taking them anywhere up to four weeks. It was fragmented. It was error prone. There were spreadsheets, and the files were flying around the globe, trying to complete this process.
What we were able to do for this organization was to streamline the process and present it as a single-branded Web-based workflow that brought all the different elements together, and, most importantly, ran on top of a SAP NetWeaver infrastructure. In fact, the workflow was designed to have an SAP look and feel.
End users didn't know when they were in or outside of SAP. They didn't care and they didn't need to, because as an end-to-end process, SAP acts as the overall system of record, providing a much higher degree of control, accuracy, and a dramatic reduction in errors.
The great result, from a user productivity point of view, is that they've been able to go from a process that took four weeks to a process that now takes four hours or even less -- a dramatic reduction. More important was the ability to increase the accuracy of these processes.
Desktop-like experience
These Web applications, I should stress, are really a desktop-like experience for the end user. We think of them and talk about them as a desktop in a browser. Everything that you could do as a desktop application with all the user navigation and productivity in very intense data environments, you can do in a browser-based application as deployed in this solution.
Let's take another look at another example where Web architecture and rich Web interfaces allowed us to dramatically improve customer loyalty and customer engagement.
You may be familiar with the concept of the extended enterprise, whereby more and more organizations need to open up traditionally back-office processes and back-office systems still managed on green-screen UIs in the bowels of the company. In order to truly engage their customers and improve the process flow, more and more of those systems are being opened up and presented to customers through rich, engaging Web applications.
This is an example of that. This is a company in the Netherlands called Waterdrinker, the largest flower distributor in Europe, a very significant business. We helped them create a Web-based, self-service ordering process that dramatically reduces the dependency on customer service reps. It was similar to the scenario for the foreign-exchange trading platform: we migrated customer interaction to a self-service Web platform, without the need for human intervention and costly CSRs.
But, it's much more than that. We're providing a much richer experience for the user, a much more engaging experience for the user, where they're able to more dynamically navigate through the catalog and determine the optimal order for them with all kinds of what-if calculations and analysis that are provided for them in real time at their own discretion.
The net result has been a significant increase in customer satisfaction and engagement. It's still relatively new, so we have yet to see the loyalty numbers, but just based on the amount of response and conversion that we have seen through these Web-based interfaces, loyalty benefits will follow soon after. In addition, with a Web-based UI, you're able to easily and effectively customize the user interface for different users and communities.
In this case, they're able to provide a custom UI solution that integrates their catalog ordering process into their partners' processes. They distribute through local partners and local Websites, and they're able to provide this architecture as a white-label capability and then brand it according to the local distributor, delivering a rich branded experience through their partner.
Let's talk generally about competitive advantage. Obviously, all those things that we have talked about and shown with regard to different customers, and Dana has talked about in aggregate, offer all kinds of competitive advantage.
But, there's a certain element of competitive advantage that I'd like to emphasize in this transformation process. Organizations, through the years, have basically instantiated and codified their best practices in the workflows within those legacy systems. Those business rules represent years of competitive intelligence, and they're often the point at which you can realize tremendous competitive advantage.
Razor-thin margins
This is never truer than in the razor-thin margins of the consumer packaged goods (CPG) business, where a lot of the margin for a customer can actually be determined through appropriate inventory, logistics, and pricing management, literally while goods are en route. What we've done for customers like these is to enable them to quickly and effectively extract the business rules that are buried in their legacy systems.
Frankly, nobody knows how they work anymore. They're not really very well documented at best, but we have allowed them to extract those business rules that represent the competitive advantage and consolidate them into a set of corporate-wide rules that can be more effectively managed.
One issue in a traditional legacy environment is that, as you establish business rules in terms of the legacy implementation, each one is monolithic. They start to create their own derivatives, as people program, tweak, and modify. At the end of a 10-year process, the rules barely resemble each other in each of the iterations.
In our transformed architecture, we're able to provide an environment, in which you can centrally manage, control, and modify those business rules and have them consistently and immediately applied across all the necessary touch points. Through this process, we can dramatically reduce human error.
This architecture allows us to provide support tools and business rules in a form that's readily accessible to the end user. You might say, "Wait a minute. It's a Web-based application, and when I'm sitting face to face with my customers, I'm not going to have access to the Web."
As you would expect in these solutions, we're able to architect them so that the same application can be deployed as a Web application or used standalone. A great example is Aflac, where we created their premium calculation solution, which is used across all customer touch points -- 38,000 users, 6,000 of whom are agents who go door-to-door.
Part of the architecture, and part of the challenge, was to deliver that insurance calculation solution in such a way that, when the agent is sitting across the kitchen table from the customer, they can still perform a custom quotation and produce the documentation needed to close the customer there and then, on a standalone laptop with a local printer. That's all part of bringing those business rules, which represent years of competitive advantage, successfully to the Web.
Let's take a look at how some of these capabilities impact the operations themselves. Here, we'll take the example of a call-center application. This was a transformation for the Pepsi Bottling Group of their customer-equipment tracking system. It was a PowerBuilder application, maybe 10 years old, that we successfully moved to the Web.
The real business value in this is that by doing this, by creating a Web-based environment that could be deployed in any call center, we provide the flexibility and the agility for organizations to better utilize those call centers and better utilize that real estate, often consolidating from a large number of call centers to a smaller set of agile call centers, where they can put a lot of different processes through the same infrastructure.
Cost-management advantage
This has tremendous advantages, as you can imagine, in terms of cost management for those customers. We're even able to take that to the next step with the advent of voice-based telephony. It's now possible to engage home-office operators through a voice over Internet protocol (VoIP) infrastructure.
Those operators not only have the benefit of the call-center application as a Web-based application accessible through their home broadband, but can actually have the same level of computer telephony integration (CTI) that they would have had if they sat in the call center, by virtue of the VoIP-based CTI technologies that are available.
This is offering tremendous operating improvements in terms of, for example, real-estate consolidation. Also, looking at operations and the ability to optimize the use of the workforce, we have a situation where we deployed a very complex laboratory information-management solution for AmeriPath, now part of Quest Diagnostics. This is part of a pathology services process that requires very experienced technicians to participate.
The joy of being able to deploy this as a Web-based application is great skills mobility, which means that technicians anywhere, provided they have Web access, can participate in the diagnostic process without the need to move sensitive Health Insurance Portability and Accountability Act (HIPAA) data. So, HIPAA data that has to be stored in one place can be made accessible to technicians in any location, and they can then participate in the process 24/7.
The value to IT is manifold. We'll take a quick look at some of those before we jump into the value equation itself. This is an example with SunGard Shareholder Systems, where they wanted to modernize their commercial product line, a 401k management application. I'm sure they're pretty busy these days.
It was originally deployed as an IBM-Oracle mainframe solution with a C++ front end. We modernized it into a pure Web application, and, from an IT development point of view, the benefits of being in that Web architecture are numerous. First and foremost, they were able to manage the entire development process with one person in the US and a whole development team offshore in India, dramatically reducing time and cost.
In this new architecture, the ability to respond to program-change requests is tremendously different. We're able to implement change requests in one-tenth of the time and, by virtue of being in a Web architecture, deploy them in what are now weekly release cycles, instead of the six-month cycles you would typically see with a point solution.
As we're running a little long here, I won't go into all of these, but there are many different ways in which the modern architecture really played into creating significant additional IT value.
We provide a process we call Nexaweb Advance, which is an end-to-end transformation process that allows us to dramatically reduce the time, risk, and costs of this overall implementation. It starts with a capture phase that is able to go in and interrogate legacy systems and dramatically reduce the amount of time and effort to document the code that typically is not well documented.
Then it goes through a model-transformation process that dramatically reduces the amount of actual code that has to be written. In this example, it was a 65 percent reduction in the amount of code across the application's three million lines. The net result is that, through a typical design-and-development cycle, we were able to realize a 50 percent or greater reduction in development time.
Having done that as a Web-based application, there's no client installation and no on-site provisioning. It's all centrally managed, so customers recognize dramatic reductions in operating costs. As in the example we shared with you a little earlier, because we're in a modern object-oriented architecture, with all the inheritance benefits that brings, we're able to modify and execute change requests, often in one-tenth of the time, and then deploy them immediately and effectively as Web applications.
Announcer: Thanks, Adam. With that we conclude our podcast. You have been listening to a sponsored BriefingsDirect presentation taken from a recent Nexaweb webinar on application modernization. Please find more information on these solutions at Nexaweb.com. Thanks for listening and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Nexaweb Technologies.
Transcript of a BriefingsDirect webinar with David McFarlane and Adam Markey on the economic and productivity advantages from application modernization. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.
Announcer: Hello, and welcome to a special BriefingsDirect presentation, a podcast created from a recent Nexaweb Technologies Webinar on application modernization.
The webinar examines how enterprises are gaining economic and productivity advantages from modernizing legacy and older client-server applications. The value of the logic, data, and integration patterns within these older applications can be effectively extracted and repurposed using tools and methods, including those from Nexaweb. That means the IT and business value of these assets can be reestablished in Web applications on highly efficient platforms.
We'll learn how Nexaweb has worked with a number of companies to attain new value from legacy and client-server applications, while making those assets more easily deployed as rich, agile Web applications and services. Those services can then be better extended across modern and flexible business processes.
On this podcast, we'll hear from Dana Gardner, principal analyst at Interarbor Solutions, as well as David McFarlane, COO at Nexaweb, and then Adam Markey, solution architect at Nexaweb.
First, please welcome our initial presenter, BriefingsDirect's Dana Gardner.
Dana Gardner: We're dealing with an awful lot of applications out there in the IT world. It's always astonishing to me, when I go into enterprises and ask them how many applications they have in production, that in many cases they don't know. In the cases where they do know, they're usually off by about 20 or 30 percent, when they go in and do an audit.
In many cases, we're looking at companies that have been around for a while, with 10 or 20 years' worth of applications. These can be on mainframes. They can be written in COBOL. They could still be running on Unix platforms. In a perfect world, we'd have an opportunity to go in and audit these, sunset some, and refactor others.
Today, however, many organizations are faced with manpower and labor issues. They've got skill sets that they can't bring in, even if they wanted to, for some of these older applications. There is, of course, a whole new set of applications that might not be considered legacy, but that are several years old now. These are N-tier and Java, distributed applications, .NET, COM, DCOM, a whole stew in many organizations.
What I am asking folks to do, now that we're into a situation where economics are probably more prominent than ever -- not that that's not usually the case in IT -- is to take a look at what applications are constraining their business. Not so much to worry about what the technology is that they are running on or what the skill sets are, but to start factoring what new initiatives they need to do and how can they get their top line and bottom line as robust as possible? How do they get IT to be an enabler and not viewed as a cost center?
This is really where we should start thinking about modernizing and transforming IT -- getting application functionality that is essential, but is in some way handicapping what businesses want to do.
We want to exploit new architectures and bring more applications into association with them. It's not just architectures in terms of technology, but approaches and methodologies like service-oriented architecture (SOA), or what some people call Web-oriented architecture (WOA), looking to take advantage of interfaces and speed of innovation so that organizations can start to improve productivity for their internal constituents, in this case usually employees or partners.
Then, increasingly because of the difficulty in bringing about new business during a period of economic downturn, they're reaching out through the Internet, reaching out through the channels that are more productive, less costly and utilizing applications to focus on new business in new ways.
SOA and mobile devices
Increasingly, as I mentioned, this involves SOA, but it also increasingly involves mobile. We need to go out and reach people through their mobile Internet devices, through their iPhone and their BlackBerry, and a host of other devices at the edge. You need to be able to do that with applications and you need to be able to do it fast.
So, the goal is flexibility in terms of which applications and services need to reach new and older constituencies at less cost and, over time, reduce the number of platforms that you are supporting, sunset some apps, bring them into a new age, a new paradigm, and reduce your operating costs as a result.
Information really is the goal here, even as we start, with a handful of applications, by focusing on the ones that are going to give us the biggest bang for the buck. We need to go in and thoughtfully approach these applications, bring them into use with other Web services and Web applications, and think about mashups and Enterprise 2.0 types of activities. That involves expanding the use of these new methodologies.
One of the things that's interesting about companies that are aggressively using SOA is that they also tend to be aggressive in using newer development platforms and tools. They're using dynamic languages, Web interfaces, and rich Internet application (RIA) interfaces. This is what's allowing them to take their newer applications and bring them into a service orientation for reuse. Some of those services can be flexible and agile.
That's not to say you can't do some of those things with the older applications as well. In many cases, tools are being brought about and third-party inputs, in terms of professional services and guidance, are coming around. I'm recommending to people to respond more quickly, to save operational costs, to get agile and reach out through these new edge devices and/or the Internet, and do it in a fairly short order.
It's amazing to me that companies that have moved in this direction can get applications out the door in weeks rather than months, and in many cases you can transform and modernize older applications on aging platforms just as quickly.
We want to move faster. We want to recognize that we need a higher payoff, because we also recognize that the line-of-business people, those folks that are tasked with developing new business or maintaining older business, are in a rush, because things are changing so quickly in the world around us. They often need to go at fast-break or breakneck speed with their business activities. They're going to look at IT to be there for them, and not be a handicap or to tell them that they have to wait in line or that this project is going to be six to eight months.
So, we need to get that higher agility and productivity, not just for IT, but for the business goals. Application modernization is an important aspect of doing this.
How does modernization fit in? It's not something that's going to happen on its own, obviously. There are many other activities, approaches, and priorities that IT folks are dealing with. Modernizing, however, fits in quite well. It can be used as a way to rationalize any expenditure around modernization, when you factor in that you can often cut your operating costs significantly over time.
You can also become greener. You can use less electricity, because you're leveraging newer systems and hardware that are multi core and designed to run with better performance in terms of heat reduction. There are more options around cloud computing and accessing some services or, perhaps, just experimenting with application development and testing on someone else's infrastructure.
By moving towards modernization you also set yourself up to be much more ready for SOA or to exploit those investments you have already made in SOA.
Compliance benefits
There are also compliance benefits for those organizations that are facing payment-card industry (PCI) standards in financial or other regulatory environments, freeing up applications in such a way that you can develop reports, share the data, and integrate the data. These are all benefits to your compliance issues as well.
As I mentioned earlier, by moving into modernization for older applications, you've got the ability to mash up and take advantage of these newer interfaces, reuse, and extended applications.
There is a whole host of rationalizations and reasons to do this from an IT perspective. The benefits are much more involved with these business issues and developer satisfaction, recognizing that if you are going to hire developers, you are going to be limited in the skill sets. You want to find ones that are able to work with the tools and present these applications and services in the interfaces that you have chosen.
Keeping operations at a lower cost, again, is an incentive, and that's something they can take out to their operating and financial officers and get that backing for these investments to move forward on application modernization and transformation.
One of the questions I get is, "How do we get started? We've identified applications. We recognized the business agility benefits. Where do we look among those applications to start getting that bang for the buck, where to get modern first?"
Well, you want to look at applications that are orphans in some respect. They're monolithic. They're on their own -- dedicated server, dedicated hardware, and dedicated stack and runtime environment, just for a single application.
Those are good candidates to say, "How can we take that into a virtualized environment?" Are there stacks that can support that same runtime environment on a virtualized server, reduce your hardware and operating costs as a result? Are they brittle?
Are there applications that people have put a literal and figurative wall around saying, "Don't go near that application. If we do anything to it, it might tank and we don't have the documentation or the people around to get it back into operating condition. It's risky and it's dangerous."
Conventional wisdom will say don't go near it. It's better to say, "Listen, if that's important to our business, if it's holding our business back, that's a great target for going in and finding a way to extract the logic, extract the data and present it as something that's much more flexible and easy to work with."
You can also look for labor issues. As I said, if skills have disappeared, why wait for the proverbial crash and then deal with it? It's better to be a little bit proactive.
We also need to look at what functional areas are going to be supporting agility as these business requirements change. If you're an organization where you've got supply chain issues, you need to find redundancy. You need to find new partners quickly. Perhaps some have gone out of business or are no longer able to manufacture or supply certain parts. You need to be fleet and agile.
If there are applications that are holding you back from being able to pick and choose in a marketplace more readily, that's a functional area that's a priority for getting out to a Web interface.
Faster, better, cheaper
People are going to be looking to do things faster, better, cheaper. In many cases those innovative companies that are coming to market now are doing it all through the Web, because they are green-field organizations themselves. They are of, for, and by the Web. If you're going to interact with them and take advantage of the cost, innovation, and productivity benefits they offer, your applications need to interrelate, operate, and take advantage of standards and Web services to play with them.
You also need to take a look at where maintenance costs are high. We've certainly seen a number of cases where by modernizing applications you have reduced your cost on maintenance by 20 or 30 percent, sometimes even more. Again, if this is done in the context of some of these larger initiatives around green and virtualization, the savings can be even more dramatic.
I also want to emphasize -- and I can't say it enough -- those SOA activities shouldn't be there for just the newer apps. The more older apps you bring in, the more return on investment you get for your platform modernization investments, as well as saving on the older platform costs, not to mention those productivity and agility benefits.
We also need to think about the data. In some cases, I have seen organizations where they have applications running and aren't really using the application for anything other than as a repository for the data. They have a hard time thinking about what to do with the data. The application is being supported at high cost, and it's a glorified proprietary database, taking up app server and rack space.
If you're looking at applications that are more data-centric in their usage, why not extract that data, find what bits of the logic might still be relevant or useful, put that into a service orientation, and reduce your cost, while extending that data into new processes and new benefits?
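The pattern Dana describes -- keeping the data and the few still-useful bits of logic while retiring the application around them -- can be sketched as a thin service facade. The following is an illustrative Python sketch with a hypothetical schema and class name, not something from the transcript; a J2EE shop would do the equivalent with a Java service layer.

```python
import json
import sqlite3

class LegacyDataService:
    """Thin service facade over data extracted from a retired application.

    Only the data and one still-relevant lookup rule are carried forward;
    the old app server and its proprietary front end go away.
    """

    def __init__(self, conn):
        self.conn = conn

    def customer_orders(self, customer_id):
        # The surviving business rule: fetch a customer's orders as JSON,
        # ready to be consumed by new Web processes and mashups.
        rows = self.conn.execute(
            "SELECT id, total FROM orders WHERE customer_id = ? ORDER BY id",
            (customer_id,)).fetchall()
        return json.dumps([{"id": r[0], "total": r[1]} for r in rows])

# Demo against an in-memory stand-in for the extracted legacy data
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 42, 99.5), (2, 42, 10.0), (3, 7, 5.0)])
svc = LegacyDataService(conn)
print(svc.customer_orders(42))
```

The point of the facade is that new processes consume plain JSON over a standard interface, while the proprietary application that used to sit in front of the data is sunset.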
It's also important to look at where the technical quality of an app is low. Many companies are working with applications that were never built very well and never performed particularly well, using old kludgy interfaces. People are not as productive and sometimes resist working with them. These are candidates for where to put your wood behind your arrow when it comes to application modernization.
In beginning the process, we need to look at the architecture targets. We need to think about where you're going to put these applications if you are refactoring them and bringing them into the Web standards process.
It's important to have capacity. We want to have enough architecture, systems, and runtime in place. We should think about hosting or collocation, where you can decrease your cost and the risk of capital expenditure, but at the same time, still have a home for these new apps.
You certainly don't want to overextend and build out platforms without the applications being ready. It's a bit of a balancing act -- making sure you have enough capacity, but at the same time performing these modernization transformation tasks. You certainly don't want to transform apps and not have a good home for them.
Also important is an inventory of these critical apps, based on some of the criteria we have gone through.
Crawl, walk, run
The nice thing about creating the categorization is that once you've got some processes in place on how to go about this, with one application you can extend that to others. The crawl-walk-run approach makes a great deal of sense, but when you've learned to crawl well, extend and reuse that to walk well, and then scale it from there.
This construction, deconstruction, rationalization process should also be vetted and audited in the sense that you can demonstrate paybacks. We don't want to demonstrate cost centers becoming larger cost centers. We want to show, at each step of the way, how this is beneficial in cost as well as productivity. Then, we need to focus continually on these business requirements, to make a difference and enhance these business processes.
There are some traps. It's easier said than done. It's complicated. You need to extract data carefully. If you start losing logic and access to data that are part of important business processes, then you're going to lose the trust and confidence, and some of your future important cost benefit activities might be in jeopardy.
It's important to understand the code. You don't want to go and start monkeying around with and extracting code, unless you really know what you're doing. If you don't, it's important to get outside help.
There are people who are not doing this for the first time. They've done it many times. They're familiar with certain application types and platforms. It's better to let them come in than for you to be a guinea pig yourself or take trials and tests as your first step. That's not a good idea when you're starting to deal with critical and important applications.
Stick to processes and methods that do work. Don't recreate the wheel, unless you need to, and once you have got a good wheel creation method, repeat and verify.
You need to be rigorous, be systemic, and verify results, as we have said. That's what's going to get you those transformational benefits, rather than piecemeal benefits. You're going to see how application modernization fits into the context of these other activities, and you're going to be well on the way to satisfying your constituencies, getting the funding you need, and then seeing more of your budget going to innovation and productivity and not to maintenance and upkeep.
There are a lot of great reasons to modernize, and we have mentioned a number of them. There are backwards and forwards compatibility issues. There are big payoffs in cost and agility, and now it's time to look at some of the examples of how this has been put into place.
Announcer: Thanks Dana. Now, we'll hear from David McFarlane, COO at Nexaweb, on some use-case scenarios for adopting and benefiting from these application modernization opportunities. Here is David McFarlane.
Understanding value
David McFarlane: We're going to go a little bit deeper and actually take a look at a case study of one of our clients, one of our successful implementations, and see the value that came out of it.
To really understand what value is, we have to understand how we're going to quantify it in the first place. We're probably all in IT here, and we're probably all IT heads, but we have to take a step back, take a top-down approach, and understand how we define that value in the business.
As Dana said earlier, application modernization impacts all areas of your business, and the three areas that it really impacts are business, operations, and IT. So, you have to step outside your role. You have to see what value the business would see out of it, what operations would see out of it, and also for yourself in IT, what gains and benefits you would get out of that. When you add them all together, you get the overall value for that application modernization.
Let's take a look at a real case study as an example. Just to set some background, we have a legacy system, a customer relationship management (CRM) call center application for one of our clients. They have about five call centers, each with around 50 employees, and they're on a C++ client-server application.
The important thing to note about this is that, in legacy systems, there are usually multiple instances of this application. Since it's a client-server app, we have to remember that it's also deployed and managed on each individual desktop. Each individual employee has their own installation on their desktop, which is sometimes a nightmare to manage for most IT shops.
We took that system and built a modernized system with it. We had a J2EE architecture with a desktop look and feel in the browser, as Dana talked about earlier. You get the real performance of an installed client-server application, but it's delivered over the Web with zero client install.
You don't have to do anything besides update your Web server, and everybody automatically has the new application, the new look and feel, the new business logic, and access to whatever data you've hooked it up to on the backend.
Also important is that the system we modernized is deployed on open standards. We used a J2EE architecture, and that means we're able to integrate with anything that you have on your back end via open Java application programming interfaces (APIs).
There is a vast array of open source products out there waiting to be used, to be integrated, and to modernize systems. There's also a large workforce that will be able to understand a Java application as opposed to a custom C++ application or even a COBOL application. We also consolidated it to one distributed instance, since we can now manage it centrally from one data center.
ROI analysis
When you're doing a modernization, you're probably going to have to do some sort of return on investment (ROI) analysis to understand exactly what you're going to get out of this application, and that's going to take some time.
If you're coming from an IT perspective, you might have most of the benefits already in your head: "I'll have people using Java instead of COBOL. I'll have all the developers focused on one development language, instead of multiple development languages. I'm going to be able to decrease my deployment time, etc."
But, when justifying something like this, you need to take a step back and, as we said before, look at the factors in the three areas that are most affected by application modernization. As Dana pointed out, that's business, operations, and IT. So, we go ahead and look at the business.
We have to ask a few questions here: "Who are my users? How long does each transaction take?" Say I'm a call center and it takes a few minutes for a user to get through a transaction. If I can cut that to one-and-a-half minutes or even one minute, I'm able to increase productivity significantly.
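As a back-of-the-envelope illustration of the transaction-time math David is describing (every input figure here is a hypothetical assumption, not a number from the case study):

```python
def yearly_hours_saved(agents, calls_per_day, minutes_before, minutes_after,
                       workdays=250):
    """Agent-hours freed per year by shaving time off each transaction."""
    saved_per_call = minutes_before - minutes_after
    return agents * calls_per_day * saved_per_call * workdays / 60.0

# e.g., 250 agents, 60 calls a day, each call cut from 2.0 to 1.5 minutes
hours = yearly_hours_saved(agents=250, calls_per_day=60,
                           minutes_before=2.0, minutes_after=1.5)
print(f"{hours:,.0f} agent-hours saved per year")  # 31,250 hours
```

Multiply a number on that scale by a loaded hourly labor cost and it is easy to see how a modest per-call improvement turns into a seven-figure yearly impact.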
The next part is operations. Not only is productivity increased, but what does it mean to have a modern application infrastructure? If previously I had to come in to work and sit down at my desktop, because that was the only place the application was installed, maybe I don't need to come in to work anymore. Maybe I can work from home. Maybe I can work from India, if I want to, as long as I have VPN access to that sort of application. You can begin to see the operational flexibility that you get out of that.
Then, as we look into the IT benefits, how long did it take to make a change to my legacy system? One of the biggest benefits we're seeing comes when moving from legacy C++ or PowerBuilder applications, where you really had to code each and every aspect of the user interface (UI), the business logic, and the specific data interaction, because there was no SOA to leverage and no hooks into services that had been built or were planned for the application.
Also, we have to think of what the developer actually had to do to make that change. In older technologies, they might not have a way to prototype the UI and show the business users feedback before they are able to get sign off on what they're going to build. They might have to program each and every element of the user interface, all the way down to writing SQL stored procedures that are application-specific to a database.
Going to a modern architecture, you're going to have services and you're going to have your object-relational management capabilities. You're going to have some great middle-tier applications like Spring and Struts to enhance the development. Obviously, with Nexaweb technologies, you have that ability to create the declarative user interfaces, which speeds up UI development time significantly.
We also have to ask what hardware and software the application runs on, and what licenses I'm paying for. As Dana pointed out earlier, you'll have a significant opportunity for maintenance savings when you go to a modern architecture.
Productivity gains
We asked all these questions, and we found some significant areas of value in our CRM modernization case. In the business, we actually saw a 15 percent gain in end-user productivity, which impacted our client by about $1.5 million a year. In these times, you're actually able to slim down or trim your workforce if you have a more productive application. In this case, the productivity gains meant CSRs were able to handle more calls and service customers more quickly. Ultimately, that ends up in end-user satisfaction and dollars saved as well.
Next, you have the operational value. What we had here was a decrease in audit time. We found that their auditors were going around to each individual desktop and seeing exactly which applications were installed on each computer. They had to look at each of the five instances, one in each call center, for auditing, instead of looking at one consolidated instance, with just one database and book of record for the whole operation. That saved a lot of auditing time for them, which is really great.
Another thing was that it improved the performance of another help desk. The application itself served a customer-support help desk, but the internal IT help desk actually saw a huge improvement. Because the application was centrally managed, all people had to do was go to a website or click a link to access it, instead of having to install software. As you know, when you install software, a ton of things can happen, and you have to do a lot of testing for that software as well. All of that was reduced, and we're saving about $15K there.
When you look at the IT benefits, we have that IT developer productivity gain that we talked about. We eliminated some hardware and software for those five instances and some of that maintenance cost. So, that's an $85K impact. There are also the deployment benefits of an RIA, when you're going from deploying applications on 250 computers to zero computers. You're going to see an immediate impact there, and that was around $250K for the time it took, the software that it took to push that out, and the support that it needed to run.
Because of the change management benefits from RIAs, the development productivity, and the ability to go from requirements to design, to testing, to production much more quickly than a client-server application, we're able to see a 90 percent gain there, which had a $200K impact.
When you look at it in total, the yearly bottom-line improvement was about $2.23 million for this one instance, with a one-time improvement of $85K for the hardware and the software that we got rid of. It was only a one-time investment of about $800K.
I say "only," but if you look at the business, operational, and the IT impacts together, you get payback in the first full year. If you were only coming from that IT perspective, you would have seen that the payback is actually a little bit longer than a year.
If you add all those numbers up, you get something a little less than $800K, about $700K, I believe. That will be about 14- or 15-month payback instead of about a 5- or 6-month payback. When you're trying to make a case for modernization, this is exactly what your CFO or your CEO needs to know -- how it affects your bottom line from all areas of the business, not just IT.
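The payback arithmetic David walks through can be sanity-checked in a few lines. This is a simple-payback sketch (no discounting), using the rounded figures quoted above; the straight division lands a little under the month counts quoted in the talk, which appear to be rounded conservatively:

```python
def payback_months(investment, yearly_benefit, one_time_benefit=0.0):
    """Simple (undiscounted) payback period in months."""
    return 12.0 * (investment - one_time_benefit) / yearly_benefit

# Figures in $K from the case study above
full_view = payback_months(800, 2230, one_time_benefit=85)  # all three areas
it_view = payback_months(800, 700)                          # IT benefits only

print(f"Full business view: ~{full_view:.1f} months to payback")
print(f"IT-only view:      ~{it_view:.1f} months to payback")
```

Either way, the point survives: counting business and operational value alongside the IT value roughly triples the apparent rate of return, which is exactly the comparison a CFO or CEO needs to see.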
Let's not forget the intangibles that come with application modernization. It's not always about the bottom line. There are some great things that you get out of a modern application infrastructure, and the first thing you get, when you look at the business, is improved response time.
Happier CSRs
The number one thing I could think of is that your customer service representatives (CSRs) are going to be happier, because once they click a button, it's not going to take two seconds to respond like the old application. It's going to be fast. It's going to be rich. You're not going to have any installation issues when you do an upgrade. It's going to be smooth.
You're going to have happier CSRs, and having happier CSRs means improved customer service and a better customer satisfaction level, when people get their calls through quicker and talk to a happy customer service representative.
Also, when you're doing application modernization, you have a good opportunity to automate manual portions of the business process. You can go in and say, "This person is cutting and pasting something into an Excel spreadsheet, and emailing this to somebody else as a report after they're done." Maybe there's an opportunity there to have that done automatically. So, it saves them time again. That's where you can really find your increased productivity.
When we look at operations, we actually enabled real estate consolidation. I didn't put those numbers in the ROI, because they were probably going to do that anyway, but it was an enabler. Having a technology to go from five call centers to one call center with distributed agents across the country and across the world saves the business a lot of money on the real estate, the power, and the infrastructure needed to have five call centers up and running.
Again, you get the workforce flexibility, because I can work from home, work from India, or come and work from the office. I could do this job from anywhere, if I have access to this application. Obviously, we're able to bring outsourced call centers online on-demand with that.
Then, we move on to IT. As I said before, it's short release cycles with more functionality. When release cycles are shorter, you can incrementally build in more features to your application, make people more productive, and make the application do more in less time, which is obviously what we want to do.
We have a standardized J2EE architecture, which means the people that you're going to look for to maintain the application are going to be out there. There is a huge number of Java developers out there waiting and ready to come in to maintain your application.
We're built on open standards to ensure that the application is ready for the future. There are a lot of RIA technologies that try to lock you in to one runtime and one development methodology. We use open standards as much as we can to push your application out the door as fast as possible, and be as maintainable as possible, and as ready for the future as possible.
Announcer: Thanks, David. Now, we'll hear from Adam Markey, solution architect at Nexaweb, on specific deployment examples of application modernization projects. Here, then, is Adam.
Enterprise-wide value
Adam Markey: As we look at these different customer examples, we really want to see how they've had an impact on value across the enterprise, and see, from a business point of view, the ability to increase market reach, improve user productivity, decrease time to market, increase customer engagement and loyalty, and sustain, if not build upon, competitive advantage.
We also want to look at the operations as well and understand how this new architecture has actually realized benefits in terms of a reduced real estate, greater utilization of global workforce, reduction in energy, moving to green architectures, and improving the overall vendor management.
For those closely responsible in the organization for delivering this capability, we want to look at IT and how this process helps deal with the rapidly changing demographics in the IT skills market. As the baby boomers move out of the job market, many of the legacy skills that we relied on so heavily through the years are becoming very rare and hard to find within the organization.
We'll also take a look at process efficiency, and generally how we can improve overall efficiency and cost in terms of licenses and the use of open source. So, let's take a closer look at a few examples to help illustrate that. There's nothing wrong with your screens here. This first example is the modernization of a Japanese foreign exchange trading platform.
In this case, this was a trading platform built by Enfour, Bank of Tokyo-Mitsubishi (BTM). The challenge that BTM had was that, while they were capable of satisfying their large corporate customers with their on-premises foreign exchange trading platforms, the small- and medium-sized enterprises (SMEs) were quite different in terms of what they required.
They needed a UI and an application that was much simpler for them to adopt. They didn't have the necessary IT infrastructure to be able to establish the complex on-premises systems. They needed something that had no IT barriers to adoption.
What we did for BTM with our partner Hitachi was to help modernize and transform the entire trading platform to the Web. Just to stress, this isn't simply an information portal; this is a fully functioning trading platform. There are over 500 screens. It's integrated with 120 different data sources, with very stringent service-level requirements on the deployment of the application.
We needed to be able to display any fluctuation in exchange rates from the Reuters feed in 200 milliseconds or less. We needed to be able to complete a closed-loop transaction in two seconds or less.
So, this is a fully functioning trading platform. What it's meant for BTM is that they've been able to dramatically increase adoption and penetration into the SME market. Fundamentally, these SME or institutional traders don't need any infrastructure whatsoever, just a browser. There is no client installation. They're able to self-serve, which means they can simply enter the URL, log in, and get started. This has been a tremendous cost reduction and has also driven revenue growth for this product line in penetrating this new market segment.
In the same field of foreign exchange trading, we were able to help a number of Japanese banks take their products and services global. Traditionally, the market had been very service-intensive through a call center. You dialed in and placed your trade with the trader over the phone. By being able to move this entire platform to the Web, we allowed them to go global and go 24/7.
Now, we have over 30,000 institutional traders using this trading platform and application to self-serve through operations, not just in Tokyo, but in Singapore, London, New York, Frankfurt, literally around the world.
New capabilities
Not only has it extended the product line with very little additional operational cost to the banks, but it's also allowed them to provide new capabilities to those customers. One, for example, is the ability to run a continuous global book.
In the traditional implementations of trading platforms, each one would be an on-premises installation, which meant that each region would actually have to close their books and close out their operations at the end of their working day. Because it's now managed and provisioned as one hosted system, it can actually run globally, allowing them to maintain those books and maintain common alerts across entities that themselves have a global footprint.
Not only were we getting them to a new market, but we were also allowing them to introduce new functionality. It allowed them to interact more closely with the customers, providing real-time chat facilities and allowing the bank's traders in Japan to interact directly with a trader as they exhibit certain behavior. It allowed them to offer custom contracts and has significantly increased the close rate of those applications.
So, a big impact in terms of market reach for the banks in Japan is one example. Let's take a look here at how we've been able to dramatically improve user productivity and dramatically reduce the business process time for large organizations.
This is a representation for one of the largest telecommunications groups in Europe. The challenge they were facing was a request for proposal (RFP) process that was very complicated. They had to provide quotations for countrywide mobile platforms, a very large, complex design process, which was performed through one legacy application acting as a product configurator.
Then, they would go to another application for doing the parts costing and bill of material assessment, another application for the pricing, and finally, an overall RFP approval process for these large $100 million-plus projects running over 10 years.
The whole process was taking them anywhere up to four weeks. It was fragmented and error prone, with spreadsheets and files flying around the globe as people tried to complete the process.
What we were able to do for this organization was to streamline the process and present it as a single-branded Web-based workflow that brought all the different elements together, and, most importantly, ran on top of a SAP NetWeaver infrastructure. In fact, the workflow was designed to have an SAP look and feel.
End users didn't know when they were in or outside of SAP. They didn't care and they didn't need to, because as an end-to-end process, SAP acts as the overall system of record, providing a much higher degree of control, accuracy, and a dramatic reduction in errors.
The great result, from a user productivity point of view, is that they've been able to go from a process that took four weeks to a process that now takes four hours or even less -- a dramatic reduction. More important was the ability to increase the accuracy of these processes.
Desktop-like experience
These Web applications, I should stress, are really a desktop-like experience for the end user. We think of them and talk about them as a desktop in a browser. Everything that you could do as a desktop application with all the user navigation and productivity in very intense data environments, you can do in a browser-based application as deployed in this solution.
Let's take another look at another example where Web architecture and rich Web interfaces allowed us to dramatically improve customer loyalty and customer engagement.
You may be familiar with the concept of the extended enterprise, whereby more and more organizations need to open up traditionally back-office processes and back-office systems, still managed through green-screen UIs in the bowels of the company. In order to truly engage their customers and improve the process flow, more and more of those systems are being opened up and presented to customers through rich, engaging Web applications.
This is an example of that. This is a company in the Netherlands called Waterdrinker, which is actually the largest flower distributor in Europe, a very significant business for them. We helped them create a Web-based, self-service ordering process that dramatically reduces the dependency on customer service reps. It was similar to the scenario for the foreign-exchange trading platform: we were migrating customer interaction to self-service Web platforms without the need for human intervention and costly CSRs.
But, it's much more than that. We're providing a much richer experience for the user, a much more engaging experience for the user, where they're able to more dynamically navigate through the catalog and determine the optimal order for them with all kinds of what-if calculations and analysis that are provided for them in real time at their own discretion.
The net result has been a significant increase in customer satisfaction and engagement. Loyalty we're yet to see, because it's still relatively new, but based on the response, reaction, and conversion that we have seen through these Web-based interfaces, loyalty benefits will follow soon after. In addition, with a Web-based UI, you're able to easily and effectively customize the user interface for different users and communities.
In this case, they're able to provide a custom UI solution that integrates their catalog ordering process into their partners' processes. They distribute through local partners and local Websites, and they're able to provide this architecture as a white-label capability and then brand it according to the local distributor, delivering a rich branded experience through their partner.
Let's talk generally about competitive advantage. Obviously, all those things that we have talked about and shown with regard to different customers, and Dana has talked about in aggregate, offer all kinds of competitive advantage.
But, there's a certain element of competitive advantage that I would like to emphasize in this transformation process. Organizations, through the years, have basically instantiated and codified their best practices in the workflows within those legacy systems. Those business rules represent years of accumulated competitive intelligence, and they are often the point at which you can realize tremendous competitive advantage.
Razor-thin margins
This is never truer than in the razor-thin margins of the consumer packaged goods (CPG) business, where a lot of the margin for a customer can actually be determined through the appropriate inventory, logistics, and pricing management, literally as goods are en route. What we've done for customers like these is to enable them to quickly and effectively extract the business rules that are buried in the legacy systems.
Frankly, nobody knows how they work anymore, and at best they're not very well documented. But we have allowed customers to extract those business rules that represent the competitive advantage and consolidate them into a set of corporate-wide rules that can be more effectively managed.
One issue in a traditional legacy environment is that, as you establish business rules in terms of the legacy implementation, each one is monolithic. They start to create their own derivatives, as people program, tweak, and modify. At the end of a 10-year process, the rules barely resemble each other in each of the iterations.
In our transformed architecture, we're able to provide an environment, in which you can centrally manage, control, and modify those business rules and have them consistently and immediately applied across all the necessary touch points. Through this process, we can dramatically reduce human error.
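The centrally managed rules described here can be sketched in a few lines of Python. Everything below, the rule names, thresholds, and `Order` fields, is purely illustrative and not drawn from any customer system mentioned in the discussion; the point is only that rules defined once in a single registry are applied consistently at every touch point.

```python
# Illustrative sketch of a central rule book: rules are defined once,
# audited in one place, and applied uniformly to every order, whatever
# channel (Web, call center, partner site) the order arrives from.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Order:
    customer: str
    amount: float
    discount: float = 0.0
    approved: bool = False

@dataclass
class Rule:
    name: str
    applies: Callable[[Order], bool]   # predicate: does this rule fire?
    action: Callable[[Order], None]    # effect on the order when it fires

class RuleBook:
    """Single place to define, modify, and track business rules."""
    def __init__(self) -> None:
        self.rules: List[Rule] = []

    def register(self, rule: Rule) -> None:
        self.rules.append(rule)

    def evaluate(self, order: Order) -> List[str]:
        fired = []
        for rule in self.rules:
            if rule.applies(order):
                rule.action(order)
                fired.append(rule.name)
        return fired

rules = RuleBook()
rules.register(Rule(
    "bulk-discount",                       # hypothetical rule
    lambda o: o.amount >= 10_000,
    lambda o: setattr(o, "discount", 0.05),
))
rules.register(Rule(
    "auto-approve-small",                  # hypothetical threshold rule
    lambda o: o.amount < 500,
    lambda o: setattr(o, "approved", True),
))

order = Order(customer="acme", amount=12_000)
fired = rules.evaluate(order)
```

Because every channel calls the same `RuleBook`, changing a rule in one place changes it everywhere, which is the opposite of the derivative-per-installation drift described above.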
This architecture allows us to provide support tools and business rules in a form that's readily accessible to the end user. You might say, "Wait a minute. It's a Web-based application, and when I'm sitting face to face with my customers, I'm not going to have access to the Web."
As you would expect in these solutions, we're able to architect them so that the same application can be deployed as a Web application or run standalone. A great example of that is Aflac, where we created their premium-calculation solution that is used across all customer touch points by 38,000 users. And 6,000 of those are agents who go door-to-door.
Part of the architecture and part of the challenge was to deliver that insurance-calculation solution in such a way that when the agent is sitting across the kitchen table from their customer, they could still perform a level of custom quotation. They could produce the necessary documentation to close the customer there and then, on a standalone laptop with a local printer, right across the kitchen table. That's all part of bringing those business rules that represent the years of competitive advantage successfully to the Web.
Let's take a look at how some of these capabilities impact the operations themselves. Here, we'll take the example of a call-center application. This was a transformation, for the Pepsi Bottling Group, of their customer-equipment tracking system. It was a PowerBuilder application, maybe 10 years old, that we successfully moved to the Web.
The real business value is that, by creating a Web-based environment that could be deployed in any call center, we provide the flexibility and agility for organizations to better utilize those call centers and that real estate, often consolidating from a large number of call centers to a smaller set of agile call centers, where they can put a lot of different processes through the same infrastructure.
Cost-management advantage
This has tremendous advantages, as you can imagine, in terms of cost management for those customers. We're even able to take that a step further with the advent of Internet-based telephony. It's now possible to engage home-office operators through a voice over Internet protocol (VoIP) infrastructure.
Those operators not only have the benefit of the call-center application as a Web-based application accessible through their home broadband, but can actually have the same level of computer telephony integration (CTI) that they would have had if they sat in the call center, by virtue of the VoIP-based CTI technologies that are available.
This is offering tremendous operating improvements in terms of, for example, real-estate consolidation. Also, looking at operations and the ability to optimize the use of the workforce, we have a situation where we deployed a very complex laboratory information-management solution for AmeriPath, now part of Quest Diagnostics. This is part of a pathology-services process that requires very experienced technicians to participate.
The joy of being able to deploy this as a Web-based application is that you get great skills mobility, which means that technicians anywhere, provided they have Web access, can actually participate in a diagnostic process without the need to move sensitive Health Insurance Portability and Accountability Act (HIPAA) data. So, HIPAA data that has to be stored in one place can be made accessible to technicians in any location, who can then participate in a process 24/7.
The value to IT is manifold. We'll take a quick look at some of those before we jump into the value equation itself. This is an example with SunGard Shareholder Systems, where they wanted to modernize their commercial product line, a 401k management application. I'm sure they're pretty busy these days.
It was originally deployed as an IBM-Oracle mainframe solution with a C++ front end. We modernized it through a pure Web application, but, from an IT development point of view, the benefits of being in that Web architecture are manifold. First and foremost, they were able to manage this entire development process with one person in the US, and a whole development team offshore in India, dramatically reducing the time and cost.
In this new architecture, the ability to respond to program-change requests is tremendously different. We're able to program change requests in one-tenth of the time and, by virtue of being in a Web architecture, deploy those in what are now weekly release cycles, instead of the six-month cycles you would typically see with a point solution.
As we're running a little long here, I won't go into all of these, but there are many different ways in which the modern architecture really played into creating significant additional IT value.
We provide a process we call Nexaweb Advance, which is an end-to-end transformation process that allows us to dramatically reduce the time, risk, and costs of this overall implementation. It starts with a capture phase that is able to go in and interrogate legacy systems and dramatically reduce the amount of time and effort to document the code that typically is not well documented.
Then it goes through a model-transformation process that dramatically reduces the amount of actual code that has to be written. In this example, it was a 65 percent reduction in the amount of code in a three-million-line application. The net result is that, through a typical design and development cycle, we were able to realize a 50 percent or greater reduction in development time.
Having done that as a Web-based application, there's no client installation and no on-site provisioning. It's all centrally managed, so customers recognize dramatic reductions in operating costs. In the example that we shared with you a little earlier, because we're in a modern object-oriented architecture with all the inheritance benefits that brings, we're able to modify and execute change requests quite often in one-tenth of the time and then deploy them immediately and effectively as Web applications.
Announcer: Thanks, Adam. With that we conclude our podcast. You have been listening to a sponsored BriefingsDirect presentation taken from a recent Nexaweb webinar on application modernization. Please find more information on these solutions at Nexaweb.com. Thanks for listening and come back next time.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Sponsor: Nexaweb Technologies.
Transcript of a BriefingsDirect webinar with David McFarlane and Adam Markey on the economic and productivity advantages from application modernization. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.
Wednesday, March 11, 2009
BriefingsDirect Analysts Discuss Solutions for Bringing Human Interactions into Business Process Workflows
Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 37 on aligning human interaction with business process management.
Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.
Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.
Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 37.
This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS, visual orchestration system, as well as with the support of TIBCO Software.
I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions. Our topic this week, the week of Feb. 9, 2009, returns to the essential topic of bringing human activity into alignment with IT supported business processes.
The need to automate and extend complex processes is obvious. What's less obvious, however, is the need to join the physical world of people, their habits, needs, and perceptions with the artificial world of service-oriented architecture (SOA) and business process management (BPM). This will become all the more important, as cloud-based services become more common.
We're going to revisit the topic of BPEL4People, an OASIS specification that we discussed when it first arrived, probably a year-and-a-half ago. We'll also see how it's progressing with someone who has been working with the specification at OASIS since its beginning.
I'd like to welcome our guest this week, Michael Rowley, director of technology and strategy at Active Endpoints. Welcome, Mike.
Michael Rowley: Thank you.
Gardner: I'd also like to introduce our IT analyst guests this week. Our panel consists of regular panelist Jim Kobielus, senior analyst at Forrester Research. Welcome back, Jim.
Jim Kobielus: Thanks, Dana. Hi, everybody.
Gardner: And someone who is beginning to become a regular, JP Morgenthal, independent analyst and IT consultant. Welcome back, JP.
JP Morgenthal: Thanks, Dana. Hi, everyone.
Gardner: Let's go to you first, Mike, as our guest. I've pointed out that Active Endpoints is the sponsor of the show, so I guess we will try to be nice to you, but I can't guarantee it. Tell us a little bit about your background. You were at BEA for some time. You've been involved with Service Component Architecture (SCA) and a few other open standards around OASIS. Give us the bio.
Rowley: I was at BEA for five years. I was involved in a couple of their BPM-related efforts. I led up the BPELJ spec effort there as part of the WebLogic integration team. I was working in the office of the CTO for a while and working on BPEL-related efforts. I also worked on the business process modeling notation (BPMN) 2.0 efforts while I was there.
I worked a little bit with the ALBPM team as well, and a variety of BPM-related work. Then, I've been at Active Endpoints for a little over half a year now. While here, I am working on BPEL4People standards, as well as on the product itself, and on some BPMN related stuff as well.
Gardner: Let's just jump into BPEL4People. Where do we stand, and is this getting traction with people? Not to be a punster, but do people grok BPEL and BPEL4People?
Good feedback
Rowley: We've had some very good feedback from our users on BPEL4People. People really like the idea of a standard in this area and, in particular, the big insight behind BPEL4People, which is that there's a separate standard, WS-Human Task, that basically covers the worklist aspect of a business process, versus the control flow that you get in the BPEL4People side of the standard. So, there's BPEL4People as one standard and WS-Human Task as another, closely related standard.
By having this dichotomy you can have your worklist system completely standards based, but not necessarily tied to your workflow system or BPM engine. We've had customers actually use that. We've had at least one customer that's decided to implement their own human task worklist system, rather than using the one that comes out of the box, and know that what they have created is standards compliant.
This is something that we're seeing more and more. Our users like it, and as far as the industry as a whole, the big vendors all seem to be very interested in this. We just recently had a face-to-face and we continue to get really good turnout, not just at these meetings, but there's also substantial effort between meetings. All of the companies involved -- Oracle, IBM, SAP, Microsoft, and TIBCO, as well as Active Endpoints -- seem to be very interested in this. One interesting one is Microsoft. They are also putting in some special effort here.
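The worklist side of that split rests on the task lifecycle that WS-Human Task defines: a task is offered to potential owners, claimed by one of them, started, and completed. The Python below is a simplified illustration of that state model only, not an implementation of the actual WS-Human Task API, and the real specification defines additional states and operations beyond the four shown.

```python
# Schematic worklist task following the WS-Human Task lifecycle idea:
# Ready -> Reserved (claimed) -> InProgress -> Completed.

from enum import Enum

class State(Enum):
    READY = "Ready"
    RESERVED = "Reserved"
    IN_PROGRESS = "InProgress"
    COMPLETED = "Completed"

class HumanTask:
    def __init__(self, name, potential_owners):
        self.name = name
        self.potential_owners = set(potential_owners)
        self.actual_owner = None
        self.state = State.READY
        self.outcome = None

    def claim(self, user):
        # Only a potential owner may claim a Ready task.
        if self.state is not State.READY:
            raise ValueError("task is not claimable")
        if user not in self.potential_owners:
            raise PermissionError(f"{user} is not a potential owner")
        self.actual_owner, self.state = user, State.RESERVED

    def start(self, user):
        if self.state is not State.RESERVED or user != self.actual_owner:
            raise ValueError("only the claiming owner may start")
        self.state = State.IN_PROGRESS

    def complete(self, user, outcome):
        if self.state is not State.IN_PROGRESS or user != self.actual_owner:
            raise ValueError("only the working owner may complete")
        self.outcome, self.state = outcome, State.COMPLETED

task = HumanTask("approve-expense", potential_owners={"alice", "bob"})
task.claim("alice")
task.start("alice")
task.complete("alice", outcome="approved")
```

Because this lifecycle is standardized separately from the orchestration engine, any compliant worklist client, home-grown or off-the-shelf, can manage the same tasks, which is exactly the interoperability the customers above were after.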
Gardner: I want to ask you a question, but at two levels. What is the problem that we're trying to solve here? Let's ask that first at the business level and then at the technical level?
Rowley: At the business level, it's pretty straightforward. It's essentially the promise of workflow systems, in which you can automate the way people work with their computers and interact with other people by pulling tasks off of a worklist and then having a central system, the BPM engine, keep track of who should do the next thing, look at the results of what they have done, and based on the data, send things for approval.
It basically captures the business process, the actual functioning of a business, in software in a way that you can change over time. It's flexible, but you can also track things, and that kind of thing is basic.
Gardner: Before you go to the technical issues, one of the things that's really interesting to me on this is that I understand the one-way street of needing to take processes, making that understood, and then finding out who the people are who can implement it. But, is this a two-way street?
Is it possible for the people who are involved with processes in the line of business, in the field, to then say, "Listen, this doesn't quite work. Sometimes you can't plan things in advance. We have some insight as to what we think the process should be and how to improve it, so how can we then relate that back into what the SOA architecture is delivering?" Are we on a two-way street on this?
Rowley: Absolutely. One value of a BPM engine is that you should be able to have a software system, where the overall control flow, what's happening, how the business is being run can be at the very least read by a nontechnical user. They can see that and say, "You know, we're going through too many steps here. We really can skip this step. When the amount of money being dealt with is less than $500, we should take this shortcut."
That's something that at least can be described by a layperson, and it should be conveyed with very little effort to a technical person who will get it or who will make the change to get it so that the shortcut happens. I'm leery about the end user, the nontechnical person, going in and mucking with fundamental control flow, without at least collaborating with somebody who can think about it from more of an IT angle.
Gardner: No. Clearly, we want to have a lifecycle between design, requirements and refinements, but not just throw the keys to the locker room out of the window. What is it technically that we need to overcome in order to solve those problems?
Need for standards
Rowley: I'm going to take this from a standards aspect, because one of the hardest questions is what you standardize and how you divvy up the standards. One thing that has slowed down this whole vision of automating business process is the adoption of standards.
Let's say a business school wants to describe how to do management and how to run your organization. Right now, I don't believe any of them have, as part of the coursework for getting an MBA, something that says, "Here's how you deal with the BPM engine to design and control your organizations."
The reason it isn't at that level of adoption yet is because the standards are new and just being developed. People have to be quite comfortable that, if they're going to invest in a technology that's running their organization, this is not just some proprietary technology.
Gardner: We're at that chicken and egg stage, aren't we, before we can get this really deeply adopted?
Rowley: Yes. I think we're spinning up. We're starting to get the kind of momentum that's necessary, with all the vendors getting on board. Oftentimes, with things like this, if the vendors can all get on the same bandwagon at the same time, the users get it. They see that, "Okay, now this is real. This is not just a standard that is a de jure standard, but it's actually a de facto standard as well."
Gardner: Let's go to Jim Kobielus. Jim, how important is this, and how might this chicken-and-egg conundrum get jump-started?
Kobielus: It's extremely important. One thing that users are challenged with all the time in business is the fact that they are participating in so many workflows, so many business processes. They have to multi-task, and they have to have multiple worklists and to-do lists that they are checking all the time. It's just a bear to keep up with.
It's a real drag on productivity, when you've got tasks coming from all angles at you and you're floundering, trying to find a way to manage them in a systematic way, to roll them up into a single worklist.
BPEL4People, by providing an interoperability framework for worklisting capabilities of human workflow systems, offers the promise of allowing organizations to help users have a single view of all of their tasks and all the workflows in which they are participating. That will be a huge productivity gain for the average information worker, if that ever comes to pass.
That's why I agree with Mike that it's critically important that the leading BPM and workflow vendors get on board with this standard. In many ways, I see BPEL4People as having a similar aim to business intelligence in general. Where business intelligence environments are geared toward providing a single view of all business metrics, BPEL4People is trying to provide a single view of all the business processes that you either participate in or manage.
Process steward
A term that I have batted around -- I don't think it's really gained any currency -- is the notion of a process steward, somebody whose job it is to define, monitor, track, and optimize business processes to achieve greater productivity and agility for the business.
What Mike was getting at, which was really interesting, is that you want a human workflow environment that not only wraps up all of your tasks in a single worklist, regardless of the back-end execution engine. You also want the ability, not only for the end user but especially for the process steward, to begin to do what-if analysis in terms of re-engineering. They may have jurisdiction over several processes and have a single dashboard, as it were, looking at the current state and the dependencies of the various workflows they're responsible for.
This is critically important for SOA applications where human workflows are at the very core of the application.
Gardner: JP, do you agree with me on this two-way street, where the users, the people who are actually doing the work, feel like they are empowered at some level to contribute back into refinement? It seems to me that otherwise workers tend to say, "Okay, I can't have any say in this process. I don't agree with it. Basically, I do an end run around it. I'm going to find ways to do my work that suits me and my productivity." Then, that value and intelligence is lost and doesn't ever make it back into the automated workflow. How important from your perspective is this two-way street capability?
Morgenthal: I'm going to answer that, but I'd like to take a step back, if I could, to answer the business problem. Interestingly enough, I've been working on and researching this particular problem for the past few months. One interesting aspect from the business side is that this has been looked at for quite a while by the business, but hasn't fully been identified and ferreted out as a niche.
One key term that has been applied here, which I found only in government, is what they call "suspense tracking." That's a way of saying that something leaves the process and goes into "ad hoc land." We don't know what happens in there, but we control when it leaves and we control when it comes back.
I've actually extended this concept quite a bit and I am working on getting some papers and reports written around something I am terming "business activity coordination," which is a way to control what's in the black hole.
That's what you're talking about -- controlling what's happening in that black hole. It ties into the fact that humans interact with humans, humans interact with machines, and data is changing everywhere. How do we keep everything on track, how do we keep everything coordinated, when you have a whole bunch of ad-hoc processes hitting this standardized process? That requires some unique features. It requires the ability to aggregate different content types together into a single place.
An example that was mentioned earlier, where you have this thing that happens and somebody does something and then something else. The next step is going to analyze what that step does. The chances are that's related to some sort of content, probably semi-structured or maybe even unstructured content, something like a negotiation over what date something will occur. It's often human based, but when that date locks, something else will trigger, maybe the release of a document, or an invoice, or something out of an automated system.
So, you have these ongoing ad hoc processes that occur in business everyday and are difficult to automate. I've been analyzing solutions to this, and business activity coordination is that overlap, the Venn diagram, if you will, of process-centric and collaborative actions. For a human to contribute back and for a machine to recognize that the dataset has changed, move forward, and take the appropriate actions from a process-centric standpoint, after a collaborative activity is taking place is possible today, but is very difficult. I don't necessarily agree with the statement earlier that we need to have tight control of this. A lot of this can be managed by the users themselves, using common tools.
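The suspense-tracking idea, recording when work leaves the automated process for an ad hoc human phase and resuming it automatically once the awaited data appears, can be sketched with a small coordinator. The class, method, and field names below are hypothetical illustrations, not drawn from any product discussed here.

```python
# Sketch of suspense tracking: work is parked when it leaves the
# controlled process, human edits accumulate as data changes, and the
# automated flow resumes as soon as the field it is waiting on is set.

from datetime import date

class SuspenseItem:
    """One unit of work parked in 'ad hoc land' until a field is filled."""
    def __init__(self, item_id, awaiting_field):
        self.item_id = item_id
        self.awaiting_field = awaiting_field
        self.data = {}
        self.resumed = False

class Coordinator:
    def __init__(self):
        self.suspended = {}
        self.log = []           # audit trail of work leaving and returning

    def suspend(self, item):
        self.suspended[item.item_id] = item
        self.log.append(("out", item.item_id))

    def update(self, item_id, field, value):
        """Called when humans change data (chat, e-mail, a portal edit)."""
        item = self.suspended[item_id]
        item.data[field] = value
        if field == item.awaiting_field:
            self._resume(item)  # the awaited data arrived; re-enter the flow

    def _resume(self, item):
        item.resumed = True
        del self.suspended[item.item_id]
        self.log.append(("in", item.item_id))

coord = Coordinator()
invoice = SuspenseItem("inv-42", awaiting_field="delivery_date")
coord.suspend(invoice)                                 # leaves the process
coord.update("inv-42", "notes", "negotiating date")    # still suspended
coord.update("inv-42", "delivery_date", date(2009, 4, 1))  # resumes
```

The coordinator doesn't try to script what happens inside the black hole; it only watches the shared data, which is the overlap of process-centric and collaborative work described above.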
Solid foundation
One thing I'm looking at is how SharePoint, more specifically Windows SharePoint Services, acts as a solid foundation that allows humans and machines to interact nicely. It comes with a core portal that allows humans to visualize and change the data, but the behavioral connections to actually notify workflows that it's time to go to the next step, based on those human activities, are really critical functions. I don't see them widely available through today's workflow and BPM tools. In fact, those tools fall short, because of their inability to recognize these datasets.
They'll eventually get there. What you see today with regard to workflow and these BPM and workflow management tools is really around enterprise content management. "Jim approved this, so now Sally can go buy her ticket." Well, whoopie do. I could have done that with Ruby code in about ten minutes.
Gardner: It tends to follow a document trail rather than a process trail, right?
Morgenthal: Exactly. So, BPEL4People, from a standards perspective, is a standard around suspense tracking. All I'm controlling is going into the black hole and coming out of the black hole. Neither WS-Human Task nor BPEL4People addresses how I control what's happening inside the black hole.
Rowley: Actually, it does. WS-Human Task does talk about how you control what's in the black hole -- what happens to a task and what kinds of things can happen to a task while it's being handled by a user. One of the things about Microsoft's involvement in the standards committee is that they have been sharing a lot with us about SharePoint, and we have been discussing it. This is all public. The nice thing about OASIS is that everything we do is in public, along with the meeting notes.
The Microsoft people are giving us demonstrations of SharePoint, and we can envision, as an industry, as a bunch of vendors, the possibility of interoperability with a BPEL4People business-process engine like the ActiveVOS server. Maybe somebody doesn't want to use our worklist system and wants to use SharePoint, and some future version of SharePoint will have an implementation of WS-Human Task, or possibly somebody else will do an implementation of WS-Human Task.
Until you get the standard, that vision that JP mentioned about having somebody use SharePoint and having some BPM engine be able to coordinate it, isn't possible. We need these standards to accomplish that.
Gardner: Mike, doesn't governance come into play in this as well? If we want to reach that proper balance between allowing the ad hoc and the worker-level inputs into the system, and controlling risk, security, compliance, and runaway complexity, aren't policies and governance engines designed to try to produce that balance and maintain it?
Morgenthal: Before he answers, Dana, I have one clarification on your question. "Ad hoc" is going to occur, whether you allow it to occur or not. You've got the right question: How can the business attain that governance?
Gardner: Okay.
Rowley: There is governance over a number of things. There's governance that's essentially authorization for individual operations or tasks: who can change what documents once they've been signed? Who can sign? Who can modify what? That's at the level of an individual task.
Then there's also who can make a formal change to the process, as opposed to ad-hoc changes, where people go in and collaborate out of band, whether you tell them they can or not. But, in the formal process, who is allowed to do that? One nice thing about a BPM is that you have the ability to have authorization decisions over these various aspects of the business process.
Gardner: This strikes me as hugely important, particularly now in our economy. This is really the nub against which productivity ends up getting hamstrung or caught up. If we're looking for transformation-level benefits and to bring business requirements and outcomes into alignment with IT, this is the real issue, and it happens at so many different levels.
I can even see this progressing now towards complex event processing (CEP), where we want to start doing that level of high-scale and high-volume complex events across domains and organizational boundaries. But, again, we're going to bring people into that as well and reflect it both ways. Jim Kobielus, do you agree that this is hugely important and yet probably doesn't get a lot of attention?
Kobielus: The CEP angle?
Need for interactivity
Gardner: No, the overall issue that, if we can get transformational benefits and productivity that help make the business and financial case for investing in things like SOA and CEP, then the interactivity between the tactile and the human and the automated and the systems needs to develop further.
Kobielus: That's a big question. Let me just break it down to its components. First, with CEP we're talking about real time. In many ways, it's often regarded as a subset of real-time business intelligence, where you have the consolidation, filtering, and aggregation of events from various sources being fed into a dashboard or to applications in which rules are triggered in real time and stuff happens.
In a broader sense, if you look at what's going on in a workflow environment, it's simply a collection of events, both those events that involve human decision makers and those events that involve automated decision agents and what not.
Looking at the fact that BPEL and BPEL4People are now two OASIS standards that have roughly equal standing is important. It reflects the fact that in an SOA, underlying all the interactions, all the different integration approaches, you have this big bus of events that are happening and firing all over the board. It's important to have a common orchestration and workflow framework within which both the actions of human beings and the actions of other decision agents can be coordinated and tracked in some unified way.
In terms of driving home the SOA value proposition, I'm not so sure that the event-driven architecture is so essential to most SOA projects, Dana, and so it's not clear to me that there is really a strong CEP component here. Fundamentally, when we're talking about workflows, we're talking about more time lags and asynchronous interactions. So, the events angle on it is sort of secondary.
Gardner: Let me take that back to Mike Rowley. I'm looking for a unified theory here that ties together some of what we have been talking about at the people process level with some of this other, larger event bus as Jim described at that more automated level. Are they related, or are they too abstract from one another?
Rowley: No, they're related. It's funny. I bought into everything that Jim was just saying, except for the very end, where he said that it's not really relevant. A workflow system or a business process is essentially an event-based system. CEP is real-time business intelligence. You put those two together and you discover that the events that are in your business process are inherently valuable events.
You need to be able to discover, over a wide variety of business processes, a wide variety of documents, or a wide variety of sources, and be able to look for averages, aggregations, and sums, and the joins over these various things, to discover a situation where you need to automatically kick off new work. New work is a task or a business process.
What you don't want to have is for somebody to have to go in and monitor or discover by hand that something needs to be reacted to. If you have something like what we have with ActiveVOS, which is a CEP engine embedded with your BPM, then the events that are naturally business relevant, that are in your BPM, can be fed into your CEP, and then you can have intelligent reaction to everyday business.
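The pattern Rowley describes, business-process events feeding a CEP-style aggregation that kicks off new work, can be sketched minimally. The event shape, window size, and threshold below are invented for the example; a real embedded CEP engine would express this declaratively rather than in hand-written code:

```python
from collections import deque

WINDOW = 5          # number of recent events to aggregate over
THRESHOLD = 1000.0  # total order value that should trigger new work

fired = []  # tasks the BPM engine would be asked to start

def start_task(name, payload):
    # Stand-in for handing new work to the BPM engine's worklist.
    fired.append((name, payload))

recent = deque(maxlen=WINDOW)

def on_event(event):
    """Called for each business-relevant event coming off the process."""
    recent.append(event)
    total = sum(e["amount"] for e in recent)
    if total > THRESHOLD:
        start_task("review-order-spike", {"window_total": total})
        recent.clear()  # don't re-fire on the same events

for amount in (200.0, 300.0, 250.0, 400.0):
    on_event({"type": "order.approved", "amount": amount})

# The fourth event pushes the windowed total over the threshold,
# so exactly one review task is kicked off automatically.
assert fired == [("review-order-spike", {"window_total": 1150.0})]
```

Nobody has to watch a monitor to spot the spike; the aggregation over the event stream does the discovering, which is Rowley's point.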
Eventing infrastructure
Kobielus: Exactly, the alerts and notifications are inherent in pretty much any workflow environment. You're quite right. That's an eventing infrastructure, and that's an essential component. I agree with you. I think the worklist can be conceptualized as an event dashboard with events relevant to one decision agent.
Rowley: It's more than just alerts and notifications. Any BPM can watch for a threshold and give somebody a notice when it has been exceeded. This is about doing things like joining over event streams or aggregating over event streams, the sorts of things that general-purpose CEP capabilities are important for.
Gardner: JP, do you agree that we have some commonality here between CEP and its goals and value, and what we are talking about more at the human tactile workflow level?
Morgenthal: From my experience, what I've been looking at with regard to this is what I'm calling "business activity coordination." I think there is important data to be meted out after the fact about how certain processes are running in organizations. When companies talk about waste and reengineering processes, a lot of what they don't understand about processes, the reason they never end up changing, is that these ad-hoc areas are not well understood.
Some aspects of CEP could be helpful, if you could tag this stuff going on in that black hole in such a way that you could peer into the black hole. The issue with not being able to see in the black hole is not technical, though. It's human.
Most often, these things are distributed tasks. It's not like a process that's happening inside of accounting, where Sally walks over to Joe and hands him a particular invoice, and says, "Oh look, we could have just made that electronic." It's something leaving this division and going into that division, or it's going from this department to that department to that department. There is no stakeholder to own that process across all those departments, and data gets lost.
You're not going to find that with CEP, because there are no automation tags at each one of those milestones. It could be useful for a postmortem and to reengineer after the fact, but somebody has got to get a handle on the fact that there is stuff happening in the black hole, and automating what's in the black hole has to get started.
Kobielus: I've got a slightly better and terser answer than the one I gave a moment ago. A concept that's in BPM is business activity monitoring (BAM), essentially a dashboard of process metrics, generally presented to a manager or a steward. In human workflow, what is the equivalent of BAM -- being able to view in real time the running status of a given activity or process?
Gardner: There are also incentives, how you compensate people, reward them, and steer them to behaviors, right?
Morgenthal: On the dashboard, it's like Remedy, when you have operations and trouble tickets, and how quickly those trouble tickets are being responded to. It doesn't work. I'll tell you a funny example, which everyone out there is going to get a kick out of. At Sears, when you pick up stuff after buying something big in the store, they have this big flat-screen monitor with a list of where you are in the process after you scan your receipt. It shows you how long you're waiting.
What happens is the guy has learned how to game the system. He comes out, collects your ticket, and you're still sitting there for 30 minutes, but the clock has stopped on the screen. All the while, behind you, is the sign that says, "We have a 99.9 percent response rate. You never wait more than two minutes." Of course not. That guy took my ticket at 1 minute and 53 seconds and let me sit there for 30 minutes until my product came out.
Gardner: I think we're looking for the best of both worlds. We want the best of what systems automation, documentation, and repeatable processes can do, but we also need the exception management that only a person can do, and we've all experienced how this can work or not work, particularly in a help desk situation.
Maybe you've had the experience where you call up a help desk and the person says, "Well, I'd like to help you with that, but my process doesn't allow for it," or "We have no response for that particular situation, so I will have to go back to my supervisor," versus someone who says, "I've got a good process, but I can also work around that process for an exceptional case," and then perhaps bake that back into the process. Back to Mike Rowley.
CEP is core
Kobielus: Actually, Dana, I haven't finished my response. I just want to tie it to CEP. Event processing quite often is a core component of BAM. BAM is basically the dashboard that aggregates events relevant to a given business process. In a human workflow, what is the equivalent of CEP and BAM? To some degree, it's social networks like Facebook, LinkedIn, or whatever, in the sense that I participate as a human being in a process that involves other human beings, who form a community -- my work group or just the workflow in which I'm involved.
How do I get a quick roll-up of the status of this process or project in which I am just one participant? Well, the whole notion of a social network is that I can go there right away and determine what everybody is doing or what everybody else's status is in this overall process. Shouldn't that social network be fed by real-time events, so I can know up to the second what Jean is doing, what Joe is doing, what Bob is doing, within the context of this overall workflow in which I am also involved?
So, CEP and BAM relate to social networks, and that's the way that human beings can orient themselves inside these workflows and can coordinate and enable that lateral, side-to-side, real-time connection among human beings that's absolutely essential to getting stuff done in the real world. Then, you don't have to rely simply on the clunky, asynchronous, back-and-forth message passing that we typically associate with workflows.
Gardner: Mike Rowley, we have a new variable in this, which is the social networking and the ability for people to come up with efficient means for finding a consensus or determining a need or want that hadn't been easily understood before. Is there a way of leveraging what we do within these social networks in a business process environment?
Rowley: Yes. Tying event processing to social networks makes sense, because what you need to have when you're in a social network is visibility, visibility into what's going on in the business and what's going on with other people. BPM is all about providing visibility.
I have a slight quibble in that I would say that some of CEP is really oriented around automatic reaction to some sort of an event condition, rather than a human reaction. If humans are involved in discovering something, looking something up, or watching something, I think of it more as either monitoring or reporting, but that's just terminology. Either way, events and visibility are really critical.
Gardner: We can certainly go into the whole kumbaya aspect of how this could all be wonderful and help solve the world's ills, but there is the interoperability issue that we need to come back to. As you were mentioning, there are a lot of vendors involved. There is a tendency for vendors to try to take as much of a role as they can with their platforms and tools. But, in order for the larger values that we're discussing to take place, we need to have a higher level of interoperability.
Realistically, Mike, from your perspective in working through OASIS, how well do the vendors recognize the need to give a little ground in order to get a higher value and economic and productivity payback?
Rowley: There seems to be a real priority given to getting this thing done and getting it to be effective. The technologists involved in this effort understand that if we do this well, everybody will benefit. The whole market will grow tremendously, because people will see that this is an industry-wide technology, not a proprietary one.
Active Endpoints is really at the forefront of having an implementation of BPEL4People in the user's hands, and so we're able to come to the table with very specific feedback on the specs, saying, "We need to make these changes to the coordination protocols," or "We may need to make these changes to the API," because it doesn't work for this, that, or the other reason. What we haven't seen is people pushing back in ways that would imply they just want to do things their own way.
Gardner: With all due respect, I know Active Endpoints is aggressive in this, but a company of your size isn't too likely to sway an entire industry quite yet. What about partnerships? People aren't pushing back, but how many people are putting wind in your sails as well?
Wholehearted adoption
Rowley: That's exactly what they're doing. They're basically adopting it wholeheartedly. We have had, I would say, a disproportionate impact on these specs, primarily because the people involved in them see the technical arguments as being valid. Technical arguments that come from experience tend to be the best ones, and people jump on.
Gardner: How about the professional services firms, systems integrators, and people like the McKinseys, who are organizational-management focused? Wouldn't this make a great deal of sense for them? If you have a good strategic view as a vendor, you say, "Yes, we'll grow the pie. We'll all benefit." But there is another whole class of consultants, professional services firms, and integrators that must clearly see the benefit of this without any need to maintain a position on a product or technology set.
Rowley: Through the standards effort, we haven't seen very much involvement by systems integrators. We have seen integrators that really appreciate the value of having a standard, knowing that if they invest in learning the technology and developing a framework, they're not stuck.
Integrators often will have their own framework that they take from one to the other. If they build it on top of BPEL4People and WS-Human Task, they really get substantial investment protection, so that they don't have to be stuck, no matter what vendor they're picking. Right now, in our case, they pick Active Endpoints, because we have the earliest version.
Gardner: The question, JP, is that we've been hearing how the role of systems integrators and consultants is important in evangelizing and implementing these processes and helping with interoperability across the business, the human, and the systems levels. Do you see yourself as an evangelist, and why wouldn't other consultants also jump on the bandwagon?
Morgenthal: Well, I do take on that role of getting out there to help advance the industry. I think a lot of systems integrators, though, are stuck with having to deal with day-to-day issues for clients. Their role is not so much to help drive new things as it is to respond to client needs and heavily utilize the model.
Gardner: You've hit on something. Whose role is it? As Jim was saying, BAM makes sense at some level, but whose role is it to come in and orchestrate and manage efficiency and processes across these boundaries?
Morgenthal: Within the organization?
Gardner: Yes.
Morgenthal: It's the management, the internal management. It's their job to own these processes.
Gardner: So it's the operating officer?
Morgenthal: The COO should drive this stuff. I haven't yet seen a COO who takes these things by the hand and actually drives them through.
Gardner: Mike Rowley, who do you sell your Active Endpoints orchestration tools to?
Rowley: Primarily to end users, to enterprises, but we also sell to system integrators sometimes.
Gardner: But who inside of those organizations tends to be the inception point?
Rowley: Department level people who want to get work done. They want to develop an app or series of apps that help their users be productive.
Kobielus: It hasn't changed. I've written two books on workflow over the past 12 years, and workflow solutions are always deployed for tactical needs. The notion that companies are really itching to establish a general-purpose workflow orchestration infrastructure as a core of their SOA, so that they can then leverage out and extend for each new application that comes along isn't how it works in the real world. I think Mike has laid it out there.
As far as the notion that companies are looking to federate their existing investments -- whether Oracle, IBM, SAP, or other vendors' workflow environments -- by wrapping them all in a common SOA standards framework and making them interoperable, I don't see any real push in the corporate world to do that.
Morgenthal: One thing I really like about SOA is that it really should be the case that, if you've got an overarching SOA mandate in the enterprise, it should enable lower-level, department-level freedom, as long as you fit in by providing and consuming services.
BPM doesn't have to be an enterprise-wide decision, because that just gets clogged; too many decision makers have to sign off. If you get something like BPEL4People, it's oriented around not just workflow in the style of the older workflow systems, but workflow in a way that fits into an SOA, so that you can fit into that larger initiative without having to get overall approval.
Gardner: We're going to have to leave it there. We are about out of time. We've been discussing the issue of BPEL4People and better workflow productivity, trying to join systems and advances in automation with what works in the field, and somehow coordinating the two on a lifecycle adoption pattern. I'd like to thank our guests. We've been discussing this with Mike Rowley, director of technology and strategy at Active Endpoints. I appreciate your input, Mike.
Rowley: Thank you.
Gardner: We have also been joined by Jim Kobielus, senior analyst at Forrester Research; thank you Jim.
Kobielus: Yeah, thanks Dana, always a pleasure.
Gardner: Lastly, JP Morgenthal, independent analyst and IT consultant. You can be reached at www.jpmorgenthal.com. Is that the right address, JP?
Morgenthal: That's the right address, thank you, Dana.
Gardner: I'm Dana Gardner, principal analyst at Interarbor Solutions. I would like to thank our sponsors for today's podcast, Active Endpoints, maker of the ActiveVOS, Visual Orchestration System, as well as the support of TIBCO Software.
Listen to the podcast. Download the podcast. Find it on iTunes and Podcast.com. Learn more. Charter Sponsor: Active Endpoints. Additional underwriting by TIBCO Software.
Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.
Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 37 on aligning human interaction with business process management. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.
Dana Gardner: Hello, and welcome to the latest BriefingsDirect Analyst Insights Edition, Volume 37.
This periodic discussion and dissection of IT infrastructure related news and events, with a panel of industry analysts and guests, comes to you with the help of our charter sponsor, Active Endpoints, maker of the ActiveVOS, visual orchestration system, as well as with the support of TIBCO Software.
I'm your host and moderator, Dana Gardner, principal analyst at Interarbor Solutions. Our topic this week, the week of Feb. 9, 2009, returns to the essential topic of bringing human activity into alignment with IT supported business processes.
The need to automate and extend complex processes is obvious. What's less obvious, however, is the need to join the physical world of people, their habits, needs, and perceptions with the artificial world of service-oriented architecture (SOA) and business process management (BPM). This will become all the more important, as cloud-based services become more common.
We're going to revisit the topic of BPEL4People, an OASIS specification that we discussed when it first arrived, probably a year-and-a-half ago. We'll also see how it's progressing with someone who has been working with the specification at OASIS since its beginning.
I'd like to welcome our guest this week, Michael Rowley, director of technology and strategy at Active Endpoints. Welcome, Mike.
Michael Rowley: Thank you.
Gardner: I'd also like to introduce our IT analyst guests this week. Our panel consists of regular Jim Kobielus, senior analyst at Forrester Research. Welcome back, Jim.
Jim Kobielus: Thanks, Dana. Hi, everybody.
Gardner: And someone who is beginning to become a regular, JP Morgenthal, independent analyst and IT consultant. Welcome back, JP.
JP Morgenthal: Thanks, Dana. Hi, everyone.
Gardner: Let's go to you first, Mike, as our guest. I've pointed out that Active Endpoints is the sponsor of the show, so I guess we will try to be nice to you, but I can't guarantee it. Tell us a little bit about your background. You were at BEA for some time. You've been involved with Service Component Architecture (SCA) and a few other open standards around OASIS. Give us the bio.
Rowley: I was at BEA for five years. I was involved in a couple of their BPM-related efforts. I led up the BPELJ spec effort there as part of the WebLogic integration team. I was working in the office of the CTO for a while, working on BPEL-related efforts. I also worked on the Business Process Modeling Notation (BPMN) 2.0 efforts while I was there.
I worked a little bit with the ALBPM team as well, and a variety of BPM-related work. Then, I've been at Active Endpoints for a little over half a year now. While here, I am working on BPEL4People standards, as well as on the product itself, and on some BPMN related stuff as well.
Gardner: Let's just jump into BPEL4People. Where do we stand, and is this getting traction with people? Not to be a punster, but do people grok BPEL and BPEL4People?
Good feedback
Rowley: We've had some very good feedback from our users on BPEL4People. People really like the idea of a standard in this area and, in particular, the big insight behind BPEL4People, which is that there's a separate standard, WS-Human Task. It basically keeps track of the worklist aspect of a business process, versus the control flow that you get on the BPEL4People side. So, there's BPEL4People as one standard and WS-Human Task as another, closely related standard.
By having this dichotomy you can have your worklist system completely standards based, but not necessarily tied to your workflow system or BPM engine. We've had customers actually use that. We've had at least one customer that's decided to implement their own human task worklist system, rather than using the one that comes out of the box, and know that what they have created is standards compliant.
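The worklist/engine split Rowley describes can be illustrated with a small interface: the process engine codes against the worklist contract, so an out-of-the-box worklist and a customer-built one are interchangeable. The interface below is a hypothetical stand-in for the actual WS-Human Task contract, invented for illustration:

```python
from abc import ABC, abstractmethod

class Worklist(ABC):
    """Hypothetical stand-in for a WS-Human Task-style worklist contract."""
    @abstractmethod
    def create_task(self, task_id: str, assignee: str, summary: str): ...
    @abstractmethod
    def complete_task(self, task_id: str, outcome: str): ...

class InMemoryWorklist(Worklist):
    """An 'out of the box' worklist; a customer could swap in their own."""
    def __init__(self):
        self.tasks = {}
    def create_task(self, task_id, assignee, summary):
        self.tasks[task_id] = {"assignee": assignee, "summary": summary, "outcome": None}
    def complete_task(self, task_id, outcome):
        self.tasks[task_id]["outcome"] = outcome

def run_approval_step(worklist: Worklist):
    # The engine only ever sees the Worklist interface, never the implementation.
    worklist.create_task("t-1", "manager", "Approve invoice 42")
    worklist.complete_task("t-1", "approved")

wl = InMemoryWorklist()
run_approval_step(wl)
assert wl.tasks["t-1"]["outcome"] == "approved"
```

Because the engine depends only on the contract, a customer who implements their own worklist, as Rowley's customer did, stays standards-compliant without touching the BPM engine.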
This is something that we're seeing more and more. Our users like it, and as far as the industry as a whole, the big vendors all seem to be very interested in this. We just recently had a face-to-face and we continue to get really good turnout, not just at these meetings, but there's also substantial effort between meetings. All of the companies involved -- Oracle, IBM, SAP, Microsoft, and TIBCO, as well as Active Endpoints -- seem to be very interested in this. One interesting one is Microsoft. They are also putting in some special effort here.
Gardner: I want to ask you a question, but at two levels. What is the problem that we're trying to solve here? Let's ask that first at the business level and then at the technical level.
Rowley: At the business level, it's pretty straightforward. It's essentially the promise of workflow systems, in which you can automate the way people work with their computers and interact with other people by pulling tasks off of a worklist and then having a central system, the BPM engine, keep track of who should do the next thing, look at the results of what they have done, and based on the data, send things for approval.
It basically captures the business process, the actual functioning of a business, in software in a way that you can change over time. It's flexible, but you can also track things, and that kind of thing is basic.
Gardner: Before you go to the technical issues, one of the things that's really interesting to me on this is that I understand the one-way street of needing to take processes, making that understood, and then finding out who the people are who can implement it. But, is this a two-way street?
Is it possible for the people who are involved with processes in the line of business, in the field, to then say, "Listen, this doesn't quite work. Sometimes you can't plan things in advance. We have some insight as to what we think the process should be, how to improve it, and how can we then relate that back into what the SOA architecture is delivering?" Are we on a two-way street on this?
Rowley: Absolutely. One value of a BPM engine is that you should be able to have a software system, where the overall control flow, what's happening, how the business is being run can be at the very least read by a nontechnical user. They can see that and say, "You know, we're going through too many steps here. We really can skip this step. When the amount of money being dealt with is less than $500, we should take this shortcut."
That's something that at least can be described by a layperson, and it should be conveyed with very little effort to a technical person who will get it or who will make the change to get it so that the shortcut happens. I'm leery about the end user, the nontechnical person, going in and mucking with fundamental control flow, without at least collaborating with somebody who can think about it from more of an IT angle.
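The "$500 shortcut" Rowley mentions is the kind of rule that stays readable to a layperson when the control flow is kept as plain data. A rough sketch, with invented step names and threshold:

```python
SHORTCUT_LIMIT = 500.0  # threshold a business reviewer proposed; illustrative

# The control flow as data a nontechnical reader can at least read.
FULL_PATH = ["enter-request", "manager-approval", "finance-approval", "pay"]
SHORT_PATH = ["enter-request", "pay"]  # the proposed low-value shortcut

def route(amount: float):
    """Pick the path for a request; simple enough to discuss with the business."""
    return SHORT_PATH if amount < SHORTCUT_LIMIT else FULL_PATH

assert route(120.0) == SHORT_PATH    # small request skips the approvals
assert route(5000.0) == FULL_PATH    # large request takes the full chain
```

The layperson proposes the shortcut; someone with the IT angle makes the one-line change, which is the collaboration Rowley is describing.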
Gardner: No. Clearly, we want to have a lifecycle between design, requirements and refinements, but not just throw the keys to the locker room out of the window. What is it technically that we need to overcome in order to solve those problems?
Need for standards
Rowley: I'm going to take this from a standards aspect, because one of the hardest questions is what you standardize and how you divvy up the standards. One thing that has slowed down this whole vision of automating business process is the adoption of standards.
Let's say a business school wants to describe how to do management and how to run your organization. Right now, I don't believe any of them have, as part of the coursework for getting an MBA, something that says, "Here's how you deal with the BPM engine to design and control your organizations."
The reason it isn't at that level of adoption yet is because the standards are new and just being developed. People have to be quite comfortable that, if they're going to invest in a technology that's running their organization, this is not just some proprietary technology.
Gardner: We're at that chicken and egg stage, aren't we, before we can get this really deeply adopted?
Rowley: Yes. I think we're spinning up. We're starting to get the kind of momentum that's necessary, with all the vendors getting on board. Oftentimes, with things like this, if the vendors can all get on the same bandwagon at the same time, the users get it. They see that, "Okay, now this is real. This is not just a standard that is a de jure standard, but it's actually a de facto standard as well."
Gardner: Let's go to Jim Kobielus. Jim, how important is this, and how might this chicken-and-egg conundrum get jump-started?
Kobielus: It's extremely important. One thing that users are challenged with all the time in business is the fact that they are participating in so many workflows, so many business processes. They have to multi-task, and they have to have multiple worklists and to-do lists that they are checking all the time. It's just a bear to keep up with.
It's a real drag on productivity, when you've got tasks coming from all angles at you and you're floundering, trying to find a way to manage them in a systematic way, to roll them up into a single worklist.
BPEL4People, by providing an interoperability framework for worklisting capabilities of human workflow systems, offers the promise of allowing organizations to help users have a single view of all of their tasks and all the workflows in which they are participating. That will be a huge productivity gain for the average information worker, if that ever comes to pass.
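The single-view promise Kobielus describes amounts to merging task feeds from multiple back-end engines into one worklist. A hedged sketch, with invented feeds and task fields; in practice each feed would be a WS-Human Task-compliant service:

```python
from datetime import date

# Invented stand-ins for two back-end workflow systems' task feeds.
def hr_system_tasks():
    return [{"source": "hr", "title": "Sign policy", "due": date(2009, 3, 20)}]

def finance_system_tasks():
    return [{"source": "finance", "title": "Approve expense", "due": date(2009, 3, 12)}]

def unified_worklist(*feeds):
    """One list of everything I owe, regardless of the engine behind it."""
    tasks = [t for feed in feeds for t in feed()]
    return sorted(tasks, key=lambda t: t["due"])

todo = unified_worklist(hr_system_tasks, finance_system_tasks)
assert [t["source"] for t in todo] == ["finance", "hr"]  # sorted by due date
```

The interoperability standard is what makes each feed look the same to the aggregator; without it, every back end needs its own adapter.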
That's why I agree with Mike that it's critically important that the leading BPM and workflow vendors get on board with this standard. In many ways, I see BPEL4People as having a similar aim to business intelligence in general. Where business intelligence environments are geared toward providing a single view of all business metrics, BPEL4People is trying to provide a single view of all the business processes that you either participate in or manage.
Process steward
A term that I have batted around -- I don't think it's really gained any currency -- is the notion of a process steward: somebody whose job it is to define, monitor, track, and optimize business processes to achieve greater productivity and agility for the business.
What Mike was getting at, which was really interesting, is that you want a human workflow environment that not only wraps up all of your tasks in a single worklist, regardless of the back-end execution engine. You also want the ability for not only the end user, but especially the process steward, to begin to do what-if analysis in terms of reengineering. They may have jurisdiction over several processes and have a single dashboard, as it were, looking at the current state and the dependencies of the various workflows they are responsible for.
This is critically important for SOA, where human workflows are at the very core of many SOA applications.
Gardner: JP, do you agree with me on this two-way street, where the users, the people who are actually doing the work, feel like they are empowered at some level to contribute back into refinement? It seems to me that otherwise workers tend to say, "Okay, I can't have any say in this process. I don't agree with it. Basically, I do an end run around it. I'm going to find ways to do my work that suits me and my productivity." Then, that value and intelligence is lost and doesn't ever make it back into the automated workflow. How important from your perspective is this two-way street capability?
Morgenthal: I'm going to answer that, but I'd like to take a step back, if I could, to answer the business problem. Interestingly enough, I've been working on and researching this particular problem for the past few months. One interesting aspect from the business side is that this has been looked at for quite a while by the business, but hasn't fully been identified and ferreted out as a niche.
One key term that has been applied here I found only in the government, although the problem is industry-wide. They call this "suspense tracking." That's a way of saying that something leaves the process and goes into "ad hoc land." We don't know what happens in there, but we control when it leaves and we control when it comes back.
I've actually extended this concept quite a bit and I am working on getting some papers and reports written around something I am terming "business activity coordination," which is a way to control what's in the black hole.
That's what you're talking about -- controlling what's happening in that black hole. It ties into the fact that humans interact with humans, humans interact with machines, and data is changing everywhere. How do we keep everything on track, how do we keep everything coordinated, when you have a whole bunch of ad-hoc processes hitting this standardized process? That requires some unique features. It requires the ability to aggregate different content types together into a single place.
An example that was mentioned earlier, where you have this thing that happens and somebody does something and then something else. The next step is going to analyze what that step does. The chances are that's related to some sort of content, probably semi-structured or maybe even unstructured content, something like a negotiation over what date something will occur. It's often human based, but when that date locks, something else will trigger, maybe the release of a document, or an invoice, or something out of an automated system.
So, you have these ongoing ad hoc processes that occur in business every day and are difficult to automate. I've been analyzing solutions to this, and business activity coordination is that overlap, the Venn diagram, if you will, of process-centric and collaborative actions. For a human to contribute back, and for a machine to recognize that the dataset has changed, move forward, and take the appropriate actions from a process-centric standpoint after a collaborative activity has taken place, is possible today, but it's very difficult. I don't necessarily agree with the statement earlier that we need to have tight control of this. A lot of it can be managed by the users themselves, using common tools.
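Morgenthal's "business activity coordination" idea -- a formal step suspended while collaborative, ad hoc work changes shared data, then resumed once that data settles -- can be sketched in a few lines. This is only an illustration of the pattern; the class and field names are hypothetical and not taken from any product:

```python
# Minimal sketch of "business activity coordination": a process step is
# suspended while an ad hoc, collaborative activity runs, and resumes
# automatically once the shared data it depends on settles.

class SuspenseItem:
    """Tracks work that has left the formal process for 'ad hoc land'."""
    def __init__(self, name, resume_when):
        self.name = name
        self.resume_when = resume_when  # predicate over the shared dataset
        self.resumed = False

class Coordinator:
    def __init__(self):
        self.dataset = {}    # shared, collaboratively edited data
        self.suspended = []  # items waiting inside the "black hole"
        self.log = []

    def suspend(self, name, resume_when):
        self.suspended.append(SuspenseItem(name, resume_when))

    def update(self, key, value):
        """A human (or machine) changes the shared data; re-check triggers."""
        self.dataset[key] = value
        for item in self.suspended:
            if not item.resumed and item.resume_when(self.dataset):
                item.resumed = True
                self.log.append(f"resume: {item.name}")

coord = Coordinator()
# The invoice step waits until the negotiated date is locked by the humans.
coord.suspend("release-invoice", lambda d: d.get("delivery_date_locked"))
coord.update("delivery_date", "2009-04-01")  # negotiation still in progress
coord.update("delivery_date_locked", True)   # date locks -> step fires
print(coord.log)  # ['resume: release-invoice']
```

The point of the sketch is that the formal process only defines the entry and exit of the black hole; the predicate over shared data is what lets the machine notice when the collaborative activity is done.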
Solid foundation
One thing I'm looking at is how SharePoint, more specifically Windows SharePoint Services, acts as a solid foundation that allows humans and machines to interact nicely. It comes with a core portal that allows humans to visualize and change the data, but the behavioral connections to actually notify workflows that it's time to go to the next step, based on those human activities, are really critical functions. I don't see them widely available through today's workflow and BPM tools. In fact, those tools fall short, because of their inability to recognize these datasets.
They'll eventually get there. What you see today with regard to workflow and these BPM and workflow management tools is really around enterprise content management. "Jim approved this, so now Sally can go buy her ticket." Well, whoopie do. I could have done that with Ruby code in about ten minutes.
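Morgenthal's "ten minutes of Ruby" point -- that a simple approval gate is trivial to hand-code -- holds in any language. A purely illustrative sketch (shown in Python here, with made-up names):

```python
# The trivial approval chain being dismissed above: one approval
# event unlocks one downstream action. No workflow engine required.
approvals = set()

def approve(approver, request):
    approvals.add((approver, request))

def can_proceed(request, required_approver):
    return (required_approver, request) in approvals

approve("Jim", "travel-2041")
print(can_proceed("travel-2041", "Jim"))  # True -> Sally can buy her ticket
```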
Gardner: It tends to follow a document trail rather than a process trail, right?
Morgenthal: Exactly. So, from a standards perspective, isn't BPEL4People just a standard route to suspense tracking? All I'm controlling is going into the black hole and coming out of the black hole. Neither WS-Human Task nor BPEL4People addresses how I control what's happening inside the black hole.
Rowley: Actually, it does. WS-Human Task does talk about how you control what's in the black hole -- what happens to a task, and what kinds of things can happen to a task, while it's being handled by a user? One of the things about Microsoft's involvement in the standards committee is that they have been sharing a lot with us about SharePoint, and we have been discussing it. This is all public. The nice thing about OASIS is that everything we do is in public, along with the meeting notes.
The Microsoft people are giving us demonstrations of SharePoint, and we can envision as an industry, as a bunch of vendors, a possibility of interoperability with a BPEL4People business process engine like the ActiveVOS server. Maybe somebody doesn't want to use our worklist system and wants to use SharePoint, and some future version of SharePoint will have an implementation of WS-Human Task, or possibly somebody else will do an implementation of WS-Human Task.
Until you get the standard, that vision that JP mentioned about having somebody use SharePoint and having some BPM engine be able to coordinate it, isn't possible. We need these standards to accomplish that.
Gardner: Mike, doesn't governance come into play in this as well? If we want to reach that proper balance between allowing the ad hoc and the worker-level inputs into the system, and controlling risk, security, compliance, and runaway complexity, aren't policies and governance engines designed to try to produce that balance and maintain it?
Morgenthal: Before he answers, Dana, I have one clarification on your question. "Ad hoc" is going to occur, whether you allow it to occur or not. You've got the right question: How can the business attain that governance?
Gardner: Okay.
Rowley: There is governance over a number of things. There's governance that's essentially authorization for individual operations or tasks: who can change what documents once they've been signed? Who can sign? Who can modify what? That's at the level of an individual task.
Then there's also who can make a formal change to the process, as opposed to ad hoc changes, where people go in and collaborate out of band, whether you tell them they can or not. But, in the formal process, who is allowed to do that? One nice thing about a BPM is that you have the ability to make authorization decisions over these various aspects of the business process.
Gardner: This strikes me as hugely important, particularly now in our economy. This is really the nub where productivity ends up getting hamstrung or caught up. If we're looking for transformation-level benefits and want to bring business requirements and outcomes into alignment with IT, this is the real issue, and it happens at so many different levels.
I can even see this progressing now towards complex event processing (CEP), where we want to start doing that level of high-scale and high-volume complex events across domains and organizational boundaries. But, again, we're going to bring people into that as well and reflect it both ways. Jim Kobielus, do you agree that this is hugely important and yet probably doesn't get a lot of attention?
Kobielus: The CEP angle?
Need for interactivity
Gardner: No, the overall issue. If we can get transformational benefits and productivity gains that help make the business and financial case for investing in things like SOA and CEP, then the interactivity between the tactile, human side and the automated, systems side needs to develop further.
Kobielus: That's a big question. Let me break it down into its components. First, with CEP we're talking about real time. In many ways, it's often regarded as a subset of real-time business intelligence, where you have the consolidation, filtering, and aggregation of events from various sources being fed into a dashboard or into applications in which rules are triggered in real time and stuff happens.
In a broader sense, if you look at what's going on in a workflow environment, it's simply a collection of events, both those events that involve human decision makers and those events that involve automated decision agents and what not.
Looking at the fact that BPEL and BPEL4People are now two OASIS standards that have roughly equal standing is important. It reflects the fact that in an SOA, underlying all the interactions, all the different integration approaches, you have this big bus of events that are happening and firing all over the board. It's important to have a common orchestration and workflow framework within which both the actions of human beings and the actions of other decision agents can be coordinated and tracked in some unified way.
In terms of driving home the SOA value proposition, I'm not so sure that event-driven architecture is essential to most SOA projects, Dana, so it's not clear to me that there is really a strong CEP component here. Fundamentally, when we're talking about workflows, we're talking about longer time lags and asynchronous interactions. So, the events angle on it is sort of secondary.
Gardner: Let me take that back to Mike Rowley. I'm looking for a unified theory here that ties together some of what we have been talking about at the people process level with some of this other, larger event bus as Jim described at that more automated level. Are they related, or are they too abstract from one another?
Rowley: No, they're related. It's funny. I bought into everything that Jim was just saying, except for the very end, where he said that it's not really relevant. A workflow system or a business process is essentially an event-based system. CEP is real-time business intelligence. You put those two together and you discover that the events that are in your business process are inherently valuable events.
You need to be able to look across a wide variety of business processes, documents, or sources, and compute averages, aggregations, sums, and joins over these various things to discover a situation where you need to automatically kick off new work. New work is a task or a business process.
What you don't want to have is for somebody to have to go in and monitor or discover by hand that something needs to be reacted to. If you have something like what we have with ActiveVOS, which is a CEP engine embedded with your BPM, then the events that are naturally business relevant, that are in your BPM, can be fed into your CEP, and then you can have intelligent reaction to everyday business.
Eventing infrastructure
Kobielus: Exactly. The alerts and notifications are inherent in pretty much any workflow environment. You're quite right. That's an eventing infrastructure, and that's an essential component. I agree with you. I think the worklist can be conceptualized as an event dashboard with events relevant to one decision agent.
Rowley: It's more than just alerts and notifications. Any BPM can look for some threshold and give somebody a notice if it has been exceeded. This is about doing things like joining over event streams or aggregating over event streams -- the sorts of things that general-purpose CEP capabilities are important for.
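Rowley's distinction -- a one-shot threshold alert versus true aggregation over an event stream -- can be sketched as a sliding-window mean that kicks off new work only when the aggregate crosses a limit. The names and numbers are illustrative, not ActiveVOS APIs:

```python
# Sketch of aggregating over an event stream, beyond a one-shot
# threshold: a sliding-window average over order amounts that kicks
# off a review task when the window's mean exceeds a limit.
from collections import deque

class WindowAggregator:
    def __init__(self, size, mean_limit, on_breach):
        self.window = deque(maxlen=size)  # last `size` events only
        self.mean_limit = mean_limit
        self.on_breach = on_breach        # callback that starts new work

    def push(self, value):
        self.window.append(value)
        if len(self.window) == self.window.maxlen:
            mean = sum(self.window) / len(self.window)
            if mean > self.mean_limit:
                self.on_breach(mean)

started = []
agg = WindowAggregator(size=3, mean_limit=100.0,
                       on_breach=lambda m: started.append(f"review:{m:.0f}"))
for amount in [40, 60, 90, 150, 200]:  # the incoming event stream
    agg.push(amount)
print(started)  # only the final window's mean breaches the limit
```

A plain threshold would have fired on the single 150 event; the windowed aggregate waits until the recent events, taken together, look abnormal.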
Gardner: JP, do you agree that we have some commonality here between CEP and its goals and value, and what we are talking about more at the human tactile workflow level?
Morgenthal: From my experience, what I've been looking at with regard to this is what I'm calling "business activity coordination." There is important data to be gleaned after the fact about how certain processes are running in organizations. When companies talk about waste and reengineering processes, a lot of what they don't understand about processes -- the reason why they never end up changing -- is that these ad hoc areas are not well understood.
Some aspects of CEP could be helpful, if you could tag this stuff going on in that black hole in such a way that you could peer into the black hole. The issue with not being able to see in the black hole is not technical, though. It's human.
Most often, these things are distributed tasks. It's not like a process that's happening inside of accounting, where Sally walks over to Joe and hands him a particular invoice, and says, "Oh look, we could have just made that electronic." It's something leaving this division and going into that division, or it's going from this department to that department to that department. There is no stakeholder to own that process across all those departments, and data gets lost.
You're not going to find that with CEP, because there are no automation tags at each one of those milestones. It could be useful for a postmortem and for reengineering after the fact, but somebody has got to take hold of the fact that there is stuff happening in the black hole, and automating what's in the black hole has to get started.
Kobielus: I've got a slightly better and terser answer than the one I gave a moment ago. A concept that's in BPM is business activity monitoring (BAM), essentially a dashboard of process metrics, generally presented to a manager or a steward. In human workflow, what is the equivalent of BAM -- being able to view in real time the running status of a given activity or process?
Gardner: There are also incentives, how you compensate people, reward them, and steer them to behaviors, right?
Morgenthal: On the dashboard, it's like Remedy, when you have operations and trouble tickets, and you track how quickly those trouble tickets are being responded to. It doesn't work. I'll tell you a funny example, which everyone out there is going to get a kick out of. At Sears, when you pick up stuff after buying something big in the store, they have this big flat-screen monitor with a list of where you are in the process after you scan your receipt. It shows you how long you've been waiting.
What happens is that the guy has learned how to game the system. He comes out, collects your ticket, and you're still sitting there for 30 minutes, but the clock has stopped on the screen. All of a sudden, behind you, is the sign that says, "We have a 99.9 percent response rate. You never wait more than two minutes." Of course not. That guy took my ticket at 1 minute and 53 seconds and let me sit there for 30 minutes until my product came out.
Gardner: I think we're looking for the best of both worlds. We want the best of what systems automation, documentation, and repeatable processes can do, but we also need the exception management that only a person can do. We have all experienced how this can work or not work, particularly in a help-desk situation.
Maybe you've had the experience where you call up a help desk and the person says, "Well, I'd like to help you with that, but my process doesn't allow for it," or "We have no response for that particular situation, so I will have to go back to my supervisor," versus someone who says, "I've got a good process, but I can also work within that process at an exceptional level," and then perhaps bakes that back into the process. Back to Mike Rowley.
CEP is core
Kobielus: Actually, Dana, I hadn't finished my response. I just want to tie it to CEP. CEP, event processing, is quite often a core component of BAM. BAM is basically the dashboard that aggregates events relevant to a given business process. In a human workflow, what is the equivalent of CEP and BAM? To some degree, it's social networks like Facebook, LinkedIn, or whatever, in the sense that I participate as a human being in a process that involves other human beings, who form a community -- my work group, or just the workflow in which I'm involved.
How do I get a quick roll-up of the status of this process, project, or matter in which I am just one participant? The whole notion of a social network is that I can go there right away and determine what everybody is doing or where everybody else stands in the overall process. Shouldn't that social network be fed by real-time events, so I can know up to the second what Jean is doing, what Joe is doing, and what Bob is doing within the context of the overall workflow in which I am also involved?
So, CEP and BAM relate to social networks, and that's the way that human beings can orient themselves inside these workflows and can coordinate and enable that lateral, side-to-side, real-time connection among human beings that's absolutely essential to getting stuff done in the real world. Then, you don't have to rely simply on the clunky, asynchronous, back-and-forth message passing that we typically associate with workflows.
Gardner: Mike Rowley, we have a new variable in this, which is the social networking and the ability for people to come up with efficient means for finding a consensus or determining a need or want that hadn't been easily understood before. Is there a way of leveraging what we do within these social networks in a business process environment?
Rowley: Yes. Tying event processing to social networks makes sense, because what you need to have when you're in a social network is visibility, visibility into what's going on in the business and what's going on with other people. BPM is all about providing visibility.
I have a slight quibble in that I would say that some of CEP is really oriented around automatic reaction to some sort of event condition, rather than a human reaction. If humans are involved in discovering something, looking something up, or watching something, I think of it more as monitoring or reporting, but that's just terminology. Either way, events and visibility are really critical.
Gardner: We can certainly go into the whole kumbaya aspect of how this could all be wonderful and help solve the world's ills, but there is the interoperability issue that we need to come back to. As you were mentioning, there are a lot of vendors involved. There is a tendency for businesses to try to take as much of a role as they can with their platforms and tools. But, in order for the larger values that we are discussing to take place, we need to have the higher level of interoperability.
Realistically, Mike, from your perspective in working through OASIS, how well do the vendors recognize the need to give a little ground in order to get a higher value and economic and productivity payback?
Rowley: There seems to be a real priority given to getting this thing done and to getting it to be effective. The technologists involved in this effort understand that if we do this well, everybody will benefit. The whole market will grow tremendously, because people will see that this is an industry-wide technology, not a proprietary one.
Active Endpoints is really at the forefront of having an implementation of BPEL4People in the user's hands, and so we're able to come to the table with very specific feedback on the specs, saying, "We need to make these changes to the coordination protocols," or "We may need to make these changes to the API," because it doesn't work for this, that, or the other reason. What we haven't seen is people pushing back in ways that would imply they just want to do things their own way.
Gardner: With all due respect, I know Active Endpoints is aggressive in this, but a company of your size isn't too likely to sway an entire industry quite yet. What about partnerships? People aren't pushing back, but how many people are putting wind in your sails as well?
Wholehearted adoption
Rowley: That's exactly what they're doing. They're basically adopting it wholeheartedly. We have had, I would say, a disproportionate impact on these specs, primarily because the people involved in them see the technical arguments as being valid. Technical arguments that come from experience tend to be the best ones, and people jump on.
Gardner: How about the professional services firms, systems integrators, and people like the McKinseys, who are organizational-management focused? Wouldn't this make a great deal of sense for them? If you have a good strategic view as a vendor, you say, "Yes, we'll grow the pie. We'll all benefit." But there is another whole class of consultant, professional services firm, and integrator that must clearly see the benefit of this without any need to maintain a position on a product or technology set.
Rowley: Through the standards effort, we haven't seen very much involvement by systems integrators. We have seen integrators that really appreciate the value of having a standard, knowing that if they invest in learning the technology and developing a framework, they're not stuck.
Integrators often will have their own framework that they take from one engagement to the next. If they build it on top of BPEL4People and WS-Human Task, they get substantial investment protection, so they're not stuck, no matter what vendor they pick. Right now, in our case, they pick Active Endpoints, because we have the earliest version.
Gardner: The question, JP, is that we've been hearing how important the role of systems integrators and consultants is in evangelizing and implementing these processes, and in helping with interoperability across the business, the human, and the systems levels. Do you see yourself as an evangelist, and why wouldn't other consultants also jump on the bandwagon?
Morgenthal: Well, I do take on that role of getting out there to help advance the industry. A lot of systems integrators, though, are stuck with having to deal with day-to-day issues for clients. Their role is not to drive new things as much as it is to respond to client needs and heavily utilize the model.
Gardner: You've hit on something. Whose role is it? As Jim was saying, BAM makes sense at some level, but whose role is it to come in and orchestrate and manage efficiency and processes across these boundaries?
Morgenthal: Within the organization?
Gardner: Yes.
Morgenthal: It's the management, the internal management. It's their job to own these processes.
Gardner: So it's the operating officer?
Morgenthal: The COO should drive this stuff. I haven't yet seen a COO who takes these things by the hand and actually drives them through.
Gardner: Mike Rowley, who do you sell your Active Endpoints orchestration tools to?
Rowley: Primarily to end users, to enterprises, but we also sell to system integrators sometimes.
Gardner: But who inside of those organizations tends to be the inception point?
Rowley: Department level people who want to get work done. They want to develop an app or series of apps that help their users be productive.
Kobielus: It hasn't changed. I've written two books on workflow over the past 12 years, and workflow solutions are always deployed for tactical needs. The notion that companies are really itching to establish a general-purpose workflow orchestration infrastructure as the core of their SOA, which they can then leverage and extend for each new application that comes along, isn't how it works in the real world. I think Mike has laid it out there.
As for the notion that companies are looking to federate their existing investments -- whether Oracle, IBM, SAP, or other workflow environments -- by wrapping them all in a common SOA standards framework and making them interoperable, I don't see any real push in the corporate world to do that.
Morgenthal: One thing I really like about SOA is that, if you have an overarching SOA mandate in the enterprise, it should enable lower-level, department-level freedom, as long as you fit in by providing and consuming services.
BPM doesn't have to be an enterprise-wide decision, because that just gets clogged -- too many decision makers have to sign off. Something like BPEL4People is oriented around not just workflow in the older workflow-system sense, but workflow in a way that fits into an SOA, so that you can fit into that larger initiative without having to get overall approval.
Gardner: We're going to have to leave it there. We are about out of time. We've been discussing the issue of BPEL4People and better workflow productivity, trying to join systems and advances in automation with what works in the field, and somehow coordinating the two on a lifecycle adoption pattern. I'd like to thank our guests. We've been discussing this with Mike Rowley, director of technology and strategy at Active Endpoints. I appreciate your input, Mike.
Rowley: Thank you.
Gardner: We have also been joined by Jim Kobielus, senior analyst at Forrester Research; thank you Jim.
Kobielus: Yeah, thanks Dana, always a pleasure.
Gardner: Lastly, JP Morgenthal, independent analyst and IT consultant. You can be reached at www.jpmorgenthal.com. Is that the right address, JP?
Morgenthal: That's the right address, thank you, Dana.
Gardner: I'm Dana Gardner, principal analyst at Interarbor Solutions. I would like to thank our sponsors for today's podcast, Active Endpoints, maker of the ActiveVOS, Visual Orchestration System, as well as the support of TIBCO Software.
Edited transcript of BriefingsDirect Analyst Insights Edition podcast, Vol. 37 on aligning human interaction with business process management. Copyright Interarbor Solutions, LLC, 2005-2009. All rights reserved.