Thanks for joining this sponsored podcast discussion that examines a new book on application lifecycle management (ALM) best practices, one that offers new methods for improving overall business services delivery. Complexity, silos of technology and culture, and a shifting landscape of application delivery options have all conspired to reduce the effectiveness of traditional application approaches in large organizations.
In the book, called The Applications Handbook: A Guide to Mastering the Modern Application Lifecycle, the authors examine the role and impact of automation and management over applications, and delve into the need to gain control over applications through a holistic lifecycle perspective.
In this podcast, the last in a series of three, we'll underscore the conclusions from the book and explain how organizations can begin now to change how they deliver and maintain applications in a fast-changing world.
In our first podcast, we focused on the role and impact of automation and management of applications, and emphasized the need to gain control over applications through a holistic lifecycle perspective.
The second discussion in our series looked at how one enterprise, Delta Air Lines, moved successfully to improve its application quality and gained the ability to deliver better business results from those applications.
Finally, here we'll explore how to assess how well you develop applications as an essential lifecycle core competency, and how to begin charting a course toward improvement. That's just in time, because the topic of ALM will be a big one at next week's HP Software Universe conference in Barcelona.
But we're here now with the book’s authors to explore their conclusions. Please join me in welcoming Mark Sarbiewski, Vice President of Marketing for HP Applications, and Brad Hipps, Senior Manager of Solution Marketing for HP Applications. Welcome to you both.
Mark Sarbiewski: Thank you.
Brad Hipps: Thank you.
Gardner: We're now at the point where organizations recognize that they need to do something differently. They have a very complex application situation, and they certainly have a fast-changing set of business requirements. The stakes are very high.
How then do companies know where they are in the app spectrum? Obviously, there’s going to be variability from company to company. Yet how do you know as an individual organization where you stand in terms of application lifecycle competencies? Let’s start with you, Mark.
ALM maturity
Sarbiewski: Companies are truly interested in understanding where they rank, what they do well, where their gaps are, and how they compare against their competition, their colleagues, or other folks in their industry, and even against best practice in other industries. So we built out a model for ALM maturity, and it’s in the book.
We wanted to take a slightly different approach to how we thought about maturity models. There are lots of them in the industry, not so much around ALM, but in sub-disciplines or in different areas. Our focus was the business outcomes that you see at different levels.
If you can understand the results that you are seeing, that ought to help you figure out where you are relative to where you could be. What we've seen is a progression across the spectrum of companies, from those just getting started, with fairly immature processes across the lifecycle of an application, all the way up to the very advanced.
One thing I would mention, before I go further, is that the life of an application is generally the same for all companies. There is a spark of an idea: "We need this. We need software to help us do something in the business."
We make an investment decision somehow. We may do this ad hoc. We may do it based on who screams the loudest. But, somehow a decision gets made. We build something somehow. We spec it, build it, release it, run it, poorly or not, and hopefully, although certainly not always, eventually we replace it, retire it, and so forth.
So, our idea is to tie maturity to outcomes, to the results that we see. For example, what’s our batting average for how many times we actually make the right kind of investment decisions? How many times do we execute against a good investment decision? How many times do we run it well and meet our SLAs in production, and so on?
We see people just getting started, with a relatively ad hoc approach, narrow point tools, and lots of manual work. It doesn’t mean they are never successful, but results vary widely. They're very mixed. Some project teams are great, it all depends on the project team, and the next one may stink.
As you move up the curve, you start to see a maturity in the functional disciplines. We see them get better at requirements management. We see them get better at testing, designing software, or handing off, releasing into production. You see the functional competence begin to evolve. That has to happen first, before you can start to tie these functions together and begin to get cross-functional excellence.
There is a huge benefit in getting good at your functions. And, there is another big jump in return on investment (ROI) of getting better at having my functions and departments work well together. At the highest level, you start to be able to execute very complex programs, with lots of projects, across lots of functions every time. We talk about a level of portfolio excellence there.
So, it all comes back to the results. What kind of results am I seeing? If you look at the model in the book, it’s pretty easy to peg yourself as to where you are and the kinds of benefits you'd see from moving up that maturity curve.
Gardner: Brad Hipps, do you have anything to offer further on knowing where you are so that you can know where you need to go?
More of a scorecard
Hipps: As Mark has said, we configured this model, trying deliberately not to be ultra-prescriptive. There are many heavy-duty models that do exist, and people can dig into those to their heart’s content. This is as much a maturity scorecard as anything.
One of the examples that you might see, or one of the ways you might begin to gauge yourself, is something like defect leakage. Defect leakage refers to the number of defects that you discover in the live application that you could have caught earlier.
We have some figures showing that, on average, in the neighborhood of 40 percent of application defects leak into production and are discovered in the live environment when they could have been caught earlier. It may be a little higher than 40 percent, which is a fairly shocking number. Obviously, that’s a rough average, so you've got to expect, if you are lower in maturity, that you may be seeing even more than that.
But on the high end, the world-class customers we've worked with see less than 5 percent of defects working their way into production. So right off the bat, you're talking about an 80 percent-plus drop in the number of defects you're experiencing in a live environment, with all the attendant cost savings, brand improvement, and goodwill in the business that you would expect.
That’s one example of the kind of thing you can look at, tease out, and use to get a sense of where you might sit, maturity-wise. From that, you can take a cue as to where you want to start and where you want to make the biggest investment as you look to become more mature.
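To make the defect-leakage figure concrete, here is a minimal sketch, not from the book, of how a team might compute the metric from its own defect records; the Defect structure and field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    defect_id: str
    found_in_production: bool  # True if the defect was discovered after go-live

def defect_leakage(defects: list[Defect]) -> float:
    """Percentage of all recorded defects that escaped into the live environment."""
    if not defects:
        return 0.0
    leaked = sum(1 for d in defects if d.found_in_production)
    return 100.0 * leaked / len(defects)

# Example: 2 of 5 recorded defects were found in production -> 40 percent leakage,
# roughly the industry average cited above; world-class teams see under 5 percent.
sample = [
    Defect("D-1", False),
    Defect("D-2", True),
    Defect("D-3", False),
    Defect("D-4", True),
    Defect("D-5", False),
]
print(f"Defect leakage: {defect_leakage(sample):.1f}%")
```

Trending this percentage over successive releases is one simple way to see whether the organization is moving up the maturity curve.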
Gardner: Brad, I suppose that while it’s important to know where you are in order to chart where you're going, it would also be nice to know how well you're doing along the way. Are there measures of success in your book that you can point to, so people can keep score of how well they're progressing and then reinforce that or move even further forward?
Hipps: I'll give a simple one. At least, I hope it’s a simple one. I can’t speak for every enterprise, but this is one that I have used in my own history, and it’s no more complex than customer satisfaction.
In this case, your customers may be end users, who are harder sometimes to survey. But, more often than not, your customers are some business units, somebody within the business.
When I was running application teams, we were undertaking initiatives to improve ourselves, which is probably a nonstop undertaking within IT. Sometimes, you go through peaks and valleys, but that became one of my key checkpoints, as you might imagine. There are hosts of sophisticated KPIs we can design for ourselves, but one of the key ones was, "I want to know what the business thinks of us, and whether we are trending in the right direction."
Trumping the frustration
The reason that’s a good one is that no amount of being a good guy, being nice to people, or being friendly in meetings is going to trump the frustration a business person feels if the application is not doing what they need it to do. Either it's got too many defects, it takes too long to enhance, or it’s too painful to get anything done. There are a host of things.
So, we designed a relatively simple customer survey. It was something we executed, probably biannually, and that became one of the ways we tracked how we were trending. Are we going in the right direction? There are endless, complex KPIs, but that’s a simple one I would pluck out as being a way of simply tracking, "Are we getting better or worse, or are we just sort of treading water?"
Gardner: And, Mark, when we look at how progress has been made, we need not only look at the end-user perceptions and results from surveys, but perhaps we also need to look at the development team, the ops team, and the actual practitioners here. So, is there a way of gauging success based on what the team does and how well they're able to let go of the legacy mechanisms they've had over the years?
Sarbiewski: We talk about this a lot. We see pressure from the business to change how we do things and the technologies we use. From the business side, you see it in a variety of ways. You see, "Oh, it’s the consumerization of IT, and what I see in my consumer world I want in IT. I see this all moving fast and I don’t feel my business moving." You see that pressure.
But, you absolutely see pressure to change from the bottom up, from the teams themselves: we want to work in a different way, we want to be able to execute faster. The whole move to agile has been, in large part if not primarily, driven from the development and delivery teams up. So, there is a huge motivation there.
And they are going to look at a variety of things. They are going to look at things, as Brad said, like customer satisfaction as part of that. How quickly does a change request get turned around?
That’s a pretty easy metric, because the changes come into systems like the service desk. There's a request. When did that thing get requested, and when did it actually get executed? You can start to look at things like that, and as you see improvement, not only in responsiveness but also in the number of issues going down, those are things the team should be looking at as great measurements.
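As a rough illustration of the turnaround metric Mark describes, here is a small sketch, not HP tooling, that computes mean request-to-delivery time from service-desk timestamps; the record format is an assumption.

```python
from datetime import datetime, timedelta

# Hypothetical service-desk records: (requested, delivered-to-production) timestamps
change_requests = [
    (datetime(2010, 9, 1), datetime(2010, 9, 20)),
    (datetime(2010, 9, 5), datetime(2010, 10, 1)),
    (datetime(2010, 9, 12), datetime(2010, 9, 25)),
]

def mean_turnaround(requests: list[tuple[datetime, datetime]]) -> timedelta:
    """Average elapsed time from change request to delivery into production."""
    total = sum(((delivered - requested) for requested, delivered in requests), timedelta())
    return total / len(requests)

print(f"Mean turnaround: {mean_turnaround(change_requests).days} days")
# Tracking this number release over release shows whether responsiveness is improving.
```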
I often counsel clients to set up some MBOs and reward structures around that too, because this is something the business is going to feel. It’s not just what’s there at release, what we found, and how it looks. It’s really that first 90 or 180 days of use in the business. I'm going to take a snapshot, and if it’s good, if we are constantly improving on that and hitting our targets, that’s where we get our bonuses.
It’s that result. And it shifts it from, I hit my date, I threw it over the wall to ops, and I washed my hands of it. No. We're all in business here. We're not in IT. We're in business, and business means this thing is running like it’s supposed to. That means apps and ops combining and taking a measure at 180 days.
There’s a lot of pride when you see the metrics go the right way. The feedback I've seen from our clients that do this really well is where the business comes back and says, "Oh my God. The responsiveness is incredible. Even if I'm not getting the massive stuff that I used to get once every two years, I'm seeing movement on a regular basis, and I love it." A lot of the clients we talk to are really fired up about that.
Those are the kinds of things to strive for, look for, and build a great feedback loop around with your delivery teams.
Important points
Hipps: There's an important point there. As people know, there is an endless number of KPIs available to you, and all sorts of people who recommend which ones are best. We probably didn’t make it this explicit in this version of the book, and maybe this goes in the next revision, but when it comes to how you're going to measure your success, I'd look at a few things in terms of the kinds of measurements you want to track.
First of all, I don’t know that I would pick more than three or four. You find yourself with six, seven, eight different things you are trying to stay on top of and measure, and it becomes its own game. I would keep it as simple as humanly possible. Three is a nice number of measurements.
The second thing is, I would want it to be pretty doggone intuitive what the business value is, if we are doing well in the measurement. I wouldn’t want to have to go through too many mathematical steps to get back to what the value is to the business, as I look at whatever measurement I have chosen to evaluate myself by.
The third aspect gets to what Mark was just saying: in an ideal world, at least one of the measurements, if not all of them, would speak to how well we're working pan-IT. It's not just how well we're building the application or how quickly we're getting it pushed into operations' hands, but how well we're working together as teams, as developers, as folks on the operations side, as planners, and as enterprise architects.
Mark was talking about looking at mean time from change request to production. Well, that's an example of the entire IT supply chain right there. Presumably, if I set some relatively easy target and start trending in that direction, then I can have at least some sense of satisfaction that we must be getting better at working together in ways we didn't before.
Gardner: You have four traits that you point out: predictability, repeatability, quality, and change readiness. Mark, maybe you could drill into these and explain why they proved to be so important for these top players?
Sarbiewski: We've done numerous surveys, and there are lots of other surveys out there, about what the business is asking of these application teams and of IT in general. For the last couple of years, it's been pretty consistent. Surprisingly, to some degree, agility and innovation are right at the top of the list. Cost is up there in third place in our surveys. So, it's hugely important, but it's not the first thing, and that's almost a little counterintuitive.
What we hear from our clients is that things are hyper-competitive and that technology, in particular software and applications, is a huge competitive advantage. So, our ability to move fast and beat our competitors to the punch with capability is enormously important.
You turn that around. Suppose I'm an application executive and I own this problem. How am I going to deliver that to the business? I've done all kinds of things to try to make that happen. I've brought in automation. I'm bringing in management. I'm outsourcing to drive cost down. I'm adopting new technologies for that rich experience. I'm introducing a whole host of change to meet those business objectives.
Have to deliver
But, at the end of the day, I have to be able to deliver every time. I've got to know when I'm going to deliver. That's absolutely critical to delivering agility. What it means to me as an app owner is that I'm ready to make change. And that's a big statement.
So that's where change readiness comes in. Have I architected for change? It's not just that my people are ready for it; my processes support it, my software itself is changeable, and I have automation so that I can make a change and know whether I've broken something else.
I'm trying to deliver that innovation and agility for the business, and I've introduced a whole host of change to get there. I have to manage this in an extraordinary way, in a different way than I've done in the past. What's going to help me is being able to predict where I'm going to land, repeat that for every project I get, and be change ready without sacrificing quality.
I have to do all that and keep quality high. Those become the North Star principles that I want to keep my team focused on, thinking about how things like being change ready facilitate the agility that the business wants.
Gardner: Brad, change ready really resonates, nowadays. We've got cloud computing in many people's minds as something important for them to be focused on. We've got mobile computing and how that impacts enterprises and their processes. And we're looking more at sort of the social business with collaboration and rich sharing of data and information.
So, change readiness seems to be the norm, or perhaps am I overstating that?
Hipps: No, I think that's right. Speaking from the application domain, our friends in the agile communities have been the leading champions of this notion for a long time in applications. Our default stand was one of being change averse.
By that, I mean that there was this whole contractual relationship with business. You tell us what you need, and we're going to document it as best as we can, down to having all the semicolons in the right place.
We're going to break out the quill pens and ink our signatures. Forever shall it be, and if you change anything here, we're going to hit you with a request for change, and it will go through a cycle of six weeks, and maybe we'll agree to it, etc., etc. For the longest time, that was the mindset. You can look at that and say it's awful, but when I had far fewer applications, and they took far longer to build, it was just the way of the world.
The recognition today, for all of the reasons we've talked about in this podcast and others, is that our applications are everywhere. They're always on. There is nothing I can do in a business that isn't going to touch an application. It fundamentally means we need to sweep that notion of being change averse from the table. Instead, we need to be in a position of embracing change. We do need to be change ready.
It's not that the business is going to sit back and say, "You're right. We're sorry. We won't ask for so many changes." That isn't going to happen. From an IT and an apps perspective, we need to be oriented and positioned so that, rather than seeing change as something to fear or protect ourselves from, we embrace it as a fact of life.
The leading traits
As Mark said, we need to be architected and engineered, from a people, process, and technology perspective, to put ourselves in a position to be that way. In the book, we talk a bit about some of the principles we think come into play for change-ready organizations. That's why it is one of the leading traits, one of the leading principles, in world-class organizations.
Gardner: Okay, we've talked about an awful lot, and this book encompasses an awful lot. It might be difficult for people to get a handle on where to start, but you've addressed that as well. You've conceptualized this along three lines: think big, start small, and then scale quickly and adapt. Let's go through these, starting with you, Brad. Think big -- what does that mean?
Hipps: It could be a mantra of sorts: think big, start small, scale quickly. The basic idea of "think big" is that you want to spend some time making sure you've all got a shared vision of where you want to be, and we talk a bit about that, whether it's the maturity model or these principles of predictability, repeatability, etc.
Hopefully we've set out at least some suggested guidelines for constructing what your end state might look like. But the point about thinking big is that, as we all know, certainly in IT but probably anywhere, it's very easy to fall into a state of analysis paralysis. We've got to figure out exactly the right metrics to decide exactly what we're going to be. We've got to figure out precisely what our timeline is.
We can sort of borrow from our friends in agile, who have said that you've got to understand the perimeter of what it is you want to accomplish, but it's bound to change. Those perimeters are bound to shift. You're bound to discover things about yourselves, your organizations, what's feasible, and what's not in the process of actually trying to get there.
So, it's important to set yourself an objective and make sure it's a shared objective. It's just as critical to get going to not fall into a trap of endless planning and reconsideration of plans.
If you then pluck the low-hanging fruit, the easy things you could do starting this week, starting tomorrow, to advance at least generally toward that end objective, that's great. Then it becomes a matter of just continuing to move, scale, and adapt.
Somewhere we make the point that, as an application team member, I cared a lot more about measurable progress, seeing things actually advancing and getting better, than I cared about how shiningly brilliant the end state was going to be or exactly how we were going to get there.
I was far more interested in generally getting a sense of what our North Star was, and then getting going, and actually seeing progress. So that, in a nutshell, is what we mean when we say, think big, start small, scale quickly and adapt.
Gardner: Mark, any further thoughts on this philosophical approach to the lifecycle of applications?
Unconscious sabotage
Sarbiewski: Absolutely. I spent a number of years in a former life doing process change for companies. There were some trade secrets in the firm I worked with. They recognized an unchanging fact: people can consciously or unconsciously sabotage the greatest plans, any process you want, or any kind of change.
You have to start with people. It does involve all the people-process-technology in that order, but it's the people considerations. Do we have that shared vision? Who are the skeptics? Where do we think this could go wrong? Are we committed to getting there?
There were some questions we’d ask as we were embarking on making this change. First of all, we asked: what project or what pilot, if we made these changes on it, would make people in the organization say, "If it works for that project, it will work for us as an organization"?
So, find that visible pilot project, not one that’s an exception. Don’t find one where there are four developers and they are in the same room. If you try something new, people can say, "Well, of course, it worked for that, but that’s so atypical." So, find that project.
Beyond that, find the champion who is really respected in the organization but skeptical of the change. We would go looking for one or two people who were open-minded enough to really give it a go, but maybe steeped in how we’ve done it, and who had been very successful in how we’ve done it. Then, people can say, "That’s the kind of project we do, so you need to be able to make it work there. If Joe or Mary or whoever it is buys into it and it works for them, I believe."
The one other thing I’d say is start thinking about those types of metrics, those cross-silo and lifecycle-oriented goals and metrics. We talked about one just a bit ago, where we reward our delivery teams after six months of being live. Maybe we reward the operations and dev teams jointly if they’ve met those customer satisfaction goals, those service level agreements (SLAs), and those low counts of defects in production. You start to create a different dynamic when you think more about lifecycle goals and cross-team goals.
Gardner: Now, I know these books involve a tremendous amount of work, and it’s something you really have to pour your heart into. Brad, the last question to you. What do you hope happens as a result of this book?
Hipps: The spirit of this book, and probably the spirit of a lot of these kinds of books, is that our hope is that somebody gets this book, and maybe doesn’t read it cover to cover. That’s okay. They pick and choose their places, but they take away one idea that’s actually implementable. If I have one hope, it’s that we haven’t been so pie-in-the-sky in our thinking that somebody reads this and says, "Yeah, nice idea, but it will never happen here."
So, that would be my hope: that somebody takes away one single idea that’s implementable in the near term within their organization.
Gardner: And in fairness I should offer the same question to you, Mark. What do you hope happens as a result of the book?
Sarbiewski: You mean, besides making The New York Times bestseller list? I can’t hope for that.
Gardner: Regardless of its reach.
Software is important
Sarbiewski: What I’m hoping is that, in these hundred-odd pages, the executives in the enterprises we're talking to have an opportunity to take a couple of hours and have somebody give them a chance to think about how important software is, and what the true life of an application is.
Once you start down that path, you start to say, wait a minute, 10 or 15 years of evolving this capability, what does that mean? When things are live and I’ve got a hot request from the business to make a change, what needs to happen? How much money will I spend on that?
The one "aha" moment is seeing that the 12 to 15 years matter, when I’m delivering value to the business and innovating for the business. In order to be successful during those 10 to 15 years, I will make different decisions when I build this thing. I will focus on a process.
I will build the automation to a different level, because I’ve stopped thinking that my job is done when I go live. If that’s truly the job, you’ll make a lot of shortcut decisions to get to go live. But, if you think bigger, you think about the full life of an application and what it delivers to the business. All of a sudden, it makes a whole lot more sense to do things a bit differently, to set myself up for 10 years or 15 years of success with the business, as opposed to a moment when I can say, "Yup, I achieved a milestone."
Gardner: Very good, but we have to leave it there. We’ve been examining how our shifting application and IT landscape has provided a huge opening for improving how applications are built, consumed, and managed, using new application lifecycle management methods and concepts.
I want to thank our guests, the authors of our book that we’ve been discussing. We’re here with Mark Sarbiewski, Vice President of Marketing for HP Applications. Thanks so much, Mark.
Sarbiewski: Thank you.
Gardner: And also Brad Hipps, Product Marketing Manager for HP Applications. Thanks to you, Brad.
Hipps: Thanks, Dana.
Gardner: This is the last in a series of three podcasts on ALM, in which we’ve been examining a new book on the subject, The Applications Handbook: A Guide to Mastering the Modern Application Lifecycle. It offers some powerful methods for attaining overall business services delivery improvement. Thanks for joining our series, and we hope you have a chance to get the book and examine it in more detail.
This is Dana Gardner, Principal Analyst at Interarbor Solutions. You've been listening to a sponsored BriefingsDirect podcast. Thanks for listening, and come back next time.
Transcript of a sponsored BriefingsDirect podcast, the third in a series discussing a new book on ALM and its goal of helping businesses become change ready. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.