Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.
Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the HP Discover 2011 conference in Las Vegas. We're here on the Discover show floor the week of June 6 to explore some major enterprise IT solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers.
I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout this series of HP-sponsored Discover live discussions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]
We’re here now with an HP customer and an aggressive adopter of modern application development techniques, someone who is beginning to span the dev-ops divide and gaining some value from agile development methodologies. Please join me in welcoming Steve Katz, Manager of Software Performance and Quality at Seagate Technology. Welcome.
Steve Katz: Thank you very much, Dana.
Gardner: You know, we’ve heard a lot here about integration and converged infrastructure, and we’ve certainly heard in the past from HP about the need for a solid, integrated system of record when it comes to application lifecycle management (ALM). Tell us about Seagate -- what do you do there, and who are you? Then, what’s the problem? What are you trying to solve by adopting some of these newer development methodologies and products?
Katz: First of all, Seagate is one of the largest manufacturers of rotating-media hard disk drives, and we’re also into solid-state and hybrid drives. Last quarter, we shipped about 50 million drives, and that number continues to grow every quarter.
As you can imagine, with that many products -- and we have a large product line and a large supply chain -- the complexities of making that happen, both from a supply chain perspective and also from a business perspective, are very complicated and get more complicated every day.
The Holy Grail for us would definitely be an integrated approach to software development that incorporates not only the development activities, but also all of the testing, monitoring, and provisioning, and all of the quality checks and balances we want to have in place to make sure that our applications meet the needs of our customers.
In the last couple of years, with the explosion of cloud, the jump to virtual machines (VMs), the virtualization of the data center, and also global operations, global development teams, new protocols, and new applications, most of what we do, rather than developing from scratch, is integrate third-party applications to meet our needs. That brings a whole new litany of challenges to the table, because one vendor’s Web 2.0 protocol standard is completely different from another vendor’s Web 2.0 protocol standard. Those are all the challenges.
Also, we're adopting, and have been adopting, more of the agile techniques, because we can deliver quanta of capability and performance at different intervals, so we can start small, get bigger, and keep adding more functionality. Basically, it lets us deliver more, more quickly, but also gives us the room to grow and be able to adapt to the changing customer needs, because in the market, things change every day.
So for us, the goal has been to get all those things together early in the program and to have a collaboration platform where we can capture all the different stakeholders’ views and needs at the very beginning of the program, when it’s cheapest and most effective to do so. We’re not there. I don’t know if anybody will ever be there, but we’ve made a lot of effort and feel like we’ve gained a lot of ground.
Early adoption
The dev-ops perspective has really interested us, and we have been doing some of that early adoption -- early engagement with our customers and our business projects very early in the game -- for performance testing.
We get into the project early and start understanding what the requirements are for performance. We don’t just cross our fingers and hope for the best down the road; we really put some hard metrics around what the expectations are for performance. What’s the transfer function? What’s the correlation between performance and the infrastructure that needs to deliver that performance? Finally, what are the customer needs, and how do you measure them?
That’s been a huge boon for us, because it’s helped us script those tests early in the project and actually look at the unit-level pieces, especially in each iteration of the agile process. We can break the performance down and do testing to make sure we’ve optimized each piece to be as good as possible.
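To make that concrete, here is a minimal sketch of the kind of unit-level, per-iteration performance check described above. It is purely illustrative: the function under test, the latency budget, and the sample size are assumptions made up for the example, not Seagate’s actual scripts or HP Performance Center output.

```python
# Illustrative unit-level performance check run inside an agile iteration.
# LATENCY_BUDGET_MS and lookup_order_status() are hypothetical stand-ins.
import statistics
import time

LATENCY_BUDGET_MS = 250   # assumed per-transaction budget from the requirements
SAMPLE_SIZE = 50          # number of timed calls per check


def lookup_order_status(order_id: str) -> str:
    """Stand-in for the unit under test; replace with the real call."""
    time.sleep(0.01)  # simulate a small amount of work
    return "shipped"


def test_lookup_meets_latency_budget():
    samples_ms = []
    for i in range(SAMPLE_SIZE):
        start = time.perf_counter()
        lookup_order_status(f"order-{i}")
        samples_ms.append((time.perf_counter() - start) * 1000)

    p95_ms = statistics.quantiles(samples_ms, n=20)[-1]  # 95th-percentile latency
    assert p95_ms <= LATENCY_BUDGET_MS, (
        f"95th percentile {p95_ms:.1f} ms exceeds the {LATENCY_BUDGET_MS} ms budget"
    )


if __name__ == "__main__":
    test_lookup_meets_latency_budget()
    print("Latency budget check passed")
```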
Now, when you add in the needs for VM provisioning, storage, networking, and databases, the problem starts to mushroom and get more complex. So, for a long time, we've been big users of HP Quality Center (QC), which is what we use to gather requirements, build test plans, and link those requirements to the test plans and, ultimately, to successful tests and defects. We have traceability from what the customer needs to our ability to validate that we delivered on that need. And it has worked well.
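As a rough illustration of that traceability chain -- requirements linked to tests, and tests linked to results and defects -- here is a toy sketch. The classes and fields are assumptions made up for the example; they are not Quality Center’s actual data model.

```python
# Toy model of requirement-to-test-to-defect traceability (illustrative only).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Defect:
    defect_id: str
    summary: str


@dataclass
class TestCase:
    test_id: str
    name: str
    last_result: Optional[str] = None          # "passed", "failed", or None
    defects: List[Defect] = field(default_factory=list)


@dataclass
class Requirement:
    req_id: str
    description: str
    tests: List[TestCase] = field(default_factory=list)

    def is_validated(self) -> bool:
        # A requirement counts as validated only if it has test coverage and
        # every linked test passed with no open defects.
        return bool(self.tests) and all(
            t.last_result == "passed" and not t.defects for t in self.tests
        )


if __name__ == "__main__":
    req = Requirement("REQ-101", "Order status lookup responds within 250 ms")
    req.tests.append(TestCase("TC-7", "p95 latency under load", last_result="passed"))
    print(req.req_id, "validated:", req.is_validated())
```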
Then we have the performance testing, which was an add-on to that. And now the new ALM 11, by the way, marries the QC functionality and the Performance Center functionality. They're not two different things anymore. It's one thing, and that's the beauty of it for us.
That’s what we’ve been preaching and trying to work with our project teams on: any requirement is just a requirement, and how we decide to implement, fulfill, and test it is our choice. But having the QC and performance testing closer together has made a lot of sense for us and allowed us to go faster and cheaper, and end up with something that, in fact, is better.
Gardner: Let’s get a sense of the scale here. How many applications do you have in production and how many at any given time are in your development phases, going from the requirements to development and test?
Katz: The number of applications we have in production is in the 300-500 range, but as far as mission-critical, probably 30. As far as things that are on everybody’s radar, probably 50 or 60. In Business Service Management (BSM), we monitor about 50 or 60 applications, and we also have the lower-level monitors in place that are looking at infrastructure. Then, all of that data goes up to a single pane of glass, so we can get visibility into what the problems are.
The number of things we monitor is less important to us than the actual impact these particular applications have, not only on the customer’s experience, but also on our ability to support them. We need to make sure that whatever we do is, first of all, faster. I can’t afford to get a report every morning to see what broke in the last 24 hours. I need to know where the fires are today and what’s happening now, and then we need direct traceability out to the operator.
As soon as something goes wrong, the operator gets the information right away and either we’re doing auto-ticketing, or that operator is doing the triage to understand where the root cause is. A lot of that information comes from our dashboards, BSM, and Operations Manager. Then, they know what to do with that issue and who to send it to.
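For readers who want a picture of what that auto-ticketing and triage flow can look like, here is a purely illustrative sketch. The alert fields, routing table, and create_ticket() helper are assumptions made up for the example; they are not HP BSM or Operations Manager APIs.

```python
# Illustrative auto-ticketing flow: a monitoring alert arrives, is mapped to
# an owning team, and a ticket is opened immediately rather than waiting for
# a daily report. Everything here is a hypothetical stand-in.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Alert:
    application: str
    severity: str       # e.g. "critical" or "warning"
    message: str


# Hypothetical routing table: application name -> owning operations team.
ROUTING = {
    "order-portal": "ecommerce-ops",
    "supply-chain": "scm-ops",
}


def create_ticket(team: str, summary: str) -> str:
    """Stand-in for a real ticketing call; returns a made-up ticket ID."""
    ticket_id = f"TKT-{int(datetime.now(timezone.utc).timestamp())}"
    print(f"[{ticket_id}] assigned to {team}: {summary}")
    return ticket_id


def handle_alert(alert: Alert) -> Optional[str]:
    # Only auto-ticket alerts that demand immediate operator attention;
    # everything else is left for the operator's triage.
    if alert.severity != "critical":
        return None
    team = ROUTING.get(alert.application, "default-ops")
    return create_ticket(team, f"{alert.application}: {alert.message}")


if __name__ == "__main__":
    handle_alert(Alert("order-portal", "critical", "p95 response time over budget"))
```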
SaaS processes
We’ve subscribed to a number of internal cloud services and software-as-a-service (SaaS) offerings. For those kinds of things, we need to first make sure it’s not us before we go looking to find out what our software service providers are going to do about a problem. All of the BSM and all of the dev-ops work has helped us get to that point a little better.
The final piece of the puzzle that we’re trying to implement is the newer BSM and how we get that built into the process as well, because that’s just another piece of the puzzle.
Gardner: As you’re moving towards this adoption of the newer products and binding together dev and ops, what sort of paybacks are you expecting? Is this just allowing the green light to stay on more, where your performance and reliability are strong? Or are there some other benefits in terms of reducing the cycle time for development, agility, and being able to cut costs in some ways?
Katz: It’s two things for us. One is that the better job you do up front, the better job you’re going to do on the back end. Things are a lot cheaper and faster, and you can be a whole lot more agile in reacting to a problem. So we try to do a better job up front of understanding what the requirements are: not just what this application is or what it’s supposed to do, but how it’s supposed to affect the rest of our infrastructure, how it’s supposed to perform under stress, and what the critical quality, quality-of-service, and quality-of-experience aspects are that we need to look at.
Defining that up front helps us to be better and helps us develop and launch better products. In doing that, we find issues earlier in the process, when it’s a lot cheaper and a lot more effective to fix them.
On the back end, we need to be more agile. We need to get information faster and we need to be able to react to that information. So, when there’s a problem, we know about it as soon as possible, and we’re able to reduce our root-cause analysis and time to resolution.
Gardner: You’ve mentioned that you’re being aggressive with SaaS. I imagine you’re increasingly looking at cloud, and then, of course, everyone is thinking about mobile these days as well. Is there something about tying together dev-ops, creating a better ALM capability, that allows you to adopt technologies more rapidly?
Is the complexity and inertia of adopting some of these things reduced, so that you can move to them more rapidly and enjoy some productivity gains because of what you’ve been doing with ALM?
Katz: I look at that like a baseball team. My kids are in Little League right now, and we’re in the playoffs. When a team does well, you get this momentum. Success really feeds momentum, and we’ve had a lot of success with the dev-ops work, with pulling ALM, performance management, and BSM into our application development lifecycle. Just because of the momentum we’ve gotten from that, we’ve got a lot more openness to explore new items, to pull more information into the system, and to get more information into the single pane.
Before we had the success, the philosophy was, "I don’t have time to fix this. I don’t have time to add great new things," or, "I’ve got to go fix what I’ve got." But when you get a little bit of that momentum and you get the successes, there’s a lot more openness and willingness to see what happens. We’ve had HP helping us with that. They’re helping us describe what the next phase of the world looks like.
Gardner: Well, great. We’ve been hearing about adopting more modern and agile development methodologies and adopting some integrated systems of record to do that. We’ve been joined by Steve Katz. He is the Manager of Software Performance and Quality at Seagate Technology. Thanks so much.
Katz: Thanks, Dana. I always appreciate it.
Gardner: And thanks to our audience for joining this special BriefingsDirect podcast coming to you from the HP Discover 2011 Conference in Las Vegas.
I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this series of user experience discussions. Thanks again for listening, and come back next time.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Sponsor: HP.
Transcript of a BriefingsDirect podcast from HP Discover 2011 on how Seagate Technology is leveraging HP's ALM tools to conduct development and dev-ops faster, better and cheaper. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.
You may also be interested in:
- HP delivers applications appliance solutions that leverage converged infrastructure for virtualization, data management
- HP takes plunge on dual cloud bursting: public and-or private apps support comes of age
- HP rolls out EcoPOD modular data center, provides high-density converged infrastructure with extreme energy efficiency
- HP at Discover releases converged infrastructure products and services aimed at helping IT migrate rapidly to the future
- HP's IT Performance Suite empowers IT leaders with unified view into total operations, costs
- HP Delivers NMC 9.1 as New Demands on Network Management Require Secure, Integrated, and Automated Response