Tuesday, June 11, 2013

HP Experts Analyze and Explain the HAVEn Big Data News From HP Discover Conference

Transcript of a BriefingsDirect podcast on how HP's new HAVEn Initiative puts the full power and breadth of big data in the hands of companies.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and how it’s making an impact on people’s lives.

Once again, we're focusing on how IT leaders are improving their services' performance to deliver better experiences and payoffs for businesses and end users alike, and this time we're coming to you directly from the HP Discover 2013 Conference in Las Vegas. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

We're here in the week of June 10 and we are now joined by our co-host, Chief Evangelist at HP Software, Paul Muller. Welcome, Paul.

Paul Muller: Dana, I'm surprised your voice is holding out after this week.

Gardner: Right, it's been quite busy. A lot has been said about big data in the last year, and HP has announced a broader vision for businesses to gain actionable intelligence from literally a universe of potential sources and data types.

We're now joined by two additional HP executives to explore the implication and business values from the HAVEn news at Discover. Please join me now in welcoming our guests. First is Chris Selland, Vice President of Marketing at HP Vertica. Welcome, Chris.

Chris Selland: Thanks Dana, it’s great to be here. It's great to work with you again, Paul, and I'm really looking forward to this.

Gardner: And we're joined by Tom Norton, Vice President for Big Data Technology Services at HP. Welcome, Tom.

Tom Norton: Hello, Dana.

Gardner: Let's go to Chris first. Until fairly recently, only critical data was given this high-falutin' treatment for analysis, warehousing, applying business intelligence (BI) tools, making sure that it was backed up and treated almost as if it were a cherished child.

But almost overnight, the savvy businesses, those looking for business results, have become more interested in all the data, information of any kind, so that they can run their businesses and find inferences in areas they maybe didn't understand or didn't even know about.

So what do you think has happened? Why have we moved from this BI-as-sacred-ivory-tower approach to something more pedestrian?

Competitive issue

Selland: First and foremost, it's really that it's become a competitive issue. A competitiveness issue might be a better way to say it. Just about every company pays attention to its customers.

You can tell senior management that this data is important and that we're going to analyze it and give them insights about it, but you start realizing that we have an opportunity to grow our business, or we're losing business because we're not doing a good enough job, or we have an opportunity to do a better job with data.

Social media has been the tip of the arrow here, because just about all industries all of a sudden realize that there is all this data out there floating around. Our customers are actually talking to each other and talking about us, and what are we doing about that? That's brought a lot of attention above and beyond the CIO and made this an issue that the CMO, the CFO, the COO, and the CEO start to care about.

We’ll drill down on this, as we go through the discussion today. Big data is about far more than social media, but I do think social media gets a lot of the credit for making companies pay a lot more attention. It's, "Wait a minute. There is all this data, and we really need to be doing something with this."

Gardner: Paul Muller, as you travel around the world and speak with businesses and governments, are you seeing a shift in the way that people perceive data as an asset, or have they shifted their thinking about how they want to exploit it?

Muller: At the risk of reaching for the third rail here, which is kind of a San Francisco West Coast joke, in the conversations that I'm having consistently around the globe, executives, both CIOs and non-IT executives, are realizing that big data is probably not the most helpful phrase. It's not the size of the data that matters, but what you do with it.

It's about finding the connections between different data sets to help you improve competitiveness, help you improve efficiency if you're in the public sector, and help you detect fraud patterns. It's about what you do with the data, that connected intelligence, that matters.

To make that work, it’s about not just the volume of data. That certainly helps, not having to throw out my data or overly summarize it. Having high-fidelity data absolutely helps, but it’s also the variety of data. Less than 15 percent of what we deal with on a daily basis is in structured form.

Most of the people I meet are still dealing with information in rows and columns, because traditionally that's what a computer has understood. They haven't dealt with the unstructured things like video, audio, images, and, for that matter, social, as Chris just mentioned.

Finally, it's about timeliness. Nobody wants to be making tomorrow's decision with last week's data, if that makes sense. In other words, a lot of the decisions we have to make are made looking through a rear-view mirror, which is not helpful if you're trying to operate in the present.

Variety of systems

Gardner: Chris, it seems as if we have more interest, more business activities, and more constituencies within businesses looking for inputs that help them make decisions or analysis. But we’ve got a variety of systems. We’ve got relational databases, flat files, and all sorts of social APIs that we can draw on.

How do you make sense of this? Is there a common thread now? Is there a way for us to think about data beyond the traditional IT definition of data, and what does that mean for actually then getting access and managing it?

Selland: To pick up on what Paul was saying, I have a love-hate relationship with the term "big data." The love part is the fact that it really has been adopted. People gravitate to it and are starting to realize that there is something here they need to pay attention to. And that's not just IT.

It's funny, because if you go to something like Wikipedia and look for the origins of the term "big data," you'll find that in IT circles we've been talking about big data for about a dozen years. There are probably five or six different people credited with it. There is a discussion on Quora you can look up if you're interested in the creation of the term, which was about a dozen years ago.

As a matter of fact, this is the problem that Vertica was created to solve: that as this big data thing became real, which it is now, traditional databases would be unable to handle it. The good news is that there has been a recognition in business circles outside the CIO -- the CMO, the COO, and the CFO -- and that has just started to happen in the last 18 to 24 months, in a big way.

The love part is that people are paying attention to big data. The hate part is that it’s much more than “big”.

I like the Doug Laney definition of big data. Doug is an analyst who is now at Gartner, although when he coined the term, he was actually at another firm. He said it is the 3Vs -- volume, velocity, and variety. Volume is a part of it, and it's certainly about big.

But as Paul was just talking about, there is also a tremendous variety these days. We've already talked a little bit about social media, but the fact that people equate "social media" with "big data" is another pet peeve of mine.

Social media is driving big data, but it's only a very small part of it. It's an important part, though, because it's what has brought a lot of that other attention. You're looking at audio, video, and all of this user-created content, and there is such a variety. Then, of course, it's coming in so fast. Then, we sometimes like to add the fourth V, which is value. How is this all going to make money for me? What do we do about this strategically as a business?

So there is just a lot going on here, and this is really what's driven the HAVEn initiative and the HAVEn strategy. We have this tremendous portfolio of assets here at HP, from software to hardware to services, and HAVEn is about putting that portfolio behind these different analytic engines -- Vertica, IDOL, Logger, and Hadoop -- that complement each other and integrate to build solutions.

Broad strategy

So how do we bring this together under a single broad strategy to help companies and global enterprises get their hands around all of this? Because it's a lot more than big. It's great that the term has taken off, but it's a lot bigger than that.

Gardner: All right. Before we go into the HAVEn announcement, I’d like to remind our readers and listeners that there is a lot of information available, if they search online for HP, HAVEn, or HP Discover 2013. But before we go there, let’s go to Tom Norton.

We've been talking about data, big data, and the movement and shift in the market, and we also find ourselves talking about platforms and certain types of data formats and technologies, but there is more than that. It seems that if we're going to change these organizations so that they use data more effectively, we need to go beyond the technology. Give me an idea, from the technology services perspective, of what else needs to be considered when we go about these shifts in the market.

Norton: When you think about a data platform, that’s not new. Both Paul and Chris mentioned that data platforms and data analysis have been around for years, but this is a shift. It is different in a number of ways: We mentioned velocity, volume, and variety, but there is also a demand, as Chris mentioned, to have this access to information faster.

The traditional systems or platforms that IT is used to providing are now becoming legacy. In other words, they're not providing the type of service level to meet the workload demands of the organization. So IT is faced with the challenge of how to transform that BI environment to more of a data refinement model or a big data ecosystem, if you want to still hang on to big data as a term.

IT is challenged there, and the overall goal is to provide the service level that Paul mentioned, to support, through timeliness, the types of actions the business wants to take. So the business is now demanding action from IT.

The ability to respond quickly to this platform transformation is what we want to help our customers with from a technology services perspective. How can we speed the maturity, or speed the transformation, of those traditional BI systems, which are more sequential and more structured, so they can deal with the demands of the business to have relevant and refined information available at the time it's needed, whether that be 1.5 seconds or 15 hours?

The business needs the information to be able to compete and IT needs to be able to adapt, to have that kind of flexible, secure, and high-performing platform that can deal with the different complexities of raw data that’s available to them today.

Gardner: Tom, on other programs, we’ve talked about application modernization and application transformation. We're following a similar trajectory with data. We're bringing in more data types, but we don’t necessarily want to assimilate them into a common warehouse or format. We're looking to do integration with the data, do hybrid activities with the data, buy-and-sell data, or barter it. It’s really transformed data.

It used to be that data came about as a byproduct of the application. So is the role of services for managing the data continuum and lifecycle similar to what we did with applications over the past 10 years?

Similar to cloud

Norton: I think it's similar. It's actually very similar to cloud in some ways, when you think of a platform that enables a service. When you consider the models that people are looking at today concerning cloud, there is a maturity reality that goes with them. You start with a platform, and then you start looking at the service-level catalogs, automation, and security, and then you look at the presentation layers.

Data platforms are exactly the same. You have to take what was a very singular service and start looking at more complex content. So you have to consider data sources, which could come from many different places. You have to consider data sources from a cloud, from a traditional BI system, or from other data sources within the organization.

Acquiring data in that context has to be considered. Then, as was mentioned earlier, you have to consider that processing and the service levels for processing of that raw material to produce refined information that’s useful.

And that's very similar to what you see with cloud. The performance, from a presentation perspective, of how quickly the environment is able to deliver an app is very similar to how quickly you can present information that's useful to the business. Then you have to look at the presentation format.

We've had discussions about mobile users, for example, on how social media not only produces information, but there are expectations from mobile users today of how they can get access to it. Considering that format, it's very similar to what we've done in terms of applications and very similar to the approach that you need to take. When you look at a cloud platform, you have to look at that.

Data is unique in that it is both the platform and the service. It's slightly different from cloud, at least in that way, where you're presenting services from the platform. Data is unique because there is a specialized platform that needs to be integrated, but you have to consider the information service that's presented and approach it like you would an application. It's a really interesting approach and an interesting transformation for IT.

Gardner: Chris Selland, let’s get back to the news of the day of the HAVEn initiative, the HAVEn vision. Tell us in a nutshell what it is, what it includes, and then we can talk about what it means.

Selland: I talked about the tip of the spear before. In this case, the tip of the spear is our analytic engines, our analytic platforms: the Vertica Analytics Platform, Autonomy IDOL, and ArcSight Logger. HAVEn is about taking this entire HP portfolio and then combining those with the power of Hadoop.

We have been talking about our open partnership. There are a number of Hadoop distributions, and we support them all. It's taking that software platform, running it on HP's Converged Infrastructure, wrapping HP's services around it, and then enabling our customers, our consultants of course, our channel partners, our systems integrators, and our resellers to build the next-generation, analytics-enabled big-data solutions that customers need.

I keep saying that big data is in a classic crossing-the-chasm moment -- for those of you who have read the book. I don't want to do a primer on the book, but it's basically about the point when the attention around a topic starts to shift. IT still remains very much at the center, but now the technology becomes a business enabler.

Changing the business

It’s when technology starts to change the business, and that’s what’s going on right now. When you're talking to businesspeople, you can't talk about platforms and you can’t talk about speeds and feeds. When you say Hadoop to a businessperson they usually say, "God bless you," these days.

You have to talk about customer analytics. You have to talk about preventing fraud. You have to talk about being able to operationally be more effective, more profitable, and all of those things that drive the business. It really becomes more-and-more a solutions discussion.

HAVEn is the HP platform that provides our customers, our partners, and of course, our consultants, when our customers choose to have us do it for them, the ability to deliver these solutions. They're big-data solutions, analytic-enabled solutions. They're the solutions that companies, organizations, and global enterprises need to take their businesses forward and to make their customers more satisfied to become more profitable. That's what HAVEn is all about, the fundamental story behind the HAVEn initiative.

Gardner: It's very interesting and fascinating to think about these working in some sort of concert. When I first looked at the announcement and heard the presentations, I thought, "Oh, ArcSight. Isn't that an odd man out? Isn't that an outlier?"

Why, in your understanding, would having great insight into all the data from your systems be relevant alongside the data that you're deriving from your applications, your outside data sources, your customer interactions, the social media, the whole kit and caboodle? Help me understand better why ArcSight is actually a good partner.

Selland: It really goes back to what I said earlier, that even though social media has been the tip of the spear here for business attention around big data, it’s much, much bigger than that. One of the terms that people are starting to hear now, and you're going to hear a lot more about, is the "Internet of things."

There are various third-party estimates out there that within the next few years, there are going to be about 150 sensors per person worldwide, and that number is going to keep growing. Think about all the things that go on in your car, on a factory floor, in a supply chain.

We tend to think about the fact that everybody is walking around with a computer in their pocket these days, a smartphone, but that’s not just communicating with you. It’s communicating with the network to provide quality of service, to monitor what’s going on, to obviously manage your calls and your downloads, and everything else.

There's so much data flowing around out there. The Logger engine essentially reads, interprets, and connects to all of these different sources: various types of machines, system log files, and real-time data as well. It's not just about being able to interpret social media. It's being able to pull in all of these different data types.
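To make that idea a bit more concrete, here is a minimal, hypothetical Python sketch of the kind of work involved in turning raw machine-log lines into structured events that an analytics engine could then index. It is purely illustrative: the log format, field names, and parsing logic are assumptions for the example, not the ArcSight Logger implementation or API.

```python
import re
from datetime import datetime

# Hypothetical syslog-style line, e.g.:
# "2013-06-10T14:03:22Z web01 app[4412]: ERROR payment gateway timeout"
LOG_PATTERN = re.compile(
    r"(?P<ts>\S+)\s+(?P<host>\S+)\s+(?P<proc>[^:]+):\s+(?P<level>\w+)\s+(?P<msg>.*)"
)

def parse_line(line):
    """Turn one raw log line into a structured event dict, or None if it doesn't match."""
    m = LOG_PATTERN.match(line)
    if not m:
        return None
    event = m.groupdict()
    event["ts"] = datetime.strptime(event["ts"], "%Y-%m-%dT%H:%M:%SZ")
    return event

if __name__ == "__main__":
    sample = "2013-06-10T14:03:22Z web01 app[4412]: ERROR payment gateway timeout"
    print(parse_line(sample))
```

In practice a collector would handle many formats and stream millions of such events, but the basic step of imposing structure on raw machine data is the same.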

As the internet of things grows, and the sensors go everywhere, McKinsey estimates that, just to give a tangible example, a typical jet engine throws off about two terabytes per hour of data. What do you do with all that data? How do you manage that data?

Internet of things

Think about all of our IT systems, all of our physical systems, all of our network systems. Think about all these sensors that are in this Internet of things. It’s becoming huge and the ability to process this data from machines, systems, and log files is a huge, huge part of this.

Gardner: Paul Muller, we understand now that we can bring together the benefits of Hadoop, of Autonomy's breadth and depth with unstructured information, of Vertica's speed and ability to do analytics very rapidly and efficiently, and of ArcSight with machine and other data. How do you take this out to an enterprise, to a C-class group of people, and make them understand that you are, in fact, giving them tools that really weren't available before, and certainly weren't cobbled together in such a way? How do you put this in business terms so they can grasp just how powerful this really is?

Muller: Dana, did you just say Hadoop?

Gardner: I did.

Muller: Bless you.

Selland: Well played.

Muller: Had to be done, Chris. That’s ultimately the question. Let me just give you an example that we talk about and that I share with people quite frequently, and it usually generates a bit of a smirk. We’ve all been on the telephone and called a company or a public service, where you've been told by the machine that the call will be monitored for quality of service purposes. And I am sure we’re all thinking, "Gosh, if only."

The scary part is that all those calls are recorded. They're not only recorded, but they're recorded digitally. In other words, they're recorded to a computer. Much like the airline example that Chris just gave, almost all of that data is habitually thrown away, unless there is an exception to the rule.

If there is a problem with the flight, or if there is some complaint about the call that escalates to senior management, they may eventually look at it. But think about how much information, how much valuable insight, is thrown away on a daily basis across a company, across the country, across the planet. What we've aimed to do with HAVEn is liberate that information so we can find that connected intelligence.

In order to do that, we get back to this key concept that you need to be able to integrate telemetry from your IT systems. What's happening inside them today? For example, if somebody sends an email to somebody outside of the company, that typically will spawn questions: who did they send that email to? Was there an attachment? Is it a piece of sensitive information or not? Typically, that would require a person to look at it.

Finally, it's being able to correlate patterns of activity that are relevant to revenue, earnings, or whatever that might be. What we're able to do with the HAVEn announcement is combine those concepts into one integrated platform. The power of that would be something like that call-center example. We can use Autonomy technology to listen to the call, to understand people's emotions, and whether they've said, "If you don't solve this problem, I'm never going to buy from you again."

Take that nugget of information, marry it to things like whether they are a high-net-worth customer, what their spending patterns have been, and whether they're socially active and more likely to tell people about their bad experience, and correlate all of that in real-time to help give you insight. That's the sort of thing HAVEn can do, and that's a real-world application that we're trying to communicate to the business.
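As a rough illustration of the correlation being described, here is a small Python sketch that joins sentiment flags from call records to customer-value data and surfaces high-value customers at risk. Every record, field name, and threshold is invented for the example; it does not use Autonomy, Vertica, or any HP API.

```python
# Toy records standing in for (a) sentiment extracted from recorded calls
# and (b) customer value data from a warehouse; all fields are made up.
calls = [
    {"customer_id": 101, "sentiment": -0.8, "said_churn_phrase": True},
    {"customer_id": 102, "sentiment": 0.4, "said_churn_phrase": False},
]
customers = {
    101: {"annual_spend": 250_000, "social_followers": 12_000},
    102: {"annual_spend": 1_200, "social_followers": 40},
}

def flag_at_risk(calls, customers, spend_threshold=100_000):
    """Join call sentiment to customer value and flag high-value customers at risk."""
    at_risk = []
    for call in calls:
        profile = customers.get(call["customer_id"], {})
        if call["said_churn_phrase"] and profile.get("annual_spend", 0) >= spend_threshold:
            at_risk.append({**call, **profile})
    return at_risk

print(flag_at_risk(calls, customers))
```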

Norton: I want to echo that. I have one more example along the lines of what Paul just indicated. Take healthcare, for example. We're working with healthcare providers, and some of them are three-tier organizations. A major healthcare organization could have as many as 50 different business units. These separate business units have their own requirements for information that they want to feed to hospital systems.

Centralized structure

So you have a centralized organizational IT structure. You have a requirement of a business unit within the organization that has its own processing requirement, and then you have hospital systems that buy and share information with the business unit.

Think about that three-tiered structure and some of the component pieces that HAVEn brings to it. You have IT, which can manage some of those central systems that become the data lake or data repository, collecting years and years of historical healthcare information from the hospital systems, from the business units, and also from the broader healthcare environment that's available globally.

IT provides this ecosystem around the data repository, which needs to be secured, and that data pool needs to be governed.

Then, you combine that with information that's coming in publicly and needs to be secured. You have those corner pieces, which are natural to the Hadoop distributed system inside that data lake that keeps the repository of healthcare information.

The business unit has a requirement because it wants to be able to feed information to the healthcare providers or the hospital systems, and to collect from them as well. Their expectation of IT is that they may need an instant response. They may need a response from a medical provider in seconds, or they may look at reporting on environmental situations that are creating changes in healthcare. So they might get daily reporting or they might have half-day reporting.

Within HAVEn, you look at Vertica to drive the immediate satisfaction of the query that comes from the hospital system. Combine that with Hadoop, and combine that with the kind of data-governance models that Autonomy brings. Then, look at security policies around the sensors from patients that are being sent to that hospital system. That combination is a very powerful equation. It's going to enable that business to be very successful in terms of how it handles information and how it produces it.
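One way to picture how those pieces divide the work is a simple routing rule: queries with tight service levels go to a fast analytic store, while long-running historical questions go to a batch layer over the data lake. The Python sketch below is a toy model of that split, with made-up names standing in for the real systems; it is not how any HP product routes queries.

```python
# A toy "query router": requests with tight service levels go to a fast analytic
# store (standing in for something like Vertica), while historical, long-running
# questions go to a batch layer (standing in for a Hadoop-style data lake).

FAST_STORE = "fast analytic store"
BATCH_LAKE = "batch data lake"

def route_query(question, max_latency_seconds):
    """Pick a backend based on how quickly the business needs the answer."""
    if max_latency_seconds <= 5:
        return (FAST_STORE, f"run '{question}' against pre-refined, recent data")
    return (BATCH_LAKE, f"schedule '{question}' over full historical records")

print(route_query("current bed availability by ward", max_latency_seconds=2))
print(route_query("ten-year readmission trends by region", max_latency_seconds=3600))
```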

When we start looking at the integration of those components, that's what's driving IT, because they need that very flexible and responsive data repository that can provide the type of insight the hospital systems need from the business unit that's driving the healthcare IT organization itself.

Those are the fits even in a large enterprise, where you can take that platform and apply it in an industry sense, and it makes complete sense for that industry overall.

Gardner: Chris Selland, I think about what companies, governments, and verticals like healthcare, the leaders and innovators in those areas, can do with this. It could really radically change how they conduct their businesses, not by gut, not by instinct, not by just raw talent, but by empirical evidence that can be then reestablished and retested time after time. It strikes me that it's a fundamentally different value that HP is bringing to the market.

HP has, of course, been a very large company with a long heritage, but are we really stepping outside of the traditional role that HP has played? It sounds as if HP is becoming a business-services company, not a technology services company. Correct me if I'm wrong.

Bridging the gap

Selland: Yes and no. First of all, we do need to acknowledge that there is a need to bridge the gap between the IT organization and the business organization, and enable them to talk the same language and solve problems together.

First of all, IT has to become more of an enabler. Second, and I mentioned this earlier and I really want to play this up, it's absolutely an opportunity for our partners. HP has a number of assets, but one of our greatest assets is HP's partner network -- our partner ecosystem, our global systems integrators, our technology partners, even our services providers, our training providers, all of the companies that work in and around the global HP.

We can't know every nuance of every business at HP. So the HAVEn initiative is very much about enabling our partners to create solutions, just as we're creating them ourselves. We're using our own platform to create solutions for the core audiences that we serve, which, in many cases, are things like IT management solutions or security solutions, which are being featured and will continue to be featured.

We're going to need to get into all of these different nuances of all of these different industries. How do these companies and organizations compete with each other in particular verticals? We can’t possibly know all of that. So we're very reliant on our partners.

The great news is that we have what I believe is the world's greatest partner network, and this is very much about enabling those partners and those solutions. In many cases, those solutions will be delivered by partners, and that's what the solutions are all about as well.

Gardner: Just to drill down on that a bit, if there are these technologies that are available to these ecosystems within verticals and attacking different business problems, what's the next step with HAVEn? Now that we put together the various platforms, given the whole is greater than the sum of the parts in terms of a business value, what's the vision beyond that to making these usable, exploitable?

Are there APIs and tools or is that something also that you are going to look to the partners for, or both? How does it work in terms of the go to market?

Selland: There absolutely are APIs and tools. We need to prime the pump, to some degree, by building and creating some of our own solutions to show what can be done in the markets we serve, which we're doing, and we also have partners on board already.

If you look at the HAVEn announcements, you'll see partners like Avnet and Accenture and other partners that are already adopting and building HAVEn-based solutions. In many cases, we've started delivering to customers already.

It's really a matter of showing what can be done, building what can be built, and delivering them. I mentioned earlier the crossing-the-chasm moment we're having. The other thing that happens, when you get into this market, is you're moving from its being purely a CIO decision to where the business starts getting involved.

Great ROI

You go in and say there's great return on investment (ROI) in this big-data analytic solution we're going to enable; we're going to build it to deliver better customer loyalty, better customer retention, and lower churn. The first thing the business is going to say is, "Okay, show me the numbers, show me the money." Those are Jerry Maguire terms, and the best way to do that is to show examples of other companies that have done it.

So you run into a situation where you need to be able to show who is doing it, how they're doing it, and how they're making money with it. You've got to get that early momentum, but we're already in the process of getting it, and we've already got partners on board. So we're really excited.

Gardner: Tom Norton, what are your thoughts about my observation that this takes HP to a different plane in terms of the level of value it can bring to a business, and then perhaps some additional thoughts based on what Chris said in terms of how this fits into a value chain?

Norton: You can take two separate perspectives, but you can't separate them. In order for my group, TS, to be able to help IT transform, IT has to be aligned to that business decision anyway, or they have to be aligned to the business requirements and the workloads that business may be presenting.

For me to help to build an integration plan or to build a design for a data platform like this transformation of a data platform, I have to have some idea of what the workload requirements may be from the business. I have to know if the business is trying to do something that's going to require an immediate type of satisfaction, or they are going to do something that can be done in more of a batch format.

Those expectations of a business in terms of when they want to be presented with that business aligned information, that's going to determine short term and midterm what IT needs to do.

You can't separate those two, especially when we're starting to drive and accelerate the kind of format and the kind of workloads that businesses may need. You may get requirements from 20 different businesses and each business may have 10 different business requirements that they have in terms of the presentation of information.

So how can we get to the point where we can separate the business from the view of what IT is doing? The business shouldn't need to know about Hadoop, as Chris mentioned earlier. They shouldn't need to know how Hadoop is integrated with Vertica, integrated with Autonomy, or how the three are combined and secured, but they should have an expectation that they're going to get the information they need at the time they need it.

We really can't design a platform unless we know that spectrum and how we can create a road map for how to resolve it and how to mature it. So we have to know that. The second part, as you mentioned before, is how the business needs to access it.

Flexible technology

If the business is going to a more distributed, remote, or mobile type of workforce, or mobile access, our design requirements for IT have to account for that in the infrastructure. The technology has to be flexible enough to deliver information to those consumption formats.

If you're dealing with finance, for example, and you have a sales force selling capital investments to your largest investors, the $100-million-a-year investors, the expectation is that those salespeople can provide their customers -- probably the most important customers that finance organization has -- with information within 15-30 minutes. That's the window in which the salesperson is talking to them about what may be happening with their portfolio.

Think about how complex that can be. You have to access social media, as was brought up earlier, and be able to get information from the Twitter feed so that the salespeople can provide a meaning-based analysis of how that stock portfolio is being reflected in the market.
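A very simplified version of that meaning-based analysis might look like the Python sketch below, which averages tweet sentiment per ticker for the holdings a client cares about. The data, scores, and field names are invented for illustration and are not drawn from any real Twitter feed or HP product.

```python
# Toy aggregation of tweet sentiment by ticker symbol.
from collections import defaultdict

tweets = [
    {"ticker": "ABC", "sentiment": 0.6},
    {"ticker": "ABC", "sentiment": -0.2},
    {"ticker": "XYZ", "sentiment": -0.7},
]

def portfolio_sentiment(tweets, holdings):
    """Average sentiment for each ticker the client actually holds."""
    totals, counts = defaultdict(float), defaultdict(int)
    for t in tweets:
        if t["ticker"] in holdings:
            totals[t["ticker"]] += t["sentiment"]
            counts[t["ticker"]] += 1
    return {sym: totals[sym] / counts[sym] for sym in totals}

print(portfolio_sentiment(tweets, holdings={"ABC", "XYZ"}))
```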

To get that in that time frame of 0-30 minutes requires a different design, than someone who is going to look at market reporting trends over a 24-hour period and present that each morning. So it’s very important that we have that alignment between technology and business, and unless we can understand both, we're not going to be able to drive that road map in the direction that's going to satisfy the business requirements.

Gardner: Paul Muller, when we think about the value to the business, and we recognize that IT is in the middle between when data is analyzed and inferences are gathered, acting on those inferences and putting them into place perhaps goes back in through IT.

There are applications that need to be addressed. There are mobile devices that need to be reached. It seems to me that HP is in a unique situation now by pulling together these different data analysis types, making it available in a holistic context, but also being a provider of the means to then be actionable, to create applications, to populate applications, and to allow IT to be the traffic cop on this two-way street or multi-way street.

Tell me how HP is differentiated. Given what we've now seen with the HP Discover announcements with cloud, with converged infrastructure and with HAVEn, give us a bit more of an understanding of how HP is uniquely positioned?

Muller: Dana, you made such a great point. Insight without action is a bit like saying that you have a strategy without execution. In other words, it’s pretty close to hallucination, right?

The ability to take that insight and then reflect that into your business rapidly is critical. I have a point of view that says that almost every enterprise is defined by software these days. In other words, when you make an insight and you want to make a change, you're changing the size. If you are Mercedes, you're changing one of the 100 million lines of code in your typical S class. Some of the major based around the planet now hire more programmers than Microsoft has working on Windows today.

Most companies are defined by software. So when they do gain an insight and need to rapidly reflect that insight in the form of a new application or a new service, it's typically going to require IT.

Absolutely critical

Your ability to quickly take that insight and turn it into something a customer can see, touch, and smell is absolutely critical, and using techniques like Agile delivery, increased automation levels, and DevOps approaches is critical to being able to execute and get there.

I'd like to set up Chris' response by touching on a conversation I had with a CIO last week, where he said to me, "Paul, my problem is actually not about big data. It's great, and we've got it, but I still can't work out what to do with it. We should have a conversation about innovation and the profits of big data." So, Chris, do you want to maybe take Dana's question?

Selland: It’s really, first of all, our focus. It's not just big data, but helping our customers be successful in leveraging big data is a core focus and a core pillar of HP strategy. So first of all it’s focus.

Second of all, it's breadth. I talked about this earlier, so I don't want to repeat myself too much. The software, hardware, and converged-cloud assets, the capabilities of services, and of course the services portfolio -- all of the resources that the global HP brings to bear -- are focused on big data.

And it's also the uniqueness. Obviously, being an HP Software executive, I'm most familiar with the software. If you really look at it, none of HP's competitors has anything like Vertica. None of HP's competitors has anything like IDOL. None of HP's competitors has anything like ArcSight Logger. None of HP's competitors has the ability to bring those assets together, get them interoperating with each other, and get them solving problems and building solutions.

Then, you take our partner channel, wrap it around that, and you combine it with the power of open-source industry initiatives like Hadoop. HP very much has openness at the core of everything we're doing. We have all sorts of partners helping and supporting us around here.

I haven't even talked about technology partners, BI, or visualization partners. We're partnering with all of the major Hadoop distributions. So there is just tremendous breadth and depth of resources focused on the problem. At the end of the day, it really is about execution, because that's the other thing I talked about earlier: customers. They want to hear big ideas and they want to know how technology helps them get there, but they also want to see proof points.

Muller: Let's just build on that. Chris, maybe we'll finish on a slightly controversial note here, but it's worth talking about, and maybe it's a good segue to Tom. I was speaking to some of our listeners and met with some CIOs in South Africa a couple of weeks back. One head of manufacturing turned to me and said, "You know, Paul, I understand the big data technology is there. I can pretty much ingest this, or at least the potential is there that I can.

"What I'm not sure is, in my industry, how does it matter to me? Don’t just talk to me about technology. How can I turn that into a justifiable business case that the business will want to invest in?" And it kind of struck me that the technology in some respect is slightly ahead of our customer’s ability to think of themselves as innovators rather than as infrastructure managers.

Part of the problem

Selland: You certainly just defined part of the problem. There is no one-size-fits-all, big-data-in-a-box solution, because answering that question requires a significant understanding of the business, and it's really a consultative question, right?

You've got to have a broad enough portfolio to have the confidence and the assets to eventually solve the problem, but at the same time you have to start by understanding the problem, the industry, and the solutions. This is where our services, and our partner ecosystem, come into play. And having the breadth of the portfolio of software, hardware, and cloud services to be able to deliver on it is really what it's all about, but there is no one-size-fits-all answer to the question we just asked.

Gardner: Tom Norton, when we think about the observation that the technology is getting a bit out in front of what the businesses understand they can do with it, it sounds like a really good opportunity for a technology consultant and a technology services organization to come in. It sounds as if you have to bring together disparate parts of companies.

We talked about developers. If the people doing the analytics are developing wonderful insights, but they've never really dealt with the app dev people, and the app dev people have never really dealt with the BI people, what do we need to do to try to bring them together? In your company, how would you go about bringing them together so that, as insights develop, new ways of delivering those insights to more people and more situations are possible? I guess we're talking about cultural shifts here?

Norton: HP actually has, from a services perspective, a unique approach to this. You've seen it before in cloud, and you've seen it before in the days of IT transformation, where we started looking at that transformation experience.

HP has developed these workshops over time. They bring IT together with the business to help IT build a plan for how it's going to address the business needs and pull out from the business what the business requirements of IT will be.

It's no different now that we're in the data world. Through our services groups within HP, we have the ability, from an information management and analytics approach, to work with companies to understand the business value that they're trying to drive with information, and ideally to understand what data is available to them today that is going to provide that business-aligned information.

Through the Big Data Discovery Experience workshops, we're able to ask, "What is the business capable of doing with the data it has available today, and how can that be enhanced with alternative data sources that may fall outside of the organization today?"

As we mentioned earlier, it's that idea of what can be done. What's the art of the possible here that is going to provide value to the organization? Through services, we can take that all the way through and say, once you have the idea, you have a road map for analytical value and for managing the information that is, or could be made, available to the business.

Then, you can align that, as I mentioned before, through IT strategies where you do the same thing. You align the business to IT and ask how IT is going to be able to enable those actions that the business wants to take on that information.

Entire lifecycle

So there's an entire lifecycle, from raw-material data to business-aligned and business-valued information, through a services approach, through a consultative approach, that HP is able to bring to our customers.

That's unique, because we have the ability to go from that upfront strategy around the business value of information to the collection and refinement of the raw material, meeting in the middle in this big-data ecosystem. HP can supply that from end to end, all the way from software to hardware to services, which is very unique.

Muller: I’ve got to summarize this by saying that the great part about HAVEn is that you can pretty much answer any question you could think of. The challenge is whether you can think of smart questions to ask.

Gardner: I think that’s exactly the position that businesses want to be in -- to be able to think about what the questions are to then propel their businesses forward.

Selland: Let me give you a tangible example that I was reading about not long ago in The Wall Street Journal. They were talking about how the airline industry is starting to pay attention to social media. Paul talked before about intersections. What do we mean by intersections?

This article in The Wall Street Journal was talking about how airlines are starting to pay attention to social media, because customers are tweeting when they're stuck at the airport. My flight is delayed, and I am upset. I'm going to be late to go visit my grandmother -- or something like that.

So somebody tweets. Paul tweets "I'm stuck at the airport, my flight is delayed and I am going to be late to grandma’s house." What can you really do about that besides respond back and say, "Oh, I'm sorry. Maybe I can offer you a discount next time," or something like that? But it doesn’t do anything to solve the problem.

Think about the airline industry's customer loyalty programs or frequent-flyer programs. Frequent-flyer programs were among the first customer loyalty programs. They have all this traditional data as well, which some might call customer relationship management (CRM) data. In the airline industry, they call it reservation systems.

I gave the example before about a jet engine throwing off two terabytes of data per hour. By the way, on any flight that I'm on, I want that to be pretty boring data that just says all systems are go, because that’s what you want.

At the same time, you don’t want to throw it away, because what if there are blips, or what if there are trends? What if I can figure out a way to use that to do a better job of doing predictive maintenance on my jets?

Better job

By doing a better job of predictive maintenance on my jets, I keep my flights on time. By keeping my flights on time, then I do a better job of keeping my customers satisfied. By keeping my customers more satisfied, I keep them more loyal. By keeping my customers more loyal, I make more money.
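To show what a first step toward that kind of predictive maintenance might look like, here is a toy Python sketch that flags engine-sensor readings drifting well outside their recent baseline. The readings, window, and threshold are invented for illustration; real engine-health monitoring is far more sophisticated, and this is not how any airline or HP system works.

```python
# Flag sensor readings that sit far outside their rolling baseline.
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Return indexes of readings far outside the rolling baseline."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

temps = [612, 611, 613, 612, 614, 613, 612, 611, 613, 612, 648]  # sudden spike at the end
print(flag_anomalies(temps))
```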

So all of this stuff starts to come together. You think about the fact there is a relationship between these two terabytes per hour of sensor data that’s coming off the sensors on the engine, and the upset customers, and social media tweeting in the airport. But if you look at the stuff in a stove-piped fashion, we don’t get any of that.

That’s just one example, and I use that example, because most of us are businesspeople and get stuck in airports from time-to-time. We can all relate to it, but there’s a variant of that kind of example in any and every industry.

How do we start to bring this stuff together? This stuff does not sit in a single database and it’s not a single type of structure and it’s coming in all over the place. How do I make sense of it?

As Paul said very well, ask smart questions, figure out the big picture, and ultimately make my organization more successful, more competitive, and really get to the results I want to get to. But really, it’s a much, much bigger set of questions than just "My database is getting really big. Yesterday, I had this many terabytes and I am adding more terabytes a day." It’s a lot bigger than that.

We need to think bigger, and you need to work with an organization that has the breadth of resources, not just inside the organization but across its partnerships, to be able to do that. HP has the unmatched capability to do that, in my view, and that's why this HAVEn initiative is so exciting and why we have such great expectations for it.

Gardner: What really jumped out at me in listening to the announcements was that so often in technology we get products and services that allow us to do things faster, better, and cheaper, all of which is very important. But what's quite new here, and different with HAVEn, is that we can now start enabling organizations to do things they simply could not have done before, or in any other way.

It's really opening up, to me, a new chapter in business services enablement, both internal services and external benefits and services. So, the last word to each of you, quickly, on why this HAVEn announcement is something unique and really more than just a technology announcement. Let's start quickly with you, Tom Norton.

Norton: I think it's interesting, because we just talked about integration. For customers, with data as complex as it can be, you need models. HAVEn gives us that platform model, which is scalable, flexible, secure, and integrated. It's what customers need to be able to react quickly, what IT needs to stay relevant, and what the business needs to know it's going to have a predictable and responsive platform that it can base its analytics on. It's an answer to a very difficult question, and very impactful.

Gardner: Paul Muller, why does this go beyond the faster, better, cheaper variety of announcements?

Fundamental difference

Muller: It’s the ability to bring together a set of technologies that allow you to look at all the data all of the time in real-time. I think that that’s the fundamental difference. As I said, shifting the discussion from why can’t we do it to what do we need to do next is an exciting possibility.

Gardner: Last word to you, Chris Selland, why is this going beyond repaving cow paths and charting new territory?

Selland: I just gave a long answer. So I'll give a short one. It’s really about the future, the competitiveness of the business, and IT becoming an enabler for that. It’s about the CIO, really having a chance to play a key role in driving the strategy of the business, and that’s what all CIOs want to do.

We have these inflection points in the marketplace. The last one was about 12 years ago, when the whole e-business thing came along. And, while I just used a competitor's tag line, it changed everything. The web did change everything. It forced businesses to adapt, but it also enabled a lot of businesses to change how they do business, and they did.

Now, we're at another one, a very critical inflection point. It really does change everything, and there is still some skepticism out there. Is this big-data thing real? We think it’s very real and we think you're going to see more-and-more examples. We're working with customers today or showing some of those examples how it really does change everything.

Gardner: Great. I'm afraid we'll have to leave it there. We've been exploring the vision and implications of the HAVEn news that's been delivered here at Discover, and we've learned more about HP's strategy for businesses to gain actionable intelligence from a universe of sources and data types. If you want more information on HAVEn, you can find it online by searching for HP Discover 2013 or HP HAVEn.

I'd like to now wrap up by thanking our co-host, Chief Evangelist at HP Software, Paul Muller. Thanks again so much, Paul.

Muller: It’s not the size; it’s how you use it, when it comes to big data, mate.

Gardner: Also a big thank you to Chris Selland, Vice President of Marketing at HP Vertica. Thank you, Chris.

Selland: It’s great to be here, thanks.

Gardner: And lastly, a thank you to Tom Norton, Vice President of Big Data Technology Services at HP. Thank you, Tom.

Norton: Thank you very much, Dana; it’s been a pleasure.

Gardner: Great. And also, of course, the biggest thank you to our audience for joining us for this special HP Discover Performance podcast coming to you from the HP Discover 2013 Conference in Las Vegas.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HP-sponsored discussions.

Thanks again for listening and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how HP's new HAVEn Initiative puts the power of big data in the hands of companies. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Thursday, May 23, 2013

Ariba Product Roadmap Points to New Value From Cloud Data Analytics, Mobile Support, and Managed Services Procurement

Transcript of a BriefingsDirect podcast on what's ahead for Ariba products and services in helping companies collaborate on procurement, sales, and improving business productivity.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Ariba, an SAP Company.

Dana Gardner: Hello, and welcome to a special BriefingsDirect podcast series coming to you from the 2013 Ariba LIVE Conference in Washington, D.C.

We're here in the week of May 6 to explore the latest in collaborative commerce and to learn how innovative companies are tapping into the networked economy. We'll see how they are improving their business productivity and sales, along with building far-reaching relationships with new partners and customers.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I'll be your host throughout the series of Ariba-sponsored BriefingsDirect discussions.

Our next interview focuses on the product and services roadmap and improved strategy insights unveiled by Ariba, an SAP company, at the recent Ariba LIVE Conference.

Here to explain the latest news and offer some forthcoming innovation insights for how Ariba will be addressing its customers’ needs as well as its partners and ecosystem requirements is Chris Haydon, Vice President of Solutions Management for Procurement, Finance, and Network at Ariba. Welcome, Chris. [Disclosure: Ariba, an SAP company, is a sponsor of BriefingsDirect podcasts.]

Chris Haydon: Thank you, Dana. Nice to be with you.

Gardner: Before we look to the future roadmap, maybe we should define the milestones of the current roadmap. Could you characterize where we are now with Ariba in terms of some of the big news here at LIVE?

Haydon: I'm happy to share that. We have some really exciting innovation coming in the near term to Ariba, as you said, in a couple of areas. First, let's talk about Network RFQ or the Spot Buy. We think this is part of the undiscovered country, where, according to The Hackett Group, 40-plus percent of spend is not sourced.

By linking this non-sourced spend to the Ariba Network, we think we're going to be able to address a large pain point for our buyers and our sellers. Network RFQ or Spot Buy is a near-term solution that we announced here at LIVE, and we're bringing that forward over the next six months.

The next exciting innovation is at the other end of the process. That's a solution we call AribaPay. AribaPay is what we think is a game-changing solution that delivers the rich remittance and invoice information that's only available from the Ariba Network, through a secure, global payment infrastructure.

And I'll talk a little bit later about some of the more exciting things we're doing in services procurement.

Down market

Gardner: It seems to me, Chris, that, in a sense, you're going to the mid-market. You're creating some services with Spot Buy that help people in their ad-hoc, low-volume purchasing.

You're providing more services types of purchasing capabilities, maybe for those mid-market organizations or different kinds of companies like services-oriented companies. And, you're also connecting via Dell Boomi to QuickBooks, which is an important asset for how people run small businesses. Am I reading this right? Are we expanding the addressable market here?

Haydon: We are, and that's an excellent point. We look at it two ways. We're looking to address all commerce. Things like Spot Buy, AribaPay, services procurement, and estimate-based services are really addressing the breadth of spend, and that applies at the upper end and the lower end.

There are important pieces that you touched on, especially with our Dell Boomi partnership and the announcement here for QuickBooks. We want to make it accessible to grow the ecosystem and to make the collaboration across the network as frictionless as possible.

With Dell Boomi announcing QuickBooks support, suppliers with that back-end system can take part in all of the collaborative business processes on the Ariba Network, and we're really only just getting started.

There is a massive ecosystem out there with QuickBooks, but when we have a look around, there are more than 120 prominent backend systems. So it's not just the SAPs, the Oracles, the JD Edwards, and Lawsons. It's the QuickBooks and the Intuits. It's the Great Plains of the world.

Gardner: So for those who may have thought that procurement vis-à-vis the cloud, procurement vis-à-vis on-premises back-end business application system, was just for the larger enterprises, it's not so.

Haydon: Not so at all. That’s what we really want to think about. Think about it as back-end agnostic. We want our customers on both the buy-side and the sell-side of their partners to make their own choices. It's really their own choice of deployment.

If they want to take an integrated business-to-business (B2B) channel, they can. If they want to come to a portal, they can. If they want to have an extract that goes into their own customized system, they can do that as well, or all of the above at the same time, and really just taking that process forward.

Gardner: We've seen a larger market. We're also seeing the notion of one-stop shopping with Ariba, because with AribaPay people can now begin to execute payments through the same cloud, through which they would organize and manage procurement and spend. Tell me how that works? Is this a credit card, a debit card? Is this a transactional banking interface? What does AribaPay really mean?

Brand new

Haydon: Number one, it's brand-new. First, let's talk about the problems that we had, and how we think we are going to address them. More than 40 percent of payments in corporate America are still check based. Check-based payments present their own problems, not just for the buyers, but also for the sellers. They don’t know when they're going to get paid. And when they are getting paid, how do they reconcile what they're actually getting paid for?

AribaPay is a new service. It's not a P-Card. It's leveraging a new type of electronic payment through an ACH-styled channel. It enables buyers to take 100 percent of their payments through the Ariba Network. It lets suppliers opt in and move from a paper-based check payment channel to an electronic channel that is married -- and this is the interesting value proposition for the network -- with their rich remittance information.
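
To picture what marrying a payment with rich remittance detail means in practice, consider a minimal sketch -- the field names, amounts, and function below are invented assumptions, not AribaPay's actual data model -- of how a supplier-side system might reconcile one electronic payment against the invoices it covers:

```python
# Hypothetical illustration only -- structures are assumptions, not the AribaPay model.
# The point: because the payment arrives with structured remittance detail, each
# invoice can be matched and closed automatically instead of being reconciled
# from a paper check stub.

from decimal import Decimal

payment = {
    "payment_id": "PAY-1001",
    "amount": Decimal("14500.00"),
    "remittance": [  # structured detail carried along with the payment
        {"invoice_id": "INV-552", "amount_paid": Decimal("9000.00")},
        {"invoice_id": "INV-553", "amount_paid": Decimal("5500.00")},
    ],
}

open_invoices = {
    "INV-552": Decimal("9000.00"),
    "INV-553": Decimal("5500.00"),
    "INV-554": Decimal("1200.00"),
}

def reconcile(payment, open_invoices):
    """Match remittance lines to open invoices; return closed and disputed items."""
    closed, disputed = [], []
    for line in payment["remittance"]:
        expected = open_invoices.get(line["invoice_id"])
        if expected is not None and expected == line["amount_paid"]:
            closed.append(line["invoice_id"])
        else:
            disputed.append(line["invoice_id"])
    return closed, disputed

closed, disputed = reconcile(payment, open_invoices)
print("Closed:", closed, "Disputed:", disputed)
# Closed: ['INV-552', 'INV-553'] Disputed: []
```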

So that’s the value. We think it's very differentiated. We're going to be leveraging a large financial institution provider who has great breadth and penetration, not just here in the United States, but globally as well.

Gardner: And that would be the Discover Financial Services?

Haydon: That's correct -- Discover Financial Services. We announced that at LIVE this month, and I know they're as excited as we are. They have the wherewithal to bring the credibility and the scale to the payments channel, while Ariba has the credibility and the scale of the supply base and the commercial B2B traffic. We think that one plus one equals three, and that it's a game changer in electronic payments.

Gardner: Moving on to the future or vision that you're painting for the attendees here at LIVE, you've been talking about three buckets: network enhancements -- the Ariba Network -- application enhancements, and infrastructure enhancement. So let's start with network. What should we expect in the roadmap of the next two or three years for the Ariba Network?
The ability to apply your own business rules and logic to those collaborations is massive.

Haydon: We're really excited about the Ariba Network and what we are going to do there. When we think about the network, we've got four or five themes. One piece of big news is that we're getting into and supporting supply chain and logistics processes, and adding that level of collaboration. Today, we have 10 or 11 types of collaborations that you can do on the Ariba Network, like an order, an invoice, and so on.

Over the next several releases, we're going to be more than doubling that amount of collaboration that you can do between trading partners on the network. That’s exciting, and there are things like forecasting and goods receipt notices.

I won’t go into the specifics of every single transaction, but think about doubling the amount of collaboration that you can do and the visibility in that. The ability to apply your own business rules and logic to those collaborations is massive.

The second thing we're doing on the network is adding a new spend category, which we call services invoicing. This is estimate-based spend and this is another up market, down market, broad approach, in which there are a whole heap of services.

This is more of an estimate-based style spend where you don’t necessarily know the full cost of an item until you finish it. Whether you're drilling an oil well or constructing a building, there are variations there. So we're adding that capability into the network.

User interface

Another area is what we call Network 2.0, and this is extending and changing not just the user interface, but extending and adding more intrinsic core capabilities to the network. Ariba has a number of network assets and we think it's important to have a single network platform globally. It's the commerce internet, the network.

So our Network 2.0 program is a phased delivery of extending the core capabilities of the Ariba Network over the next couple of years -- order status, results and requests, goods receipt notices, advanced shipping notices, more invoice capability -- and just growing that out globally.

Last but not least is just more and more supply collaboration, focusing on the ability for suppliers to more easily respond, comply, and manage their profiles on the Ariba Network.

Gardner: So it's more visibility into these processes across organizational boundaries, more ability to leverage each other's data and to hook up processes, which, of course, all means much more efficient business, lower cost, and agility. That services procurement capability, where you don't have to know the end price but you can start the process nonetheless, brings in that agility very well. Applications themselves, what should we expect there?

Haydon: We've got a whole raft of capability coming across that whole application suite. We can break that into two or three areas. In our sourcing, contract management, supplier information management, and supplier performance management suite, we're doing functionality enhancements, and that's one of the exciting pieces.
We're introducing a new look and feel, a consumer-like look and feel, to our catalog and our search engine.

In the spend visibility area, we're going to be leveraging the SAP In-Memory technology HANA. What we are doing there is early for us, but there are some very exciting, encouraging results in terms of the speed and the performance we've heard about from SAP. Running our own technology on that and seeing the results is exciting for us and will be exciting for our customers.

That's one interesting specific area in spend visibility that we're starting to progress. And there is good core enhancement in our contract management and sourcing areas -- rich core-functional requirements, user interface, better integration layers -- just making that whole process more seamless.

As we move more into our procurement suite, we're introducing a new look and feel, a consumer-like look and feel, to our catalog and our search engine. A more Amazon-style search touches more users than anything else; as you can imagine, that's how they expect to requisition. So making that a friendly UI, and taking that UI or user experience through to the other products, is fantastic.

One of the other most exciting areas for us is services procurement, a very large investment for us. Services procurement is our application to be able to support temporary or contingent labor, statement of work or consulting labor, print, marketing and also light industrial. This really is one of the underpinning differences for Ariba, and this is where we're bringing it together.

We're not just building applications any more. We're building network-centric applications or network-aware applications. It means that when we launch our new services procurement solution, not only are we going to have a brand-new, refreshed, modern user interface, which is very important.

Differential insights

We're also going to be able to leverage the power of the Ariba Network to provide differential insights into standard day-to-day services procurement on-boarding. That means looking at average labor rates in the area for the type of service that you're buying, and using the network intelligence to give you advice, to give you instruction, and to help you manage exceptions on the network.

For example, say you want to put in $70 an hour as the rate for a web developer. Based on the network intelligence on what like-minded peers are doing -- and of course, this is all anonymized and aggregated in the appropriate way -- we're able to say, "You're out of market. It's $75 in this market." So if you put in $70, you're not going to be able to do that. That's just one example of the intelligence in the network for services procurement.
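
As a rough sketch of that kind of rate check -- the benchmark figure, tolerance, category, and function name below are assumptions for illustration, not Ariba's actual logic -- the network-side validation amounts to comparing a proposed rate against an anonymized, aggregated benchmark for the category and region:

```python
# Hypothetical sketch of a market-rate check on a services requisition.
# The benchmark, tolerance, and category names are assumptions; real network
# intelligence would draw on anonymized, aggregated peer data.

ASSUMED_BENCHMARKS = {
    ("web developer", "US-Northeast"): 75.0,  # median hourly rate, assumed
}

def check_rate(category, region, proposed_rate, tolerance=0.05):
    """Warn when a proposed hourly rate falls outside the benchmarked band."""
    benchmark = ASSUMED_BENCHMARKS.get((category, region))
    if benchmark is None:
        return "No benchmark available for this category and region."
    low, high = benchmark * (1 - tolerance), benchmark * (1 + tolerance)
    if low <= proposed_rate <= high:
        return f"OK: {proposed_rate}/hr is within market ({benchmark}/hr median)."
    return (f"Out of market: {proposed_rate}/hr proposed, "
            f"peers are paying about {benchmark}/hr in this market.")

print(check_rate("web developer", "US-Northeast", 70.0))
# Out of market: 70.0/hr proposed, peers are paying about 75.0/hr in this market.
```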

Gardner: What’s really interesting to me is all of your vision so tightly aligns with the mega trends of today. There's cloud computing. You talked about the collaboration, the network, and the benefits of that. There's big data. You've talked about the analytics, the ability to bring more data into these processes, across the processes, even across organizational boundaries, rather than to be siloed not only within their own silos, but in each individual company's silos.

Furthermore, the big data trend to me is manifested here by the fact that you're recognizing that data as a definition has shifted. Data used to be an output of an app; the app was primary, and the data was secondary. We've seen that flip, where the data is the app, and we're able to take the data, use it, and apply it across more processes, and it becomes the app itself. So there's kind of a merging going on, and you're certainly on top of that.
We're going to be able to leverage the power of the Ariba Network to provide differential insights into standard day-to-day services procurement on-boarding.

Lastly, there's mobility, and we haven't talked about that too much, but it seems that your app interfaces, your software-as-a-service (SaaS) and cloud delivery models are taking these processes right down to the individual at the mobile moment, where they are in business, when they need to either spot buy, engage in a service, and then even buy and execute with pay.

What's the last mega trend of the day? Social? You're into that too, because we are seeing more collaboration in the network.

One last mega trend is being able to take this one step further, which is to be proactive and gain more insight into processes in business environments -- so analytics, but at a higher level. What should we expect when we look at the resources of SAP, the in-memory technology improvements with HANA, your being more comprehensive, and the expanding addressable market? You're able to bring together a tremendous amount of data and, exercising the proper privacy and access controls, start to deliver strategic insights back to your customers. Tell me a little bit about the potential.

Haydon: Absolutely. I don’t think we touched on that. When we think about the networked economy, the networked apps, the network-centric apps, the network itself, one should be able to connect any demand generating or receiving system. We touched on that with Dell Boomi, but it's seamless integration across the piece. We want to be comprehensive, which is adding more collaboration.

Critical mass

The interesting thing about this collaboration is that it starts driving, at some level, a critical mass of data. The trend is that the network is intelligent. It's actually able to piece together not just the transaction itself, but who you are. We're quite excited, because this is the massive differentiator of the network. You talked about apps. We have not just the transactional data, but also the master data, and we can take in other sources of information as well.

Gardner: Say weather or location?

Haydon: Weather, location, stock reports, SEC filings, Dun & Bradstreet ratings, whatever you like, to intersect.

So this data, plus context, gives you information. With SAP, there's a very exciting technology, SAP Supplier InfoNet, that is able to leverage network data. Today, it has over 160 feeds, and it's smart intelligence: it can automatically take those feeds and contextualize them.

And that's the real thing we're trying to do -- knowing who the user is, knowing the business process they are trying to execute, and also knowing what they are trying to achieve. And it's bringing that information to the point of demand to help them make actionable, intelligent, and sometimes predictive decisions.
The trend is that the network is intelligent. It's actually able to piece together not just the transaction itself, but who you are.

Where we would like to go is this -- heaven forbid there is another tsunami, but let's just work through that use case. You get a news alert that there is a tsunami in Japan again, a terrible event. What if you knew that, and what if 80 percent of your core, raw material inputs came from there? Just having that alert notify you that you might well have a supply problem -- what are you going to do?

And by the way, here are three or four other suppliers who can supply this material to you, and they're available on the network. What is that worth? Immeasurable.
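
A hedged sketch of that use case follows -- the suppliers, commodities, and threshold are invented for illustration, and this is not Supplier InfoNet -- showing how an external alert could be cross-referenced against supplier master data to flag exposure and surface alternates already on the network:

```python
# Hypothetical sketch: cross-reference a news alert against supplier exposure.
# Supplier data, commodity names, and the threshold are invented for illustration.

suppliers = [
    {"name": "Supplier A", "region": "Japan",  "commodity": "resin", "share": 0.55},
    {"name": "Supplier B", "region": "Japan",  "commodity": "resin", "share": 0.25},
    {"name": "Supplier C", "region": "Brazil", "commodity": "resin", "share": 0.20},
]

network_alternates = {  # other sellers of the same commodity on the network
    "resin": ["Supplier D (Vietnam)", "Supplier E (Mexico)", "Supplier F (Spain)"],
}

def assess_alert(alert_region, commodity, threshold=0.5):
    """Flag commodities whose exposure to the affected region exceeds a threshold."""
    exposure = sum(s["share"] for s in suppliers
                   if s["region"] == alert_region and s["commodity"] == commodity)
    if exposure < threshold:
        return f"{commodity}: exposure {exposure:.0%}, below threshold."
    alternates = network_alternates.get(commodity, [])
    return (f"ALERT: {exposure:.0%} of {commodity} supply comes from {alert_region}. "
            f"Possible alternates on the network: {', '.join(alternates)}")

print(assess_alert("Japan", "resin"))
# ALERT: 80% of resin supply comes from Japan. Possible alternates on the network: ...
```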

Gardner: I think that's a very interesting roadmap for the next few years. I'm interested in coming back to Ariba LIVE next year to learn how you're executing on that.

Clearly, a lot of the trends, as I say, are aligned well with where you are and put some wind in your sails. I also think that Ariba and SAP together are in the catbird seat, in the right place to extend these values up and down the supply chain, into new markets, and across different aspects of business, like business continuity and even project and portfolio management -- getting to where people are working through their mobile devices. So congratulations on that.

I'm afraid we'll have to leave it there. We've been learning more about the product and services roadmap and improved strategy from Ariba, an SAP company, here at the Ariba LIVE Conference.

So please join me in thanking our guest, Chris Haydon, Vice President of Solutions Management for Procurement, Finance and Network at Ariba. Thanks, Chris.

Haydon: Thank you.

Gardner: And thanks to our audience for joining this special podcast coming to you from the 2013 Ariba LIVE Conference in Washington D.C.

I'm Dana Gardner; Principal Analyst at Interarbor Solutions, your host throughout this series of Ariba sponsored BriefingsDirect discussions. Thanks again for joining, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: Ariba, an SAP Company.

Transcript of a BriefingsDirect podcast on what's ahead for Ariba products and services in helping companies collaborate on procurement, sales, and improving business productivity. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.


Thursday, May 09, 2013

Thomas Duryea's Journey to Cloud Part 2: Helping Leading Adopters Successfully Solve Cloud Risks

Transcript of a BriefingsDirect discussion on how a stepped approach helps an Australian IT service provider smooth the way to cloud benefits at lower risk for its customers.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Gardner
Our latest podcast discussion centers on how a leading Australian IT services provider, Thomas Duryea Consulting, has made a successful journey to cloud computing.

We'll learn how a cloud-of-clouds approach provides new IT services for Thomas Duryea's many Asia-Pacific region customers. Our discussion today continues a three-part series on how Thomas Duryea, or TD, designed, built and commercialized an adaptive cloud infrastructure.

The first part of our series addressed the rationale and business opportunity for TD's cloud-services portfolio, which is built on VMware software. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

This second installment focuses on how a variety of risks associated with cloud adoption and cloud use have been identified and managed by actual users of cloud services.

Learn more about how adopters of cloud computing have effectively reduced the risks of implementing cloud models. Here to share the story on this journey, we're joined once again by Adam Beavis, General Manager of Cloud Services at Thomas Duryea in Melbourne, Australia.
The question that many organizations keep coming back with is whether they should do cloud computing.

Welcome back, Adam.

Adam Beavis: Thank you, Dana. Pleasure to be here.

Gardner: Adam, we've been talking about cloud computing for years now, and I think it's pretty well established that we can do cloud computing quite well technically. The question that many organizations keep coming back with is whether they should do cloud computing. If there are certain risks, how do they know which risks are important? How do they get through that? What are you learning so far at TD about risk and how your customers face it?

Beavis: People are becoming more comfortable with the cloud concept as we see cloud becoming more mainstream, but we're seeing two sides to the risks. One is the technical risks, how the applications actually run in the cloud.

Moving off-site

What we're also seeing -- more at a business level -- are concerns like privacy, security, and maintaining service levels. We're seeing that pop up more and more, where the technical validation of the solution gets signed off from the technical team, but then the concerns begin to move up to board level.

We're seeing intense interest in the availability of the data. How do they control that, now that it's been handed off to a service provider? We're starting to see some of those risks coming more and more from the business side.

Gardner: I've categorized some of these risks over the past few years, and I've put them into four basic buckets. One is the legal side, where there are licenses and service-level agreements (SLAs), issues of ownership, and permissions.

The second would be longevity. That is to say, will the service provider be there for the long term? Will they be a fly-by-night organization? Are they going to get bought and maybe merged into something else? Those concerns.

The third bucket I put them in is complexity, and that has to do with the actual software, the technology, and the infrastructure. Is it mature? If it's open source, is there a risk for forking? Is there a risk about who owns that software and is that stable?
One of the big things that the legal team was concerned about was what the service level was going to be, and how they could capture that in a contract.

And then last, the long-term concern, which always comes back, is portability. You mentioned that about the data and the applications. We're thinking now, as we move toward more software-defined data centers, that portability would become less of an issue, but it's still top of mind for many of the people I speak with.

So let's go through these, Adam. Let's start with that legal concern. Do you have any organizations that you can reflect on and say, here is how they did it, here is how they managed the licensing and IP-control risks?

Beavis: The legal one is interesting. As a case study, there's a not-for-profit organization for which we were doing some initial assessment work, where we validated the technical risk and evaluated how we were going to access the data once the information was in a cloud. We went through that process, and that went fine, but obviously it then went up to the legal team.

One of the big things that the legal team was concerned about was what the service-level agreement was going to be, and how they could capture that in a contract. Obviously, we have standard SLAs, and being a smaller provider, we're flexible with some of those service levels to meet their needs.

But the one that they really started to get concerned about was data availability ... if something were to go wrong with the organization. It probably jumps into longevity a little bit there. What if something went wrong and the organization vanished overnight? What would happen with their data?

Escrow clause

That's where we see legal teams getting involved and starting to put in things like the escrow clause, similar to what we had with software as a service (SaaS) for a long time. We're starting to see organizations' legal firms focus on doing these, and not just for SaaS -- but infrastructure as a service (IaaS) as well. It provides a way for user organizations to access their data if provider organizations like TD were to go down.

Beavis
So that's one that we're seeing at the legal level. Around the terms and conditions, once again being a small service provider, we have a little more flexibility in what we can provide to the organizations on those.

Once our legal team sits down and agrees on what they're looking for and what we can do for them, we're able to make changes. With larger organizations, where SLAs are often set in stone, there's no flexibility about making modifications to those contracts to suit the customer.

Gardner: Let's pause here for a second and learn more about TD for those listeners who might be new to our series. Tell us about your organization, how big you are, and who your customers are, and then we'll get back into some of these risks issues and how they have been managed.

Beavis: Traditionally, we came from a system-integrator background, based on the east coast of Australia -- Melbourne and Sydney. The organization has been around for 12 years and had a huge amount of success in that infrastructure services arena, initially with VMware.
Being a small service provider, we have a little more flexibility in what we can provide to the organizations.

The company then heavily expanded into the enterprise information systems area. We still have a large focus on infrastructure, and more recently, cloud. We've had a lot of success with the cloud, mainly because we can combine it with managed services.

We go to market with cloud. It's not just a platform where people come and dump data or an application. A lot of the customers that come into our cloud have some sort of managed service on top of that, and that's where we're starting to have a lot of success.

As we spoke about in part one, our customers drove us to start building a cloud platform. They can see the benefits of cloud, but they also wanted to ensure that for the cloud they were moving to, they had an organization that could support them beyond the infrastructure.

That might be looking after their operating systems, looking after some of the applications we specialize in, such as Citrix, or looking after their Microsoft Exchange servers once they move to the cloud, and then attaching those applications. That's where we are with the cloud at the moment.

Gardner: Just quickly revisiting those legal issues, are you finding that this requires collaboration and flexibility from both parties -- finding the road that assuages risks for one party but protects the other? Is this a back-and-forth activity? This surely requires some agility, but also some openness. Tell me about the culture at TD that allows you to do that well.

Personality types

Beavis: It does, because we're dealing with different personality types. The technical teams understand cloud and some love it and push for it. But once you get up to that corporate board level, the business level, some of the people up there may not understand cloud -- and might perceive it as more of a risk.

Once again, that's where that flexibility of a company like TD comes in. Our culture has always been "customers first," and we build the business around the longevity of their licenses. That's one of the core, underlying values of TD.

We make sure that we work with customers, so they are comfortable. If someone in the business at that level isn't happy, and we think it might have been the contract, we'll work with them. Our legal team will work with them to make sure we can iron that out, so that when they move across to cloud, everybody is comfortable with what the terms and conditions are.

Gardner: Moving toward this issue of longevity -- I suppose stability is another way to look at it -- is there something about the platform and industry-standard decisions that you've made that helps your customers feel more comfortable? Do they see less risk because, even though your organization is one organization, the infrastructure is broader, and there's some stability that comes to the table with that?

Beavis: Definitely. Partnering with VMware was one of our core decisions, because our platform is end-to-end standard VMware everywhere. It really gives us an advantage in addressing that risk when organizations ask what happens if our company were no longer around, or if they're not happy with the service.
It's something that SaaS organizations have been doing for a long time, and we’re only just starting to see it more and more now when it comes to IaaS.

The great thing is that within our environment -- and it's one part of VMware’s vision -- you can then pick up those applications and move them to another VMware cloud provider. Thank heaven, we haven't had that happen, and we intend it not to happen. But organizations understand that, if something were to go wrong, they can move to another service provider without having to re-architect those applications or make any major changes. This is one area where we're getting well around that longevity risk discussion.

Gardner: Any examples come to mind of organizations that have come to you with that sort of a question? Is there any sort of an example we can provide for how they were reducing the risk in their own minds, once they understood that extensibility of the standard platform?

Beavis: Once again, it was a not-for-profit organization recently where that happened. We documented the platform and then advised them on the escrow organizations, so that if something were to happen to TD, they would have an end-to-end process for getting their data and having it restored on another cloud provider -- all running on common VMware infrastructure.

That made them more comfortable with what we were offering -- the fact that there was a way out and that their data would not disappear. As I said, it's something that SaaS organizations have been doing for a long time, and we’re only just starting to see it more and more now when it comes to IaaS and cloud hosting.

Gardner: Now the converse of that would be that some of your customers who have been dabbling in cloud infrastructure, perhaps open-source frameworks of some kind, or maybe they have been integrating their own components of open-source available software, licensed software. What have you found when it comes to their sense of risk, and how does that compare to what we just described in terms of having stability and longevity?

More comfortable

Beavis: Especially in Australia, we probably have 85 percent to 90 percent of organizations with some sort of VMware in their data center. They no doubt seem to be more comfortable gravitating to some providers that are running familiar platforms, with teams familiar with VMware. They're more comfortable that we, as a service provider, are running a platform that they're used to.

We'll probably talk about the hybrid cloud a bit later on, but that ability for them to still maintain control in a familiar environment, while running some applications across in the TD cloud, is something that is becoming quite welcome within organizations. So there's no doubt that choosing a common platform that they're used to working on is giving them confidence to start to move to the cloud.

Gardner: Do you have any examples of organizations that may have been concerned about platforms or code forking -- or of not having control of the maturity around the platform? Are there any real-life situations where the choice had to be made, weighing the pros and cons, but then coming down on the side of the established and understood platform?

Beavis: Some organizations aren’t promoting what their platform is -- it could be built on OpenStack or other platforms -- and we're not quite sure what they're running underneath.

We've had some customers say that some service providers aren’t revealing exactly what their platform is, and that was a concern to them. So it's not directed to any other platforms, but there's no doubt that some customers still want to understand what the underlying infrastructure is, and I think that will remain for quite a while.
As they are moving into cloud for the first time, people do want to know what that platform sitting there underneath is.

At the moment, as they are moving into cloud for the first time, people do want to know what that platform underneath is.

It also comes down to knowing where the data is going to sit as well. That's probably the big one we’re seeing more and more. That's been a bit of a surprise to me, the concerns people certainly have around things like data sovereignty and the Patriot Act. People are quite concerned about that, mainly because their legal teams are dictating to them where the data must reside. That can be anything from being state based or country based, where the data cannot leave the region that's been specified.

Gardner: I suppose this is a good segue into this notion of how to make your data, applications, and the configuration metadata portable across different organizations, based on some kind of a standard or definition. How does that work? What are the ways in which organizations are asking for and getting risk reduction around this concept of portability?

Beavis: Once again, it's about having a common way that the data can move across. The basics come into that hybrid-cloud model initially, like how people are getting things out. One of the things that we see more and more is that it's not as simple as people moving legacy applications and things up to the cloud.

To reduce that risk, we're doing a cloud-readiness assessment, where we come in and assess what the organization has, what their environment looks like, and what's happening within the environment, running things like the vCenter Operations tools from VMware to right-size those environments to be ready for the cloud.
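
As a rough illustration of the right-sizing idea -- this is not vCenter Operations, and the utilization samples, headroom factor, and function name below are assumptions -- a readiness assessment boils down to comparing allocated resources against observed peaks and trimming the difference before the workload is priced for the cloud:

```python
# Hypothetical right-sizing sketch. Real assessments use tools such as
# vCenter Operations; the utilization data and headroom factor here are invented.

def right_size(vm, headroom=1.3):
    """Suggest vCPU/RAM sizing from observed peaks plus a safety margin."""
    peak_cpu = max(vm["cpu_usage_pct"]) / 100 * vm["vcpus"]
    peak_ram = max(vm["ram_usage_gb"])
    return {
        "name": vm["name"],
        "suggested_vcpus": max(1, round(peak_cpu * headroom)),
        "suggested_ram_gb": max(1, round(peak_ram * headroom)),
    }

vm = {
    "name": "legacy-app-01",
    "vcpus": 8,
    "ram_gb": 32,
    "cpu_usage_pct": [12, 18, 25, 22],   # sampled monthly peaks (assumed)
    "ram_usage_gb": [6.0, 7.5, 8.2, 7.9],
}

print(right_size(vm))
# {'name': 'legacy-app-01', 'suggested_vcpus': 3, 'suggested_ram_gb': 11}
```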

Old data

We’re seeing a lot of that, because there's no point moving a ton of data out there, and putting it on live platforms that are going to cost quite a bit of money, if it's two or four years old. We’re seeing a lot of solution architects out there setting those environments before they move up.

Gardner: Is there a confluence between portability and what organizations are doing with disaster recovery (DR)? Maybe they're mirroring data and/or infrastructure and applications for purposes of business continuity and then are able to say, "This reduces our risk, because not only do we have better DR and business continuity benefits, but we’re also setting the stage for us to be able to move this where we want, when we want."

They can create a hybrid model, where they can pick and choose on-premises, versus a variety of other cloud providers, and even decide on those geographic or compliance issues as to where they actually physically place the data. That's a big question, but the issue is business continuity, as part of this movement toward a lower risk, how does that pan out?

Beavis: That's actually one of the biggest movements that we’re seeing at the moment. Organizations, when they refresh their infrastructure, don’t see the value in refreshing DR on-premises. They let the first step to cloud be, "Let's move the DR out to the cloud, and replicate from on-premises out into our cloud."

Then, as you said, we have the advantage to start to do things like IaaS testing, understanding how those applications are going to work in the cloud, tweak them, get the performance right, and do that with little risk to the business. Obviously, the production machine will continue to run on-premises, while we're testing snapshots.
DR is still the number one use case that we're seeing people move to the cloud.

It's a good way to get a live snapshot of that environment -- how it’s going to perform in the cloud, how your users are going to access it, bandwidth, and all that type of thing you need to verify before going live. DR is still the number one use case we're seeing for people moving to the cloud.

Gardner: As we go through each of these risks, and I hear you relating how your customers and TD, your own organization, have reacted to them, it seems to me that, as we move toward this software-defined data center, where we can move from the physical hardware and the physical facilities, and move things around in functional blocks, this really solves a lot of these risk issues.

You can manage your legal, your SLAs, and your licenses better when you know that you can pick and choose the location. That longevity issue is solved, when you know you can move the entire block, even if it's under escrow, or whatever. Complexity and fear about forking or immaturity of the infrastructure itself can be mitigated, when you know that you can pick and choose, and that it's highly portable.

It's a round-about way of getting to the point of this whole notion of software-defined data center. Is that really at heart a risk reduction, a future direction, that will mitigate a lot of these issues that are holding people back from adopting cloud more aggressively?

Beavis: From a service provider's perspective it certainly does. The single-pane management window that you can do now, where you can control everything from your network -- the compute and the storage -- certainly reduces risk, rather than needing several tools to do that.

Backup integration

And the other area where the vendors are starting to work together is the integration of things like backup and, as we spoke about earlier, DR. Tools now sit natively within that VMware stack around the software-defined data center, written to the vSphere API, rather than us trying to retrofit products to achieve file-level backups within a virtual data center, within vCloud. Pretty much every day you wake up, there's a new tool that's now supported within that.

From a service provider's perspective, it's really reducing the risk and time to market for new offerings, but from a customer's perspective, it's really getting the experience that they're used to on-premises over to a TD cloud, which makes it a lot easier for them to start to adopt and consume the cloud.

Gardner: One last chance, Adam, for any examples. Are there any other companies that you would like to bring up that illustrate some of these risk-mitigation approaches that we've been discussing?

Beavis: Another one was a company, a medical organization. It goes back to what we were saying earlier. They had to get a DR project up and running. So they moved that piece to the cloud, and were unsure whether they would ever move any of their production data out. But six months after running DR in the cloud, we just started to provide some capacity.

The next thing was that they had a new project, putting in a new portal for e-learning. They decided for the first time, "We've got the capacity sitting over in the cloud. Let's start to do that." So they’ve started to migrate all of their test and dev environment out there, because in their mind they had reduced the risk around uptime in the cloud, due to the success that they had with the DR. They had all the statistics and reporting back on the stability of that environment.

Then, they became comfortable to move the next segment, which was the test and dev environment. And all things are going well. That application will run out of the cloud and will be their first application out there.
We have the team here that can really make sure we architect or build those apps correctly as they start to move them out.

That was a company that was very risk averse, and the DR project took a lot of getting across the line in the first case. We'll probably see that, in six to eight months, they're going to be running some of their core applications out of the cloud.

We'll start to see that more and more. The customers’ roadmap to the cloud will move from DR, maybe some test and dev, and new applications. Then, as that refresh comes up to the on-premise, they would be in a situation where they have completed the testing for those applications and feel comfortable to move them out to the cloud.

Gardner: That really sounds like an approach to mitigating risk, when it comes to the cloud, gradual adoption, learn, test, and then reapply.

Beavis: It is, and one of the big advantages we have at TD is the support around a lot of those applications, as people move out -- how Citrix is going to work in the cloud, how Microsoft Exchange is going to work in the cloud, and how their other applications will work. We have the team here that can really make sure we architect or build those apps correctly as they start to move them out.

So a lot of customers are comfortable having a full-service provider, rather than just a platform for them to throw everything across to.

Gardner: Great. We've been discussing how a leading Australian IT service provider, Thomas Duryea Consulting, has made a successful journey to cloud computing. This sponsored second installment, on how a variety of risks associated with cloud adoption have been identified and managed, comes via a three-part series on how TD designed, built, and commercialized an adaptive cloud infrastructure built on VMware.

We've seen how, through a series of use-case scenarios, a list of risks has been managed. And we also developed a sense of how risk can be balanced as a roadmap, starting with disaster recovery and then learning from there. I thought that was a really interesting new insight for the market.

So look for the third and final chapter in our series soon, and we'll then explore the paybacks and future benefits that a cloud ecosystem provides for businesses. We'll actually examine the economics that compel cloud adoption.

With that, I’d like to thank our guest Adam Beavis, the General Manager of Cloud Services at Thomas Duryea Consulting in Melbourne, Australia. This was great, Adam. Thanks so much.

Beavis: Absolute pleasure.

Gardner: And of course, I would like to thank you, our audience, for joining as well. This is Dana Gardner, Principal Analyst at Interarbor Solutions.

Thanks again for listening, and don't forget to come back next time for the next BriefingsDirect podcast discussion.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how a stepped approach helps an Australian IT service provider smooth the way to cloud benefits at lower risk for its customers. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.



Monday, April 22, 2013

Service Virtualization Brings Speed Benefit and Lower Costs to TTNET Applications Testing Unit

Transcript of a BriefingsDirect podcast on how Türk Telekom subsidiary TTNET has leveraged Service Virtualization to significantly improve productivity.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Dana Gardner: Hello, and welcome to the next edition of the HP Discover Performance Podcast Series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your moderator for this ongoing discussion of IT innovation and transformation.

Gardner
Once again we're focusing on how software improvements and advanced HP Service Virtualization (SV) solutions are enabling IT leaders to deliver better experiences and payoffs for businesses and end-users alike.

Today we’re going to learn about how TTNET, the largest internet service provider in Turkey, with six million subscribers, has significantly improved applications deployment, while cutting costs and time to delivery.

With that, let's join our guest, Hasan Yükselten, Test and Release Manager at TTNET, which is a subsidiary of Türk Telekom, and they're based in Istanbul. Welcome to the show, Hasan.

Hasan Yükselten: Thank you.

Gardner: Before we get into this discussion of how you’ve used SV in your testing, what was the situation there before you became more automated and before you started to use more software tools? What was the process before that?

Yükselten: Before SV, we had to use the other party’s test infrastructures in our test cases. We're the leading ISP company in Turkey. We deploy more than 200 applications per year and we have to provide better and faster services to our customers every week and every month.

Yükselten
We mostly had problems with issues such as accessibility, authorization, downtime, and private data when reaching the third parties' infrastructures. So, we needed virtualization on our test systems, and we needed automation to get faster deployment and make the release times shorter. And of course, we needed to reduce our cost. So, we decided to solve the problems of the company by implementing SV.

Gardner: What did you do to begin this process of getting closer to a faster and automated approach? Did you do away with scripts? Did you replace them? How did you move from where you were to where you wanted to be?

Yükselten: Before SV, we couldn’t do automation, since the other parties are in discrete locations and it was difficult to reach the other systems. We could automate functional test cases, but for end-to-end test cases, it was impossible to do automation.

First, we implemented SV for virtualizing the other systems, and we put SV between our infrastructure and the third-party infrastructure. We learned the requests and responses and then could use SV instead of the other party infrastructure.
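
For readers unfamiliar with the record-and-replay idea behind service virtualization, a minimal sketch follows -- plain Python rather than HP SV, with an invented stand-in for the third-party system -- showing how request/response pairs learned while the real service is reachable can later be served back when it is not:

```python
# Minimal record-and-replay stub to illustrate the service virtualization idea.
# This is not HP Service Virtualization; the "real" service here is a local function.

import json

recorded = {}  # request -> response, learned while the real service is available

def real_identity_service(request):
    """Stand-in for a third-party system (e.g., an identity lookup)."""
    return {"id": request["id"], "status": "valid"}

def call(request, record_mode=True):
    key = json.dumps(request, sort_keys=True)
    if record_mode:
        response = real_identity_service(request)   # learn request/response pairs
        recorded[key] = response
        return response
    if key in recorded:                              # replay without the real system
        return recorded[key]
    return {"error": "no recorded response for this request"}

# Record while the third party is reachable, then replay in isolated test runs.
call({"id": "12345678901"}, record_mode=True)
print(call({"id": "12345678901"}, record_mode=False))
# {'id': '12345678901', 'status': 'valid'}
```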

Automation tools

After this, we could also use automation tools. We managed to use automation tools by integrating Unified Functional Testing (UFT) and SV tools, and now we can run automated test cases and end-to-end test cases on SV.

Gardner: Was there anything about this that allowed you to have better collaboration between the developers and the testers? I know that in many companies this is a linear progression, where they develop and then test, and there's not a lot of communication about it. Was there anything about what you've done that's improved how developers and testers have been able to coordinate and collaborate?

Yükselten: We started to use SV in our test systems first. When we saw the success, we decided to implement SV for the development systems also. But, we've just implemented SV in the development site, so I can't give results yet. We have to wait and see, for maybe one month, before I can reply to this question.

Gardner: Tell me about the types of applications that you’re using here as a large internet service provider. Are these internal apps for your organization? Are they facing out to the customers for billing, service procurement, and provisioning? Give me a sense of the type of applications we’re talking about?

Yükselten: We are mostly working on customer relationship management (CRM) applications. We deploy more than 200 applications per year and we have more than six million customers. We have to offer new campaigns and make some transformations for new customers, etc.

We have to save all the information, and while saving the information, we also interact with the other systems -- for example, the National Identity System, through telecom systems and public switched telephone network (PSTN) systems.

We have to ask for information and make some requests to the other systems. So, we need to use all the other systems in our CRM systems. And we also have internet protocol television (IPTV) products, value-added services products, and the company products. But basically, we’re using CRM systems for our development and for our systems.

Gardner: So clearly, these are mission-critical applications essential to your business, your growth, and your ability to compete in your market.

Yükselten: If there is a mistake, a big error in our system, the next day, we cannot sell anything. We cannot do anything all over Turkey.

Gardner: Let's talk a bit about the adoption of your SV. Tell me about some of the products you’re using and some of the technologies, and then we’ll get into what this has done for you. But, let's talk about what you actually have in place so far.

Yükselten: Actually, it was very easy to adopt these products into our system, because, including the proof of concept (PoC), we could use this tool in six weeks. We spent the first two weeks on the PoC, and after four more weeks, we managed to use the tool.

Easy to implement

For the first six weeks, we could use SV for 45 percent of end-to-end test cases. In 10 weeks, 95 percent of our test cases could be run on SV. It was very easy to implement. After that, we also implemented two other SVs in our other systems. So, we're now using three SV systems. One is for development, one is just for the campaigns, and one is for the E2E tests.

Gardner: Tell me how your relationship with HP Software has been. How has it been working with HP Software to attain this so rapidly?

Yükselten: HP Software helped us so much, especially R&D. HP Turkey helped us, because we were also using application lifecycle management (ALM) tools before SV. We were using QTP, LoadRunner, QC, etc., so we had a good relationship with HP Software.

Since SV is a new tool, we needed a lot of customization for our needs, and HP Software was always with us. They were very quick to answer our questions and to respond to our development needs. We managed to use the tool in six weeks because of HP’s rapid solutions.

Gardner: Let’s talk a little bit about the scale here. My understanding is that you have something on the order of 150 services. You use 50 regularly, but you're able to then spin up and use others on a more ad-hoc basis. Why is it important for you to have that kind of flexibility and agility?
We virtualized all the web services, but we use just what we need in our test cases.

Yükselten: As you say, we virtualized more than 150 services, but we use 48 of them actively. We use these portions of the services because we virtualized our third-party infrastructures for our needs. For example, we virtualized all the other CRM systems, but we don’t need all of them. In gateway mode, you can simulate all the other web services totally. So, we virtualized all the web services, but we use just what we need in our test cases.

Gardner: And this must be a major basis for your savings when you only use what you need. The utilization rate goes up, but your costs can go down. Tell us a little bit about how this has been an investment that’s paid back for you.

Yükselten: In three months we got the investment back, actually maybe in less than three months. It could have been two and a half months. For example, for the campaign test cases, we gained 100 percent efficiency. Before HP, we could run just seven campaigns in a month, but after HP, we managed to run 14 campaigns in a month.

We gained 100 percent efficiency and three man-months in this way, because three test engineers were working on campaigns like this. For another example, last month we got the metrics and saw that we had a total blockage for seven days out of the 21 working days in March. We saved 33 percent of our manpower with SV, and there are 20 test engineers working on it. So we gained 140 man-days last month.

For our basic test scenarios, we could run all test cases in 112 hours. After SV, we managed to run them in 54 hours. So we gained 100 percent efficiency in that area, and we also managed to do automation for the campaign test cases. We managed to automate 52 percent of our campaign test cases, and this meant a very big efficiency gain for us. In total, we saved more than $50,000 per month.
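
A quick back-of-the-envelope check of those figures, using only the numbers quoted above: seven blocked days out of 21 working days is the 33 percent, 20 engineers times seven days is 140 man-days, and cutting a 112-hour run to 54 hours roughly doubles throughput:

```python
# Back-of-the-envelope check of the savings figures quoted in the interview.

blocked_days, working_days, engineers = 7, 21, 20
print(f"Blocked share of month: {blocked_days / working_days:.0%}")   # 33%
print(f"Effort avoided: {engineers * blocked_days} man-days")         # 140 man-days

before_hours, after_hours = 112, 54
print(f"Throughput gain: {before_hours / after_hours - 1:.0%}")       # ~107%, roughly doubled
```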

Broader applications

Gardner: That’s very impressive and that was in a relatively short period of time. Do you expect now to be able to take this to a larger set of applications, maybe beyond your organization, more generally across Türk Telekom?

Yükselten: Yes. Türk Telekom licensed these tools and started to use them in their test services to get this efficiency for their systems. We have a sister company called AVEA, and they also want to use this tool. After we got this efficiency, many companies wanted to use this virtualization. Eight companies have visited us in Turkey to learn from our experiences with this tool. Many companies want this and want to use this tool in their test systems.

Gardner: Do you have any advice for other organizations like those you've been describing, now that you have done this? Any recommendations on what you would advise others that might help them improve on how they do it?

Yükselten: Companies must know their needs first. For example, in our company, we have three third-party systems that cause blockages, and those systems don't change every day. So it was easy to implement SV in our systems and virtualize the other systems. We don’t need to do virtualization day by day, because the other systems don't change every day.

Once a month, we check and update our systems and our web services on SV, and this is enough for us. But if the other party's systems change day by day, or frequently, it may be difficult to do virtualization every day.
Companies should think about automation alongside virtualization. That is also a very efficient aspect, so it must be considered while doing virtualization.

This is an important point. Companies should think about automation alongside virtualization. That is also a very efficient aspect, so it must be considered while doing virtualization.

Gardner: As to where you go next, do you have any thoughts about moving toward UFT or using cloud deployment models more? Where can you go next to attain more benefits and efficiencies?

Yükselten: We started to use UFT with integrating SV. As I told you, we managed to automate 52 percent of our campaign test cases so far. So we would like to go on and try to automate more test cases, our end-to-end test cases, the basic scenarios, and other systems.

Our first goal is doing more automation with SV and UFT, and the other is using SV in the development sites. We plan to find defects early in the development sites and get higher-quality products into test.

Rapid deployment

Of course, in this way, we get rapid deployment and shorter release times, because the product will have higher quality. Using performance testing with SV also helps us on performance. We use HP LoadRunner for our performance test cases. We have three goals now, and the last one is using SV integrated with LoadRunner.

Gardner: Well, it's really impressive. It sounds as if you put in place the technologies that will allow you to move very rapidly, to even a larger payback. So congratulations on that.

Well, Hasan, I'm afraid we’ll have to leave it there; we've run out of time. We’ve learned how TTNET, the largest internet service provider in Turkey, has significantly improved mission-critical application deployment, while also cutting costs and reducing that all-important time to delivery.
We plan to find defects early in the development sites and get higher-quality products into test.

I'd first like to thank our supporter for this series, HP Software, and remind our audience to carry on the dialogue in the Discover Performance Group on LinkedIn. Of course, I'd like to extend a huge thank you to our special guest, Hasan Yükselten. He is the Test and Release Manager at TTNET, which is a subsidiary of Türk Telekom in Istanbul. Thanks so much, Hasan.

Yükselten: You're welcome, and thank you for your time too.

Gardner: And you can gain more insights and information on the best of IT Performance Management at www.hp.com/go/discoverperformance. And you can always access this and other episodes in our HP Discover performance podcast series on iTunes under BriefingsDirect.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, and I've been your host and moderator for this discussion part of our ongoing series on IT Innovation. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes. Download the transcript. Sponsor: HP.

Transcript of a BriefingsDirect podcast on how Türk Telekom subsidiary TTNET has leveraged Service Virtualization to significantly improve productivity. Copyright Interarbor Solutions, LLC, 2005-2013. All rights reserved.
