Wednesday, January 04, 2012

Overlapping Criminal and State Threats Pose Growing Cyber Security Threat to Global Internet Commerce, Says Open Group Conference Speaker

Transcript of a podcast in conjunction with The Open Group Conference in San Francisco on how foreign governments and criminal gangs are colluding to attack governments and businesses for profit and power.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: The Open Group.

Register for The Open Group Conference
Jan. 30 - Feb. 3 in San Francisco.

Dana Gardner: Hello, and welcome to a special BriefingsDirect thought leadership interview series coming to you in conjunction with The Open Group Conference this January in San Francisco. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host throughout these discussions.

The conference will focus on how IT and enterprise architecture support enterprise transformation. Speakers in conference events will also explore the latest in service-oriented architecture (SOA), cloud computing, and security.

We’re here now with one of the main speakers, Joseph Menn, Cyber Security Correspondent for the Financial Times and author of Fatal System Error: The Hunt for the New Crime Lords Who are Bringing Down the Internet.

Joe has covered security since 1999, first for the Los Angeles Times and now for the Financial Times. Fatal System Error is his third book; he also wrote All the Rave: The Rise and Fall of Shawn Fanning's Napster.

As a lead-in to his Open Group presentation, entitled "What You're Up Against: Mobsters, Nation-States, and Blurry Lines," Joe and I are now going to explore the current cyber-crime landscape, the underground cyber-gang movement, and the motive behind governments collaborating with organized crime in cyber space. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Gardner: It seems to me the conventional wisdom about cyber crime and security has been that a kind of self-regulation was in place: as long as there wasn't much profit in it, the cost of cyber crime would outweigh the payoffs, and the problem would stay manageable.

Has that changed? Have we entered a new period where just balancing risks and costs isn't a sufficient bulwark against burgeoning crime and risk?

Menn: I'm not sure that was ever true, not after cyber crime metastasized beginning in 2003, when the bad-guy spammers in Russia wanted more IP addresses to send mail from after blacklisting became effective. But it's even less true now than it ever was.

Maybe you can make your enterprise a little trickier to get into than the other guy’s enterprise, but crime pays very, very well, and in the big picture, their ecosystem is better than ours. They do capitalism better than we do. They specialize to a great extent. They reinvest in R&D.

On our end, on the good guys' side, it's hard if you're a chief information security officer (CISO) or a chief security officer (CSO) to convince the top brass to pay more. You don't really know what's working and what isn't. You don't know if you've really been had by something we call an advanced persistent threat (APT). Even the top security minds in the country can't be sure whether they've been had or not. So it's hard to know what to spend on.

More efficient

The other side doesn't have that problem. They're getting more efficient in the same way that they used to lead technical innovation. Now they're leading economic innovation. The freemium model is best evidenced by crimeware kits like ZeuS, where you can get versions that are pretty effective and will help you steal a bunch of money for free. Then, if you like that, there are add-ons to pay extra for -- the latest and greatest that are sure to get through the antivirus systems.

Gardner: When you say "they," who you are really talking about?

Menn: They, the bad guys? It's largely Eastern European organized crime. In some countries, they can be caught. In other countries they can't be caught, and there really isn't any point in trying.

It's a geopolitical issue, which is something that is not widely understood, because in general, officials don’t talk about it. Working on my book, and in reporting for the newspapers, I've met really good cyber investigators for the Secret Service and the FBI, but I’ve yet to meet one that thinks he's going to get promoted for calling a press conference and announcing that they can’t catch anyone.

So the State Department, meanwhile, keeps hoping that the other side is going to turn over a new leaf, but they've been hoping that for 10 or more years, and it hasn't happened. So it's incumbent upon the rest of us to call a spade a spade here.

What's really going on is that Russian intelligence and, depending on who is in office at a given time, Ukrainian authorities, are knowingly protecting some of the worst and most effective cyber criminals on the planet.

Gardner: And what would be their motivation? In heaven’s name, why would a sovereign power or an agency therein want to protect cyber criminals?

Menn: As a starting point, the level of garden-variety corruption over there is absolutely mind-blowing. More than 50 percent of Russian citizens responding to surveys say that they paid a bribe to somebody in the past 12 months. But it's gone well beyond that.

The same resources, human and technical, that are used to rob us blind are also being used in what is fairly called cyber war. The same criminal networks that are after our bank accounts were, for example, used in denial-of-service (DoS) attacks on Georgian and Estonian websites belonging to governments, major media, and Estonian banks.

It's the same guy, and it's a "look-the-other-way" thing. You can do whatever crime you want, and when we call upon you to serve Mother Russia, you will do so. And that has accelerated. Just in the past couple of weeks, with the disputed elections in Russia, you've seen mass DoS attacks against opposition websites, mainstream media websites, and LiveJournal. It's a pretty handy tool to have at your disposal. In my book, I provide all the evidence needed to convince reasonable people.

Gardner: In your book you use the term "bringing down the Internet." I suppose another conventional thought around security is that there is a sort of mutual-assured-destruction effect, where bringing down the Internet would hurt everyone. Is that not the case? Are they really just looking for people's credit card numbers and petty crime, or is this really a threat to the integrity of the Internet in general?

Menn: Well, integrity is the key word there. No, I don't think anybody is about to deprive us all of the privilege of watching skateboarding dogs on YouTube. What I mean is the higher trust in the Internet -- the way it's come to be used, not the way it was designed: for online banking, ecommerce, and, increasingly, for storing corporate and, heaven help us, government secrets in the cloud. That is in very, very great trouble.

Not a prayer

I don't think you can even trust transactions not to be monitored and pilfered now. The latest, greatest versions of ZeuS get past multi-factor authentication and are not detected by any antivirus that's out there. So consumers don't have a prayer, in the words of Art Coviello, CEO of RSA, and corporations aren't doing much better.

So the way the Internet is being used now is in very, very grave trouble and not reliable. That's what I mean by it. If they turned all the botnets in the world on a given target, that target is gone. Against multiple root servers and DNS, they could do some serious damage. I don't know if they could stop the whole thing, but you're right, they don't want to kill the golden goose. I don't see a motivation for that.

Gardner: I guess if we look at organized crime in historical context, we find that there has been a lot of innovation over the decades, over the generations, in how to shake people down, create rackets, run protection scams, and so forth. Is that playing out on the Internet as well? Is there some continuity between what organized crime tends to do in the physical world and what they're now attempting in the virtual world?

Menn: Sure. The mob does well in any place where there is a market for something and there isn't an effective legal framework that serves it -- prohibition back in the day, prostitution, gambling, and that sort of thing. One of the things that's interesting about the core narrative in my book is that prostitution doesn't travel very well. Liquor is pretty much legal in most countries, but gambling travels very well.

So the traditional Five Families, Gambino-type mobs gravitated toward Internet gambling, and they run some very large enterprises that are offshore. And if you don't pay off, then yeah, somebody actually shows up and breaks your legs. Old school.

The Russian and Ukrainian gangs went to extortion as an early model, and ironically, some of the first websites that they extorted with that threat were the offshore gambling firms. They were cash rich, they had pretty weak infrastructure, and they were wary about going to the FBI. They started by attacking those sites in 2003-04 and then moved on to more garden-variety companies. Some of them paid off, and some said, "This is going to look a little awkward in our SEC filings," and didn't pay off.

There are some people who say organized crime and the Internet don't really mix, and they don't know how it happened. I've just told you how it happened in the US. Overseas, it's not like the mob had a meeting one day and said, "Bob, I think this Internet thing shows promise. I want you to open a cyber division for it."

The way things work in Russia is that even legitimate businesses have a local patron mobster that they pay tribute to. It's not so much because he is going to shut them down, but because you want one guy to deal with all the other people that are going to shake you down -- other mobsters and cops who are on the take.

Once the cyber gang got big enough, sooner or later, they also wanted the protection of traditional organized crime, because those people had better connections inside the intelligence agencies and the police force and could get them protection. That's the way it worked. It was sort of an organic alliance, rather than "Let’s develop this promising area."

Gardner: Just as in past eras with the need for protection, these cyber criminals look for a safe haven and perhaps pay off people, whether it's physical or virtual, to protect their environment, and then perhaps there is some added collusion along the way.

Have we moved now beyond this "let's just get safe and pay off some people for protection," or is there a two-way street where these cyber criminals are being contracted by state agencies? How does this further collusion come about?

Proving their worth

Menn: Exactly. That is what happens. Initially it was garden-variety payoffs and protection. Then, around 2007, with the attack on Estonia, these guys started proving their worth to the Kremlin, and others saw that with the attacks that ran through their system.

This has continued to evolve very rapidly. Now DoS attacks are routinely used as a tool for political repression all around the world -- Vietnam, Iran, and elsewhere you'll see critics silenced by DoS attacks. In most cases, it's not the spy agencies themselves, but their contract agents. They just go to their friends in these gangs and say, "Hey, do this." What's interesting is that they are both in this gray area now, both Russia and China, which we haven't talked about as much.

In China, hacking really started out as an expression of patriotism. Some of the biggest attacks, Code Red being one of them, were against targets in countries that were perceived to have slighted China or to have run into some sort of territorial flap with China -- and, lo and behold, they got hacked.

In the past several years, those same patriotic hackers -- the ones behind the attacks on the Western defense establishment that we're finally reading a lot about -- have gone off and decided to enrich themselves as well. There were actually disputes in some of the major Chinese hacking groups. Some people said it was unethical to just go after money, and some of these early groups split over that.

In Russia, it went in the opposite direction. It started out with just a bunch of greedy criminals, and then they said, "Hey -- we can do even better and be protected. You have better protection if you do some hacking for the motherland." In China, it's the reverse. They started out hacking for the motherland, and then added, "Hey -- we can get rich while serving our country."

So they're both sort of in the same place, and unfortunately it makes it pretty close to impossible for law enforcement in [the U.S.] to do anything about it, because it gets into political protection. What you really need is White House-level dealing with this stuff. If President Obama is going to talk to his opposite numbers about Chinese currency, Russian support of something we don’t like, or oil policy, this has got to be right up there too -- or nothing is going to happen at all.

Gardner: What about the pure capitalism side, stealing intellectual property (IP) and taking over products in markets with the aid of these nefarious means? A lot of companies won't want to share details about this, but how big a deal is this now for enterprises and commercial organizations?

Menn: It is much, much worse than anybody realizes. U.S. counterintelligence officials a few weeks ago finally put out a report saying that Russia and China are deliberately stealing our IP, the IP of our companies. It's an open secret. It's been happening for years. You're right, the man in the street doesn't realize this, because companies aren't used to fessing up. Therefore, there is little outrage and little pressure for retaliation or diplomatic engagement on these issues.

I'm cautiously optimistic that that is going to change a little bit. This year the Securities and Exchange Commission (SEC) gave very detailed guidance about when you have to disclose that you've been hacked. If there is a material impact on your company, you have to disclose it, even if the full extent is unknown.

Can't be boilerplate

If it might have, or is reasonably likely to have, a material impact, you have to spell it out. And it can't be boilerplate. It can't just be, "We are an Internet retailer and therefore we are a target of hackers and therefore people's credit cards might get out." No, without divulging what your weaknesses are, you have to say, "We have detected hacks in the past, and we don't know, but our source code might be gone."

You have to be a little more explicit, and so far, it's basically Google that has really spelled out how badly they got hit. We're going to see a lot more companies say that, and I think that will help wake up Congress and the general public.

Gardner: So the old adage of shining light on this probably is in the best interest of everyone. Is the message, then, that keeping this quiet isn't necessarily the right way to go?

Menn: Not only is it not the right way to go, but it's safer to come out of the woods and fess up now. The stigma is almost gone. If you really blow the PR like Sony, then you're going to suffer some, but I haven’t heard a lot of people say, "Boy, Google is run by a bunch of stupid idiots. They got hacked by the Chinese."

It's the definition of an asymmetrical fight here. There is no company that's going to stand up against the might of the Chinese military, and nobody is going to fault them for getting nailed. Where we should fault them is for covering it up.

I think you should give the American people some credit. They realize that you're not the bad guy if you get nailed. As I said, nobody thinks that Google has a bunch of stupid engineers. It is somewhere between extremely difficult and impossible to ward off "zero-days" and the dedicated teams working on social engineering, because TCP/IP is fundamentally broken, and it ain't your fault.

Gardner: Let's say that I'm a leadership individual at a corporation, a Global 500 organization, and I am wondering to what extent this is a risk. Is this something that’s going to be an acceptable cost of doing business? Is this just something I have to deal with when I go to different markets around the world, or is this an existential threat?

We're still seeing record profits by many companies. Google is certainly not hurting. This hasn’t necessarily attacked their bottom line in the same way it attacked their firewall. How serious is this? How serious should it be considered?

Menn: It's an existential threat not only to your company, but to our country and to our way of life. It is that bad. One of the problems is that in the U.S., executives tend to think a quarter or two ahead. If your source code gets stolen, your blueprints get taken, nobody might know that for a few years, and heck, by then you're retired.

With the new SEC guidelines and some national plans in the U.K. and in the U.S., that's not going to cut it anymore. Executives will be held accountable. This is some pretty drastic stuff. The things that you should be thinking about, if you're in an IT-based business, include figuring out the absolutely critical crown-jewel one, two, or three percent of your stuff, and keeping it off networked machines.

Short-term price

Yes, that is a cost of doing things that might well make you less efficient, but it's a short-term price you have to pay to ensure long-term survival. You have to do that, and there are some creative things that can be done.

For example, say you've got a blueprint for the next widget that is absolutely going to smoke the competition, and it has got to be on a computer that other people can access for some reason. I would make 100 different similar blueprints of the next generation widget, and only a handful of people you trust know which is the right one, and all the others are hooey.

Therefore, if everything gets stolen, they're going to waste a lot of cycles building the wrong widget. That's the sort of strategic spy-type thinking that I think garden-variety CEOs have got to start engaging in.
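
As a purely hypothetical sketch of the decoy tactic Menn describes -- the file names, the JSON blueprint format, and the perturbation rule here are all invented for illustration -- scattering one genuine design among subtly wrong fakes might look like this:

```python
import json
import random
import secrets
from pathlib import Path

def generate_decoys(real_blueprint: Path, out_dir: Path, n_decoys: int = 99) -> int:
    """Scatter one real blueprint among n_decoys fakes; return the real index.

    The returned index should be recorded offline, never on the networked
    machine that holds the files.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    real_index = secrets.randbelow(n_decoys + 1)  # unpredictable placement
    spec = json.loads(real_blueprint.read_text())

    for i in range(n_decoys + 1):
        if i == real_index:
            variant = spec  # the genuine article, written out unchanged
        else:
            variant = dict(spec)
            # Perturb numeric design parameters so each fake is subtly wrong.
            for key, value in variant.items():
                if isinstance(value, (int, float)):
                    variant[key] = round(value * random.uniform(0.85, 1.15), 4)
        (out_dir / f"widget_design_{i:03d}.json").write_text(
            json.dumps(variant, indent=2))
    return real_index

if __name__ == "__main__":
    # Hypothetical input file: a flat JSON dict of design parameters.
    idx = generate_decoys(Path("widget_v2.json"), Path("designs"))
    print(f"Genuine blueprint index (record offline): {idx}")
```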

Gardner: That’s interesting. So we have to think differently, don’t we?

Menn: Basically, regular companies have to start thinking like banks, and banks have to start thinking like intelligence agencies. Everybody has to level up here.

Gardner: What do the intelligence agencies have to start thinking about?

Menn: The discussions that are going on now obviously include greatly increased monitoring, pushing responsibility for seeing suspicious stuff down to private enterprise, and obviously greater information sharing between private enterprise and government officials.

But, there's some pretty outlandish stuff that’s getting kicked around, including looking the other way if you, as a company, sniff something out in another country and decide to take retaliatory action on your own. There’s some pretty sea-change stuff that’s going on.

Gardner: So that would be playing offense as well as defense?

Menn: In the Defense Authorization Act that just passed, for the first time, Congress officially blesses offensive cyber-warfare, which is something we’ve already been doing, just quietly.

We're entering some pretty new areas here, and one of the things that's going on is that the cyber warfare stuff, which is happening, is basically run by intelligence folks rather than by a bunch of lawyers worrying about collateral damage and the like, and there's almost no oversight, because intelligence agencies in general get little oversight.

We're probably also buying a whole bunch of cyber stuff that's a waste. I mean, it's going to be the equivalent of $500 toilet seats, and we're not going to know about it, because this stuff doesn't get disclosed.

Gardner: I know that we could go on to this separate subject for hours, but just very briefly how about the area of governance? We know who's in charge when it comes to interstate commerce. We know who is in charge when it comes to managing the monetary system and protecting against counterfeit bills.

Do we really have anyone who is officially in charge of protecting, let's say, in this case, U.S. companies from outside cyber warfare? Is there a defense, legal, or other framework under which the responsibility for protection falls?

It's a mess

Menn: The short answer is that it's a mess. The Department of Homeland Security (DHS) is officially in charge of protecting the civilian-owned stuff, with the assistance of the Department of Defense (DoD) and the National Security Agency (NSA). The bottom line is that this makes it very tricky, because there are different frameworks involved.

For example, the FBI gets called in to investigate a hack and discovers it's criminal gang X, but that criminal gang may have been motivated to steal defense secrets more than money. Then they're supposed to kick it over to the intelligence community, but it's the same people. So we're a lot more handcuffed in all this than our adversaries are.

Gardner: So it's hard to say whose jurisdiction it is, under what circumstances, for how long, and then who gets the ultimate credit or blame when things go right or wrong? I guess criminals would love to see that, right?

Menn: Yup.

Gardner: Okay, we have to wrap up. It's a very fascinating subject, obviously. Just quickly looking to the future, we have some major trends. We have an increased movement toward mobility, with people increasingly using public networks through their mobile carriers for work and more business-sensitive activities.

We have the drive toward cloud computing. We'll be putting more of our assets, data, processes, and perhaps even IP in a third-party data center, known as a cloud. We're also seeing the movement toward outsourcing more IT and outsourcing applications in a software-as-a-service (SaaS) model.

Are these good, bad, indifferent? How does this set of big shifts in IT impact this whole cyber security issue?

Menn: Well, there are some that are clearly dangerous, and there are some things that are a mixed bag. Certainly, the inroads of social networking into the workplace are bad from a security point of view. Perhaps worse is the consumerization of IT, the bring-your-own-device trend, which isn't going to go away. That’s bad, although there are obviously mitigating things you can do.

The cloud itself is a mixed bag. Certainly, in theory, it could be made more secure than what you have on-premises. If you're turning it over to the very best of the very best, they can do a lot more things than you can in terms of protecting it, particularly if you're a smaller business.

If you look at the large-scale banks and people with health records and that sort of thing that really have to be ultra-secure, they're not going to do this yet, because the procedures are not really set up to their specs yet. That will likely come in the future. But cloud security, in my opinion, is not there yet. So that's a mixed blessing.

Gardner: Before we close out, it sounds as if it's important for companies to educate themselves on what the real threats are, consider what to do if they are a victim, and try to figure out who their friends are in government and in third-party private security organizations. Anything else that you think is important, Joe, in terms of getting started in moving toward both defense and offense, given that these issues, as you say, are potentially existential?

Radical steps

Menn: As I said, you need to think strategically about this, and that includes some pretty radical steps. There are those who say there are two types of companies out there -- those that have been hacked and those that don’t know that they’ve been hacked.

Everybody needs to take a look at this stuff beyond their immediate corporate needs and think about where we're heading as a society. And to the extent that people are already expert in this stuff, or can become expert in it, they need to share that knowledge. That will often mean saying "Yes, we got hacked" publicly, but it also means educating those around them about the severity of the threat.

One of the reasons I wrote my book, and spent years doing it, is not because I felt that I could tell every senior executive what they needed to do. I wanted to educate a broader audience, because there are some pretty smart people, even in Washington, who have known about this for years and have been unable to do anything about it. We haven't really passed anything that's substantial in terms of legislation.

As a matter of political philosophy, I feel that if enough people on the street realize what's going on, then quite often leaders will get in front of them and at least attempt to do the right thing. Senior executives should be thinking about educating their customers, their peers, the general public, and Washington to make sure that the stuff that passes isn't as bad as it might otherwise be.

Gardner: Very good. We have been talking with Joseph Menn, Cyber Security Correspondent for the Financial Times and author of Fatal System Error: The Hunt for the New Crime Lords Who are Bringing Down the Internet.

As a lead-up to his Open Group presentation, "What You're Up Against: Mobsters, Nation-States and Blurry Lines," Joe and I have been exploring the current cyber crime landscape, what can be done to better understand the threat, and perhaps how to begin to work against it.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference, Jan. 30 to Feb. 3 in San Francisco. There you'll hear more from Joe and many other global leaders on the ways that IT and enterprise architecture support enterprise transformation.

So thanks to you Joe Menn for a very fascinating discussion, and I look forward to your presentation in San Francisco. I also encourage our readers and listeners to attend the conference to learn more. Thanks, Joe.

Menn: Thanks very much.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator through these thought leader interviews. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: The Open Group.

Transcript of a podcast in conjunction with The Open Group Conference in San Francisco on how foreign governments and criminal gangs are colluding to attack governments and businesses for profit and power. Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.

Register for The Open Group Conference
Jan. 30 - Feb. 3 in San Francisco.

You may also be interested in:

Tuesday, December 20, 2011

SAP Runs VMware to Provision Virtual Machines to Support Complex Training Courses

Transcript of a BriefingsDirect podcast on how SAP uses VMware products to implement a private cloud that smooths out educational apps runtime requirements.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you’re listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how worldwide enterprise applications leader SAP has designed and implemented a private cloud infrastructure to support an internal consulting and training program.

By standardizing on a VMware cloud platform, SAP has been able to slash provisioning times for multiple instances of its flagship application suite, as well as set the stage for wider adoption of cloud models. [Disclosure: VMware is a sponsor of Briefings Direct podcasts.]

Here to tell us about the technical and productivity benefits of private clouds, is Dr. Wolfgang Krips, the Senior Vice President of Global Infrastructure at SAP in Walldorf, Germany. Welcome to BriefingsDirect, Dr. Krips.

Krips: Thank you, Dana.

Gardner: Tell me about this particular use case. You've needed to provision a lot of your enterprise resource planning (ERP) applications and you've got people coming into learn about using them and implementing them. What is it about private cloud that made the most sense for you in this particular instance?

Krips: Expanding a bit on the use case, there is a specific challenge there. In the training business, people book their courses, and we know only on Friday evening who is attending the course on Monday. So we have only a very short amount of time over the weekend to set up the systems. That was one of the big challenges that we had to solve.

The second challenge is that, at the same time, these systems become more and more mission critical. Customers are saying, "If the system isn't available during the course, I'm not willing to pay." Maybe the customer will rebook the course. Sometimes he doesn’t. That means that if the systems aren't available, we have an immediate revenue impact.

You can imagine that if we have to set up a couple of hundred, or potentially a couple of thousand, systems over the weekend, we need a high degree of automation to do that. In the past, we had homegrown scripts, and there was a lot of copying and stuff like that going on. We were looking into other technologies and opportunities to make life easier for us.

A couple of challenges were that the scripts and the automation that we had before were dependent on the specific hardware that we used, and we can't use the same hardware for each of the courses. We have different hardware platforms, and we had to adapt all the scripts to the various hardware platforms.

When we moved to virtualization technology, we could make use of linked cloning, which allowed us to set up the systems much faster than the original copying that we did.

The second thing was that by introducing the virtualization layer, we became almost hardware independent, and that cut the effort in constructing or doing the specific automation significantly.
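
SAP's actual scripts aren't shown in the transcript, but as a rough illustration of the linked-clone technique Krips credits, here is a minimal sketch using pyVmomi, VMware's open-source Python SDK. The vCenter host, credentials, and VM names are placeholders, and the golden master is assumed to already carry a snapshot:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def linked_clone(si, source_name: str, clone_name: str):
    """Create a linked clone of source_name from its first snapshot."""
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    source = next(vm for vm in view.view if vm.name == source_name)
    view.DestroyView()

    # "createNewChildDiskBacking" keeps the clone's disks as delta files on
    # top of the snapshot, so no full copy is made -- that is the speed win
    # over script-driven copying.
    relocate = vim.vm.RelocateSpec(diskMoveType="createNewChildDiskBacking")
    spec = vim.vm.CloneSpec(
        location=relocate,
        powerOn=True,
        template=False,
        snapshot=source.snapshot.rootSnapshotList[0].snapshot)
    return source.CloneVM_Task(folder=source.parent, name=clone_name, spec=spec)

if __name__ == "__main__":
    ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
    si = SmartConnect(host="vcenter.example.com", user="trainer",
                      pwd="secret", sslContext=ctx)
    try:
        # Placeholder names: one clone per course seat.
        for i in range(3):
            linked_clone(si, "course-golden-master", f"course-vm-{i:03d}")
    finally:
        Disconnect(si)
```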

Gardner: When you decided that virtualization and private cloud would be the right answer, what did you need to do? What did you need to put in place and how difficult was it?

The important piece

Krips: Luckily, we already had some experience. The big thing in setting up the cloud is not getting, say, vSphere in place and the basic virtualization technology. It's the administration and making it available in self-service or the automation of the provisioning. That is the important piece, as most would have guessed.

We had some experience with Lifecycle Manager and Lab Manager before. So, when we did this last year, we set up a Lab Manager installation and worked with that to realize this kind of private cloud.

Gardner: For our listeners’ benefit, what sort of scale are we talking about here? How many virtual machines (VMs) did you have or do you have running?

Krips: In that specific cloud, typically we have between a couple of hundred and a couple of thousand VMs running. Overall, at SAP we're running more than 20,000 VMs. And, in fact, I have about 25 private cloud installations.

Gardner: What is it about this particular private-cloud installation that ended up being a proof of concept for you? Was this something that offered insights into other instances where clouds made more sense?

Krips: One of the reasons ... is the kind of criticality that we have here. As I mentioned, this cloud has to work. If this goes down, it’s not like some kind of irrelevant test system is down -- or test system pool -- and we can take up another one. Potentially a lot of training courses are not happening. With respect to mission criticality, this cloud was essential.

The other thing that was very interesting is that, as I mentioned before, we have to replicate a lot of systems from a golden master image. The technology that one typically uses for that is network fencing. So we started off with courses that used network fencing.

One of the issues that we ran into is that there are a couple of courses where you can’t use network fencing, because the systems need to connect to common back-end systems. This cloud also gave us some hints on where we have to redesign the workloads so that they become more cloud usable. That’s why I think this cloud implementation was very specific and very important for us.

Gardner: Are there specific payoffs? I suppose there are in just the reduced time for provisioning and the ability to then automate and to use that common infrastructure. Any other thoughts about what the payoffs are when you can do a cloud like this?

Krips: The payoffs are that in the past we had only the weekend as a window to set this all up. A couple of things had us scratching our heads. One thing was, the amount of time that we needed with our traditional copying scripts was significant. We used almost the full weekend to set up the courses. There was really very little room if we needed to fix something. Now, with linked cloning, that time was cut significantly.

Pay for itself

The other thing was that the effort of maintaining the automation script was reduced, and I could deploy a significant amount of the resources to work on more innovative parts like redesigning the workloads and thinking about what could be next steps in automation. If you look at it, with all the tools we utilized, the “cloud implementation” will more or less pay for itself.

Gardner: We often hear similar requirements being applied to a test and development environment. Again, bursting is essential, management and automation can be of great benefit, and it's mission critical. These are developers making products. So does that make sense to you, and are some of your other clouds involved with the test and development side of the business as well?

Krips: As I mentioned before, we have 25 private-cloud installations, and in fact, most of them are with development. We also have cloud installations in the demo area. So if sales people are providing demos, there are certain landscapes or resource pools where we are instantiating demo systems.

Most of the VMs and the cloud resource pools are in the development area, and as you mentioned, there are a couple of things that are important there. One is, as you said, that there is burst demand when people are doing testing, quality assurance, and things like that. Almost more important is that SAP wants to shorten the innovation cycles.

Internally, we've moved to a development model where every six weeks development provides a potentially shippable release. It doesn't mean that the release gets shipped, but we're running through the whole process of developing something, testing it, and validating it. There is a demonstrable release available every six weeks.

In the past, with a traditional model, if we were provisioning physical hardware, it took us about 30 days or so to provision a development system. Now, if you think about a development cycle of six weeks and you’re taking about nearly the same amount of time for provisioning the development system, you’ll see that there is a bit of a mismatch.

Moving to the private cloud and doing this in self-service, today we can provision development systems within hours.

Gardner: That's what I hear from a number of organizations, and it's very impressive. When you had a choice of different suppliers, vendors, and professional services organizations, was there anything that led you specifically to VMware, and how has that worked out?

Krips: I can give you a fairly straightforward answer. At the time we started working with private cloud and private-cloud installations, VMware was the most advanced provider of that technology, and I'd argue that it is still today.

Gardner: How about security and management benefits? It seems to me that security might not be quite the same issue when it comes to the training instances, but it would be with development, having that source code in control, particularly if you’re doing distributed development. Are there aspects of the private-cloud benefits for security management that are attractive for you?

Very reluctant

Krips: Certainly. The whole topic of cloud in general, and the notion that workloads can run anywhere, as they would in a public cloud -- that's certainly something where I personally would be very reluctant when it comes to critical development systems and the intellectual property (IP) that's on there.

From our perspective, we wanted to have the advantages of cloud with respect to flexibility, provisioning speed, but we didn’t want to have more security headaches than we already had. That’s why we said, "Let's get our arms first around a private cloud."

Even today, our cloud strategy is a hybrid cloud strategy, where we're implementing certain workloads in private clouds, and there are certain other payloads that we would potentially be willing to put into a public cloud. Still, in 99 percent of the cases, development systems would be on the list of workloads that go only in the private cloud.

Gardner: Is there something about a standardized approach to your cloud stack that makes that hybrid potential, when you’re ready to do it, when it's the right payload, something that you'll be pursuing? How does the infrastructure affect your decision about moving to hybrid?

Krips: That's one of the biggest problems we're having. Clearly, if one had a standard cloud interface, like a vCloud interface, and it was the industry norm, that would be extremely helpful. The issue is that, as you can imagine, there are a couple of workloads that we also want to test in some other well-known clouds, and I'm having a bit of a headache over how to connect to multiple clouds.

That topic is still one of the things that we haven't finally resolved, because we have to choose. We basically have to bolt on one external cloud after the other, and each is still an individual integration effort. Now, if a couple of interesting providers had a standardized cloud interface, it would be very nice for me.
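
The usual way to contain that per-cloud integration effort in code is a thin provider abstraction, so each new cloud costs one adapter rather than changes throughout the stack. This is a generic, hypothetical sketch -- not SAP's tooling or any particular vendor's API:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class VmRequest:
    name: str
    cpus: int
    memory_gb: int
    image: str  # golden-master or template identifier

class CloudProvider(ABC):
    """One adapter per cloud; callers never touch provider-specific APIs."""

    @abstractmethod
    def provision(self, req: VmRequest) -> str:
        """Create a VM and return a provider-scoped instance ID."""

    @abstractmethod
    def deprovision(self, instance_id: str) -> None:
        """Tear the VM down."""

class PrivateVSphereCloud(CloudProvider):
    def provision(self, req: VmRequest) -> str:
        # A real adapter would call the vSphere/vCloud API here.
        return f"vsphere://{req.name}"

    def deprovision(self, instance_id: str) -> None:
        pass  # power off and delete via the vSphere API

class ExternalPublicCloud(CloudProvider):
    def provision(self, req: VmRequest) -> str:
        # A real adapter would call the external provider's REST API here.
        return f"public://{req.name}"

    def deprovision(self, instance_id: str) -> None:
        pass

def place_workload(provider: CloudProvider, req: VmRequest) -> str:
    # Policy (e.g. "development systems stay private") picks the provider.
    return provider.provision(req)

if __name__ == "__main__":
    req = VmRequest("dev-sys-042", cpus=4, memory_gb=16, image="sap-dev-master")
    print(place_workload(PrivateVSphereCloud(), req))
```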

Gardner: This is the last subject for today -- and I appreciate your time and input. A lot of folks that I speak to, when they’ve gained some experience with private cloud and hybrid cloud, start to think about other ways that they can exploit it, that will bring them productivity and technical benefits.

And moving more to the mobile tier, looking at the client, and thinking about delivering not only applications as services, or as terminal services, but thinking about delivering the entire desktop experience, more and more of it as a cloud service, seems to be appealing.

Any thoughts about what your experience and benefits with cloud might mean for your future vision around clients?

Krips: Dana, the thing is pretty clear. If you look at the strategy that SAP pursues, mobility is an integral part. Not only is business-process mobility important, but we're also seeing, as I mentioned before, more agility in development. For instance, there are people who are working every couple of months in new teams. For us, it's very important that we separate the user data and the desktop from the device. We're definitely pushing very strongly into desktop virtualization (VDI).

SaaS application

The big challenge that we're currently having is that when you're moving to VDI, you take everything that's on the user's desktop today and make out of that more or less a software-as-a-service (SaaS) application. As you can imagine, if you're doing that for development, and they are doing some complex development for user interfaces or stuff like that, this puts certain demands on the latency that you can have to the data center and the processing power that you need to have in the back-end.

From our side, we're interested in technologies similar to View, where you can check out machines and still run on a VDI client, but leverage the administrative and provisioning advantages that you have through cloud provisioning for virtual desktops. So it's a pretty interesting challenge.

We understand what kind of benefits we're getting from the cloud operations -- as I said, centralized provisioning, application patching, and improved license management. There are a lot of things that are very, very important to us and that we want to leverage.

On the other hand, we have to solve the issue of not blowing the business case, because the processing power and the storage that you have at the endpoint are relatively cheap. If you moved that one-to-one to the back end, we would have difficulties with the business case. That's why we are so interested in VDI technologies that allow checking out machines in an offline mode. That would also allow us to take care of all of our mobile users.

Gardner: If the past is any indication, the costs of computing go down. When there is more volume involved, perhaps with moving to VDI, we should see some significant price improvement there as well. So we’ll have to see on that?

Krips: Yeah. But we're confident that we can get the business case to work. For us particularly, the benefits of VDI are very much in centralized provisioning. Just to give you an example, imagine how easy it would be, if you're doing desktop virtualization, to move from Windows 7 to Windows 8. You could basically flip a switch.

Gardner: Wouldn’t that be nice?

Krips: Yup.

Gardner: Thank you so much. We’ve been talking about how worldwide enterprise applications leader SAP has designed and implemented a VMware private cloud infrastructure to support an internal consulting and training program, and how that has led them to even bigger and better concepts around cloud and the business and technical benefits therein.

I'd like to thank our guest. We’ve been here with Dr. Wolfgang Krips, the Senior Vice President of Global Infrastructure at SAP.

Thank you so much, Dr. Krips.

Krips: Thank you, Dana.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how SAP uses VMware products to implement a private cloud that smooths out educational apps runtime requirements. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

You may also be interested in:

Friday, December 16, 2011

Stone Bond's Metadata Virtualization and Orchestration Improves Enterprise Data Integration Response Time and ROI

Transcript of a BriefingsDirect podcast on how businesses can better manage and exploit their exploding data via new technologies that provide meta-data-based data integration and management.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Stone Bond Technologies.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today we present a sponsored podcast discussion on the need to make sense of the deluge and complexity of the data and information that is swirling in and around modern enterprises. Most large organizations today are able to identify, classify, and exploit only a small portion of the total data and information within their systems and processes.

Perhaps half of those enterprises actually have a strategy for improving on this fact. But business leaders are now recognizing that managing and exploiting information is a core business competency that will increasingly determine their overall success. That means broader solutions to data distress are being called for.

We'll now look at how metadata-driven data virtualization and improved orchestration can help provide the inclusivity and scale to accomplish far better data management. Such access then leads to improved integration of all information into an approachable resource for actionable business activities.

With us now to help better understand these issues -- and the market for solutions to these problems -- are our guests, Noel Yuhanna, Principal Analyst at Forrester Research. Welcome to BriefingsDirect, Noel.

Noel Yuhanna: Thanks.

Gardner: We're also here with Todd Brinegar, Senior Vice President for Sales and Marketing at Stone Bond Technologies. Welcome, Todd. [Disclosure: Stone Bond is a sponsor of BriefingsDirect podcasts.]

Todd Brinegar: Dana, how are you? Noel, great to hear you, too.

Gardner: Welcome to you both. Let me start with you, Noel. It's been said often, but it’s still hard to overstate, that the size and rate of growth of data and information is just overwhelming the business world. Why should we be concerned about this? It's been going on for a while. Why is it at a critical stage now to change how we're addressing these issues?

Yuhanna: Well, data has been growing significantly over the last few years because of different application deployments, different devices, such as mobile devices, and different environments, such as globalization. These are obviously creating a bigger need for integration.

We have customers who have 55,000 databases, and they plan to double this in the next three to four years. Imagine trying to manage 55,000 databases. It's a nightmare. In fact, they don't even know what the actual count is.

Then, they're dealing with unstructured data, which is more than 75 percent of the data. It’s a huge challenge trying to manage this unstructured data. Forget about the intrusions and the hackers trying to break in. You can’t even manage that data.

Then, obviously, we have challenges of heterogeneous data sources, structured, unstructured, semi-structured. Then, we have different database types, and then, data is obviously duplicated quite a lot as well. These are definitely bigger challenges than we've ever seen.

Different data sources

Gardner: We're not just dealing with an increase in data, but we have all these different data sources. We're still dealing with mainframes. We're still adding on new types of data from mobile devices and sensors. It has become overwhelming.

I hear many times people talking about big data, and that big data is one of the top trends in IT. It seems to me that you can’t just deal with big data. You have to deal with the right data. It's about picking and choosing the correct data that will bring value to the process, to the analysis, or whatever it is you're trying to accomplish.

So Noel, again, to you, what’s the difference between big data and right data?

Yuhanna: It's like GIGO -- garbage in, garbage out. A lot of times, organizations don't know what data they're dealing with. They don't know which data in the organization is valuable. The big challenge is how to deal with this data.

The other thing is making business sense of this data. That's a very important point. And right data is important. I know a lot of organizations think, "Well, we have big data, but then we want to just aggregate the data and generate reports." But are these reports valuable? Fifty percent of the time they're not, and they've just burned 1,000 CPU cycles on this big data.

That's where there's a huge opportunity for organizations that are dealing with such big data. First of all, you need to understand what this big data means, and ask whether you're going to be utilizing it. Throwing something into the big data framework is useless and pointless, unless you know the data.

Gardner: Todd, reacting to what Noel just said about this very impressive problem, it seems that the old approaches, the old architectures, the connectors and the middleware, aren't going to be up to the task. Why do we have to think differently about a solution set when we face this deluge, and about getting to the right data rather than just all the data, regardless of its value?

Brinegar: Noel is 100 percent correct, and it is all about the right data, not just a lot of data. It’s interesting. We have clients that have a multiplicity of databases. Some they don’t even know about or no longer use, but there is relevant data in there.

Dana, when you were talking about the ability to attach to mainframes and all legacy systems, as well as to incorporate them into today's environments, that's really a big challenge for a lot of integration solutions and a lot of companies.

So the ability to come in, attach, and get the right data, and to make that data actionable and make it matter to a company, is really key and critical today. And being able to do that with the lowest cost of ownership in the market and the best time-to-value equation -- so that companies aren't creating a huge amount of tech on top of the tech they already have to get at this right data -- that's really the key, critical part.

Gardner: Noel, thinking about how to do this differently, I remember it didn’t seem that long ago when the solution to data integration was to create one big, honking database and try to put everything in there. Then that's what you'd use to crunch it and do your queries. That clearly was not going to work then, and it’s certainly not going to work now.

So what’s this notion about orchestrating, metadata, and data virtualization? Why are some of these architectural approaches being sought out, especially when we start thinking about the real-time issues?

Holistic data set

Yuhanna: You have to look at the holistic data set. Today, most organizations or business users want to look at the complete data sets in terms of how to make business decisions. Typically, what they're seeing is that data has always been in silos, in different repositories, and different data segregations. They did try to bring this all together like in a warehouse trying to deliver this value.

But then the volumes of data, the real-time data needs are definitely a big challenge. Warehouses weren't meant to be real-time. They were able to handle data, but not in real time.

So this whole data segregation delivers an even better framework for delivering real-time data, and the right data, to consumers, to processes, and to applications -- whether it's structured, semi-structured, or unstructured data, all coming together from different sources, not only on-premise but also off-premise, such as partner data and marketplace data.

We talked about this many years ago and called it the information fabric, which is basically data virtualization that delivers this whole segregation of data in that layer, so that it can be consumed by different applications as a service, all delivered in a real-time manner.

Now, an important point here is that it's not just read-only; you can also write back through this virtualized layer, so that changes get back to the underlying data.

Definitely, things have changed with this new framework, and there are solutions out there that offer the whole framework -- not just accessing and integrating data, but also metadata, security, and transformation.

Gardner: How about that, Todd Brinegar? When we think about a fabric, when we think about trying to access data regardless of where it lives and get it closer to real time, what are the architectural approaches that you think are working better? What are you putting in place yourselves to try to solve this issue?

Brinegar: It's a great lead-in from Noel, because this is exactly the fabric and the framework that Enterprise Enabler, Stone Bond's integration technology, is built on.

What we've done is look at it from a different approach than traditional integration. Instead of taking old technologies and modifying them linearly to effect an integration -- bringing data into a staging database, then doing a transformation, then massaging it -- we've looked at it three-dimensionally.

We attach with our AppComms, which are our connectors, to the metadata layer of an application. We don't put an agent within the application. We get the metadata -- the data about the data. We separate that data from multiple sources -- unlimited sources -- and orchestrate it into the view that a client wants. It could be Salesforce.com, SharePoint, a portal, Excel spreadsheets, or anything that they're used to consuming that data in.
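
Enterprise Enabler's internals aren't public, so what follows is only a generic, hypothetical sketch of the pattern Brinegar describes: per-source metadata drives a federated join across live sources into a consumer-facing view, with no staging database in between:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SourceMetadata:
    """Describes a source: a live fetch function and its join-key field."""
    name: str
    fetch: Callable[[], list[dict]]  # live connector; nothing is staged
    key_field: str

def federate(sources: list[SourceMetadata], view_fields: list[str]) -> list[dict]:
    """Join rows from all sources on their key fields into one virtual view."""
    merged: dict[str, dict] = {}
    for src in sources:
        for row in src.fetch():
            key = str(row[src.key_field])
            merged.setdefault(key, {}).update(row)
    # Project only the fields the consuming view (portal, spreadsheet) wants.
    return [{f: rec.get(f) for f in view_fields} for rec in merged.values()]

if __name__ == "__main__":
    # Stand-ins for live connectors to, say, a CRM and an ERP system.
    crm = SourceMetadata("crm", lambda: [{"cust_id": 1, "name": "Acme"}], "cust_id")
    erp = SourceMetadata("erp", lambda: [{"cust_id": 1, "open_orders": 3}], "cust_id")
    print(federate([crm, erp], ["name", "open_orders"]))
    # -> [{'name': 'Acme', 'open_orders': 3}]
```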

Actionable data

Gardner: Just to be clear, Todd, your architecture and solution approach is not only for access for analysis, for business intelligence (BI), for dashboards and insights -- but this is also for real-time running application sets. This is actionable data?

Brinegar: Absolutely. With Enterprise Enabler, we're not only a data-integration tool, we're an application-integration tool. So we are EAI/ETL; we cover that full spectrum of integration. And as you said, it is a real-time solution, with the ability to access and act on that information in real time.

Gardner: We described why this is a problem and why it's getting worse. We've looked at one approach to ameliorating these issues. But I'm interested in what you get if you do this right.

Let's go back to Noel. For some of the companies that you work with at Forrester, that you are familiar with, the enterprises that are looking to really differentiate themselves, when they get a better grasp of their data, when they can make it actionable, when they can pull it together from a variety of sources, old and new, on-premises and off-premises, how impactful is this? What sort of benefits are they able to gain?

Yuhanna: The good thing about data virtualization is that it's not just a single benefit. There are many, many benefits of data virtualization, and there are customers who are doing real-time BI with data virtualization. As I mentioned, there are drawbacks and limitations in some of the older approaches, technologies, and architectures we've used for decades.

We want real-time BI in the sense that you can't just wait a day for a report to show up. You need it every hour or every minute. So these are important decisions you've got to make.

Real-time BI is definitely one of the big drivers for data virtualization, but so is having a single version of the truth. As you know, more than 30 percent of data in an organization is duplicated. That's a very conservative number. Many people don't know how much data is duplicated.

And you have different duplications of data -- customer data, product data, or internal data. There are many different types of data that get duplicated. Then the data has a quality issue, because you may change customer data in one application, which touches one database, but the other database is not synchronized. What you get is inconsistent data, and customers and other business users don't really value the data anymore.

A single version of the truth is a very important deliverable from these solutions -- something that has never been possible before unless you had one single database, and most organizations have multiple databases.

There's also the whole dashboard angle. You want to get data from different sources and present business value to the consumers -- to the business users, what have you. And in other cases, like enterprise search, you're able to search data very quickly.

Simpler compliance

Imagine an auditor walking into an organization and wanting to look at data for a particular event, activity, or customer, searching across a thousand resources. It could be a nightmare. Through data virtualization, the compliance initiative becomes a lot simpler.

Then you're doing things like content-management applications, which need data delivered in federation, integrating data from many sources to present more valuable information. Also, smartphones and mobile devices want data from different systems, all tied together for the consumers and business users effectively.

So data virtualization has quite a strong value proposition and, typically, organizations get the return on investment (ROI) within six months or less with data virtualization.

Gardner: Todd, at Stone Bond, when you look to some of your customers, what are some of the salient paybacks that they're looking for? Is there some low-hanging fruit, for example? It sounds from what Noel said that there are going to be payoffs in areas you might not even have anticipated, but what are the drivers? What are the ones that are making people face the facts when it comes to data virtualization and get going with it?

Brinegar: With Stone Bond and our technology, Enterprise Enabler, the ability to virtualize, federate, and orchestrate, all in real time, is a huge value. The biggest thing, though, is time to value. How quickly can they get the software configured and operational within their enterprise? That is really the key that's driving a lot of our clients' actions.

When we do an installation, a client can be up and operational, doing their first integration transformations, within the first day. That's a huge time-to-value benefit for that client. Then they can be fully operational with complex integration in under three weeks. That's really astounding in the marketplace.

I have one client that, on one single project, calculated $1.5 million in personnel cost savings in the first year. That's not even taking into account the technology they may be displacing by putting in Enterprise Enabler. Those are huge components.

Gardner: How about some examples Todd, use cases? I know sometimes you can name companies and sometimes you can't, but if you do have some names that you can share about what the data virtualization value proposition is doing for them, great.

Brinegar: HP is a great example. HP runs Enterprise Enabler in their supply chain for their Enterprise Server Group. That group provides data to all the suppliers within the Enterprise Server Group on an on-time basis.

They're able to build on demand and take care of their financials in the manufacturing of the servers much more efficiently than they ever have. They were seeing, I believe, a 10-times return on investment within the first year. That's a huge cost benefit for that organization, and it's really kept them a great client of ours.

We do quite a bit of work in the oil business and the oil-field services business, and each one of our clients has experienced a faster ROI and a lower total cost of ownership (TCO).

We recently announced that most of our clients experience a 300 percent ROI in the first year they implement Enterprise Enabler. CenterPoint Energy is a large client of Stone Bond, and they use us for the strategic transformation of how they handle their data.

How to begin

Gardner: Let's go back to Noel. When it comes to getting started, because this is such a big problem, it can feel like trying to boil the ocean, with all the different data types and the legacy involvement. Do you have a sense of where companies that are successful at this have begun?

Is there a pattern or a methodology that helps them get moving toward some of these returns Todd is talking about -- toward data virtualization getting these assets into the hands of the people who can work with them? Any thoughts about where you get started, where you begin your journey?

Yuhanna: One approach is taking an issue -- an application-specific strategy -- and building blocks on that; the other is going out and looking at an enterprise-wide strategy. On the enterprise-wide side, I know that some of the large organizations in financial services, retail, and sales are starting to embark on looking at all of this data in a more holistic manner:

"I've got customer data that is all over the place. I need to make it more consistent. I need to make it more real-time." Those are the things that I'm dealing with, and I think those are going to be seen more in the coming years.

Obviously, you can't boil the ocean, but you want to start with the data that is most valuable, and this comes back to the point you raised about the right data. Start with the right data -- look at the data that is being shared and consumed by many business users and that's going to be valuable for the business itself.

The important thing is also that you're building blocks on the solution. You can certainly leverage some existing technologies if you want to, but I would recommend looking at the newer technologies, because they are faster. They do a lot of caching. They do a lot of faster integration.
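
To make the caching point concrete -- a generic sketch, not any specific vendor's engine -- a virtualization layer can memoize federated query results for a short time-to-live, so repeated reads (a dashboard refreshing, say) skip the slow multi-source round trip:

```python
# Generic sketch of result caching in a data virtualization layer:
# federated query results are held for a short TTL so repeated reads
# skip the expensive calls out to every backend source.
import time

class CachingVirtualLayer:
    def __init__(self, fetch_fn, ttl_seconds=60):
        self.fetch_fn = fetch_fn       # runs the real federated query
        self.ttl = ttl_seconds
        self.cache = {}                # query -> (expires_at, rows)

    def query(self, q):
        hit = self.cache.get(q)
        if hit and hit[0] > time.monotonic():
            return hit[1]              # fresh cached result
        rows = self.fetch_fn(q)        # slow path: hit every source
        self.cache[q] = (time.monotonic() + self.ttl, rows)
        return rows

def slow_federated_query(q):
    time.sleep(0.5)                    # stand-in for multi-source I/O
    return [("Acme", 1200.0)]

layer = CachingVirtualLayer(slow_federated_query, ttl_seconds=5)
layer.query("top customers")           # ~0.5 s: goes out to the sources
layer.query("top customers")           # near-instant: served from cache
```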

As Todd was mentioning, quick ROI is important. You don't have to wait a year trying to integrate data. So those are critical for organizations going forward. But you also have to look at security, availability, and performance. All of these are critical when you're making decisions about what your architecture is going to look like.

Gardner: Noel, you do a lot of research at Forrester. Are there any reports, white papers, or studies that you could point to that would help people as they are starting to sort through this to decide where to start, where the right data might be?

Yuhanna: We've actually done extensive research on this topic over the last four or five years. If you look at Information Fabric, that's a reference architecture we've told customers to use when building data virtualization themselves. You can build the data virtualization layer yourself, but it will take a couple of years and it's a bit complex to build, and I think that's why packaged solutions are better at that.

But the Information Fabric reports are there. Also, information as a service is something we've written about -- best practices, use cases, and vendor solutions around this topic. So information as a service is something customers could look at to gain understanding.

Case studies

We have use cases and case studies that talk about the different types of deployments, whether it's a real-time BI implementation, a single version of the truth, fraud detection, or other types of environments. So we definitely have case studies as well.

There are case studies, reference architectures, and even product surveys, which talk about all of these technologies and solutions.

Gardner: Todd, how about at Stone Bond? Do you have some white papers or research, reports that you can point to in order to help people sort through this and perhaps get a better sense of where your technologies are relevant and what your value is?

Brinegar: We do. On our website, stonebond.com, we have our CTO Pamela Szabó's blog, which offers a great perspective on data, big data, and the changing face of data usage and virtualization.

I wish everybody would explore the different opportunities and the different technologies that are out there for integration, and really determine not just what you need today -- that's important -- but what you will need tomorrow. What's the technology you're going to carry forward, and how much will the TCO be as you move forward? Really make that value decision beyond the one specific project, because you're going to live with the solution for a long time.

Gardner: Very good. We've been listening to a sponsored podcast discussion on the need to make sense of the deluge and the complexity of data and information swirling in and around modern enterprises. We've also looked at how better data access can lead to improved integration of all information into approachable resources for actual business activities and intelligence.

I want to thank our guests, Noel Yuhanna, Principal Analyst at Forrester Research. Thanks so much, Noel.

Yuhanna: Thanks a lot.

Gardner: And also Todd Brinegar, the Senior Vice President of Sales and Marketing at Stone Bond Technologies. Thanks to you too, Todd.

Brinegar: Much appreciated. Thank you very much, Dana. Thank you very much, Noel.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks again for listening, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: Stone Bond Technologies.

Transcript of a BriefingsDirect podcast on how businesses can better manage and exploit their exploding data via new technologies that provide meta-data-based data integration and management. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

You may also be interested in:

Wednesday, December 14, 2011

Case Study: How SEGA Europe Uses VMware to Standardize Cloud Environment for Globally Distributed Game Development

Transcript of a BriefingsDirect podcast on how SEGA Europe has moved to a more secure and scalable VMware cloud solution for its worldwide development efforts.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Dana Gardner: Hi, this is Dana Gardner, Principal Analyst at Interarbor Solutions, and you're listening to BriefingsDirect.

Today, we present a sponsored podcast discussion on how a major game developer in Europe is successfully leveraging the hybrid cloud model.

We’ll learn how SEGA Europe is standardizing its cloud infrastructure across its on-premises operations, as well as with a public cloud provider. The result is a managed and orchestrated hybrid environment to test and develop multimedia games, one that dynamically scales productively to the many performance requirements at hand.

We’re joined by a systems architect with SEGA in London to learn more about how the hybrid approach to multiple, complementary cloud instances is meeting SEGA’s critical development requirements in a new way. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Please join me now in welcoming Francis Hart, Systems Architect at SEGA Europe. Welcome to the podcast, Francis.

Francis Hart: Hi.

Gardner: We’re all very familiar with the amazing video games that are being created nowadays. And SEGA of course is particularly well-known for the Sonic the Hedgehog franchise going back a number of years, and I have to tell you, Francis, my son is a big fan of those games.

But I'm curious about how, behind the scenes, these games are made. How they come into being and what are some of the critical requirements that you have from a systems architecture perspective when developing these games?

Hart: We have a lot of development studios across the world. We're working on multiple projects. We need to ensure that we supply them with a highly scalable and reliable solution in order to test, develop, and produce the game and the code in time.

Gardner: And how many developers are you working with there at SEGA Europe?

Hart: We have a number of different development studios. We’re probably looking at thousands of individual developers across the world.

Gardner: For those folks who are not familiar with the process, there is the creation of the code, there is the test and debug, and builds. It's quite complicated. There's a lot going on, many different moving parts. How did you start approaching that from your IT environment, from building the right infrastructure to support that?

Targeting testing

Hart: One of the first areas we targeted very early on was the last step in that process -- testing -- arguably one of the most time-consuming parts of the development cycle. It happens pretty much all the way through as well, to ensure that the game behaves as it should, that it's tested, and that the customer gets the end-user experience they require.

The biggest technical goal we had was being able to move large amounts of data -- uncompiled code -- to the staff in different testing offices around the world. Historically, we had some major issues in securely moving that data around, and that's why we started looking into cloud solutions.

Gardner: How did you use to do it? What was the old-fashioned way?

Hart: For very, very large game builds -- and we're talking game builds above 10 gigabytes -- it ended up being couriered within the country and sent by overnight file transfer outside of the country. So, very old-school methods.

We needed both to secure that process, to make sure we understood where the game builds were, and to understand exactly which version each of the testing offices was using. So it's about gaining control, but also providing more security.
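
A minimal sketch of that control idea -- hypothetical names and paths, not SEGA's actual tooling -- is a delivery manifest that records a checksum and version for every build shipped to each office, so an arrival can be verified and you always know who holds which version:

```python
# Hypothetical sketch: track which build version each testing office
# holds, keyed by a SHA-256 checksum so every transfer can be verified.
import hashlib

def checksum(path, chunk_size=1 << 20):
    """SHA-256 of a potentially multi-gigabyte build, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

manifest = {}  # office -> {"version": ..., "sha256": ...}

def record_delivery(office, version, build_path):
    """Run at the sending side when a build ships to an office."""
    manifest[office] = {"version": version, "sha256": checksum(build_path)}

def verify_delivery(office, received_path):
    """Run at the receiving office: did the bytes arrive intact?"""
    return checksum(received_path) == manifest[office]["sha256"]

# Usage (paths are placeholders):
# record_delivery("London QA", "sonic-1.0.3", "/builds/sonic-1.0.3.pkg")
# verify_delivery("London QA", "/incoming/sonic-1.0.3.pkg")  # True/False
```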

Gardner: Clearly one of the requirements here is to move large files rapidly across geographic distances, but with security, management control, governance, and so forth. But as I understand it, you're also dealing with a peak-and-trough issue in the infrastructure itself. You need to ramp up a lot of servers to do the build, but then they sit there, essentially unproductive, between builds. How did you flatten that out or manage the requirements around the workload support?

Hart: Typically, in the early stages of development, there's a fair amount of testing going on, but it tends to be quite small -- in the number of staff involved and the number of build iterations. Later on, when the game reaches the end of its product lifecycle, we're talking multiple game iterations a day, and the game size has grown very large by that point. The number of people involved in testing, to meet the deadlines and get the game shipped on date, runs into the hundreds and hundreds of staff.

Gardner: How has virtualization and moving your workloads into different locations evolved over the years?

Hart: We work on the idea of having a central platform for a lot of these systems, and using virtualization to do that allowed us to scale out at certain times. Historically, we always had an on-premises VMware platform to do this. Very recently, we've been looking at ways to use that resource within a cloud, to cut down on some of the capital expenditure (capex) but also to remain a little bit more agile with some of the larger titles, especially the online games that are coming around.

Gardner: Right. So we're seeing a lot more role-playing games (RPGs), and games themselves in the cloud. That must influence what you're doing in terms of thinking about your future direction.

Hart: Absolutely. We've been looking at things like the hybrid cloud model with VMware as a development platform for our developers. That's really what we're working on now. We have a number of games in the pipeline that have been developed on the hybrid cloud platform. It gives the developers a platform that exactly mirrors what it will eventually be in the online space, through ISPs like Colt, which hosts the virtual cloud platform.

Gardner: So if the end destination for the runtime, or the operational runtime, for the game is going to be the cloud, it makes sense to live "of, for, and by" the cloud, I suppose. It’s more complementary. It’s always going to be there, right?

Gaining cost benefits

Hart: Yes. And one of the benefits we're seeing in the VMware offering is that, whatever data center in the world it sits in, it's the same standard platform. That also allows us to leverage multiple ISPs, and hopefully gain some cost benefits from that.

Gardner: Francis, tell me a little bit about the pilot project. No one is going to jump up and put their mission-critical activities into a cloud environment, especially a hybrid environment, overnight. So the crawl-walk-run approach seems to be the most prudent way. Tell me a little bit about what your goals were and what you've been able to attain even in a pilot setting?

Hart: Very early on, we were in discussions with Colt and also VMware to understand what technology stack they were bringing to the cloud. We started doing a proof of concept with VMware and a professional services company, and together we were able to put together a proof of concept to distribute our game-testing code, which had previously relied on a very old-school distribution system. So anything better would improve the process.

There wasn't too much risk to the company. So we saw the opportunity of a hybrid cloud setup that allows us to have an internal cloud system to distribute the code to the majority of UK game testers and to leverage high bandwidth between all of our sites.

For the game-testing studios around Europe and the rest of the world, we could use a hosted version of the same service, up on the Colt vCloud Director (vCD) platform, to supply builds to trusted testing studios.

Gardner: When you approach this hybrid cloud model, it’s one thing to be able to technically do that, to have the standardization and to have the products in place that will support the workloads and the virtualization continuity, the similar environment. But what about managing that? What about having a view into what’s going on so that you know what aspects of the activity and requirements are being met and where? It must involve quite a bit of management?

Hart: Yes. The virtual cloud environment of vCloud Director has a web portal that allows you to manage a lot of this configuration in a central way. We're also using VMware vCloud Connector, a product that allows you to move vApps between different cloud data centers. Doing this allows us to manage it in one location and simply clone the same system to another cloud data center.

In that regard, the configuration very much was in a single place for us in the way that we designed the proof of concept. It actually helped things, and the previous process wasn’t ideal anyway. So it was a dramatic improvement.
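
For anyone who wants to script against such an environment rather than use the portal, here is a hedged sketch of logging in to the vCloud Director REST API using its 1.5-era conventions; the host, organization, and credentials are placeholders, and details may differ by version:

```python
# Hedged sketch of the vCloud Director REST API login flow (1.5-era
# conventions; the host, org, and credentials below are placeholders).
import requests

VCD = "https://vcloud.example.com"   # placeholder address
ACCEPT = {"Accept": "application/*+xml;version=1.5"}

# Log in: HTTP Basic auth as user@org against the sessions endpoint.
resp = requests.post(
    f"{VCD}/api/sessions",
    auth=("admin@ExampleOrg", "secret"),        # placeholders
    headers=ACCEPT,
)
resp.raise_for_status()
token = resp.headers["x-vcloud-authorization"]  # session token header

# Subsequent calls carry the token; for example, list organizations.
orgs = requests.get(
    f"{VCD}/api/org",
    headers={**ACCEPT, "x-vcloud-authorization": token},
)
print(orgs.status_code, orgs.text[:200])
```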

Gardner: Well, let’s dig into that a bit. What were some of the metrics of success, even on your pilots? I understand that you’re going to be expanding on that, but are there data points that we can look to whether it’s reduction in cost for servers, operation, security, time to development and test? What were some of the salient paybacks of doing development in this manner?

Hart: One of the immediate benefits was around the design process. It was very obvious that we were tightening up security within our build delivery to the testing studios. Nothing was with a courier on a bike anymore; it was a secured transaction between the two offices.

Risk greatly reduced

Also from a security perspective, we understood exactly what game assets and builds were in each location. So it really helped the product development teams to understand what was where and who was using what, and from a risk point of view, that's greatly reduced.

In terms of stats, the amount of data throughput is pretty large -- we've been moving terabytes pretty much weekly nowadays, now that we're going completely live with the distribution network.

So it's been a massive success. All of the UK testing studios are using the build delivery system day to day, and for the European ones, we've got about half the testing studios on board with the build delivery system now, and it's transparent to them.

Gardner: Francis, in moving to a hybrid environment, in practical terms, was there anything that appeared, that crept in, that you weren’t anticipating? Was there something about this that caught you by surprise -- either good or bad?

Hart: Not particularly. VMware was very good at helping us understand the technology, and that's one of the benefits of working with a professional services reseller. In terms of gotchas, there weren't too many. There were a lot of good surprises that came up and opened the door to a lot of other VMware technologies.

Now, we're also looking at automating a lot of processes with vCenter Orchestrator and other VMware products. That really gave us a good stepping stone into the VMware catalogue, rather than just vSphere, which we were using previously. It was very handy for us.

Gardner: I'd like to just pause here for a second. Your use of vSphere -- and I believe you're on 4.1, if my notes are correct -- has gotten you to a fairly high level of virtualization. That must have been an important stepping stone to the dynamic ability to ramp your environments and supporting infrastructure up and down, but also to the skills. I imagine there must have been a comfort zone with virtualization that you needed in order to move to the cloud level, too.

Hart: Absolutely. We already have a fair footprint in Amazon Web Services (AWS), and there it was a massive skills jump -- we needed to train staff in order to use that environment. With the VMware environment, as you said, we already have a large amount of skills in vSphere. We have a large team that supports our corporate infrastructure, and we've also got VMware in our co-located public environment. So it was very reassuring that the skills were immediately transferable.

Gardner: Let’s get back to what you’re going to be doing, now that this pilot has been successful. You’ve had some success with meeting your requirements, also getting some benefits that you weren't anticipating and that all important security control and governance aspect. What’s the next step? Where did you go with your initial stepping stone into hybrid cloud? How are you going to get into that run mode now that you've sort of walked and crawled?

Game release

Hart: As I mentioned before, the first part was dealing with the end of the process, and that was the testing and the game release process. Now, we’re going to be working back from that. The next big area that we’re actively involved in is getting our developers to develop online games within the hybrid environment.

So they’re designing the game and the game’s back-end servers to be optimal within the VMware environment. And then, also pushing from staging to live is a very simple process using the Cloud Connector.

Gardner: Well, that sounds a lot like what we know in the business as platform as a service (PaaS) where you are actually accomplishing much, if not all, of the development, test and deploy cycle -- the life-cycle of the applications in the cloud.

Hart: Absolutely. We're restructuring and redesigning the IT systems within SEGA to be more of a development operations team to provide a service to the developers and to the company.

Gardner: Great. I really appreciate your sharing your story with us, Francis. Now that you've done this a bit, any words of wisdom, 20/20 hindsight, that you might share with others who are considering moving more aggressively into private cloud, hybrid cloud, and ultimately perhaps the full PaaS value?

Hart: Just get some hands-on experience and play with the cloud stack from VMware. It’s inexpensive to have a go and just get to know the technology stack.

Gardner: Thanks. You've been listening to a sponsored podcast discussion on how a major game developer, SEGA, is leveraging the hybrid cloud model using the VMware cloud stack.

I'd like to thank our guest, Francis Hart, Systems Architect at SEGA Europe, based in London. Thanks again so much, Francis.

Hart: Thank you.

Gardner: This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks to our audience for joining us as well, and come back next time.

Listen to the podcast. Find it on iTunes/iPod. Download the transcript. Sponsor: VMware.

Transcript of a BriefingsDirect podcast on how SEGA Europe has moved to a more secure and scalable VMware cloud solution for its worldwide development efforts. Copyright Interarbor Solutions, LLC, 2005-2011. All rights reserved.

You may also be interested in: