Transcript
of a BriefingsDirect podcast on new solutions to solve the growing need for more
reliable and less cumbersome data backups, despite increasingly data-intensive
environments.
Dana Gardner: Hi, this is
Dana Gardner, Principal Analyst at
Interarbor Solutions, and you're listening to
BriefingsDirect.
Today, we present a sponsored podcast discussion on enterprise
backup, why it’s broken, and how to fix it. We'll examine some major areas where the backup of enterprise information and
data protection
are fragmented, complex, and inefficient. And then, we'll delve into
some new approaches that help simplify the data-protection process, keep
costs in check, and improve recovery confidence.
Here
to share insights on how data protection became such a mess and how new
techniques are being adopted to gain comprehensive and standard control
over the data lifecycle is
John Maxwell, Vice President of Product Management for Data Protection at
Quest Software, now part of
Dell. [Disclosure:
Quest Software is a sponsor of
BriefingsDirect podcasts.]
Welcome
back to the show, John.
John Maxwell: Hey, Dana. It’s great to be here.
Gardner: We're also here with
George Crump, Founder and Lead Analyst at
Storage Switzerland, an analyst firm focused on the
storage market. Welcome, George.
George Crump: Thanks for having me.
Gardner:
John, let’s start with you. How did we get here? Why has
something seemingly as straightforward as backup become so
fragmented and disorganized?
Maxwell: Dana, I
think it’s a perfect storm, to use an overused cliché. If you look back
20 years ago, we had heterogeneous environments, but they were much
simpler. There were
NetWare and
UNIX, and there was this new thing called
Windows.
Virtualization didn’t even really exist. We backed up data to tape, and a lot of data was in
terabytes, not
petabytes.
Flash forward to 2012, and there’s more heterogeneity than ever. You have stalwart databases like
Microsoft SQL Server and
Oracle, but then you have new apps being built on
MySQL.
You now have virtualization, and, in fact, we're at the point this year
where we're surpassing the 50 percent mark on the number of
servers worldwide that are virtualized.
Now we're even starting to see
people running multiple hypervisors, so it’s not even just one
virtualization platform anymore, either. So the environment has gotten
bigger, much bigger than we ever thought it could or would. We have
numerous customers today that have data measured in petabytes, and we
have a lot more applications to deal with.
And last, but not least, we now have more data that’s deemed
mission critical,
and by mission critical, I mean data that has to be recovered in less
than an hour. Surveys 10 years ago showed that in a typical IT
environment, 10 percent of the data was mission critical. Today, surveys
show that it’s 50 percent and more.
Gardner: George, did John leave anything out? From your perspective, why is it different now?
Crump:
A couple of things. I would dovetail into what he just mentioned about
mission criticality. There are definitely more platforms, and that’s a
challenge, but the expectation of the user is just higher. The term I
use for it is IT is getting "Facebooked."
High expectations
I've had many IT guys say to me, "One of the common responses I get from my users is, 'My
Facebook
account is never down.'" So there is this really high expectation on
availability, returning data, and things of that nature that probably
isn’t really fair, but it’s reality.
One of the
reasons that more data is getting classified as mission critical is just
that the expectation that everything will be around forever is much
higher.
The other thing that we forget sometimes is
that the backup process, especially a network backup, probably unlike
any other, stresses every single component in the infrastructure. You're
pulling data off of a local storage device on a server, it’s going
through that server
CPU and memory, it’s going down a
network card, down a network cable, to a switch, to another card, into some sort of storage device, be it disk or tape.
So
there are 15 things that happen in a backup and all 15 things have to
go flawlessly. If one thing is broken, the backup fails, and, of course,
it’s the IT guy’s fault. It’s just a complex environment, and I don’t
know of another process that pushes on all aspects of the environment in
one fell swoop like backup does.
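To put rough numbers on George's point, here is a minimal sketch, assuming 15 independent stages and hypothetical per-stage reliability figures, of how quickly small per-component failure rates compound into failed backup jobs:

```python
# Illustrative only: a network backup succeeds only if every stage in the
# chain (source disk, CPU/memory, NIC, cable, switch, target device, ...)
# completes cleanly. Per-stage reliabilities below are hypothetical.

def end_to_end_success(per_stage: float, stages: int = 15) -> float:
    """Probability that all independent stages complete without error."""
    return per_stage ** stages

for p in (0.999, 0.995, 0.99):
    print(f"per-stage {p:.3f} -> end-to-end {end_to_end_success(p):.3f}")
# per-stage 0.999 -> end-to-end 0.985
# per-stage 0.995 -> end-to-end 0.928
# per-stage 0.990 -> end-to-end 0.860
```

Even 99 percent reliability per component leaves roughly one backup job in seven failing end to end, which is why a single weak link so often surfaces first in the backup.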
Gardner: So the stakes are higher, the expectations are higher, and the scale, volume, and heterogeneity have all increased. What does this mean, John, for those tasked with managing this, or trying to get a handle on it as a process, rather than taking a technology-by-technology approach, really looking at it across the whole lifecycle? Has this now gone from being a technical problem to a management or process problem?
Maxwell: It's both, because there are two issues here. One, you expect today's storage administrator, or
sysadmin, to be a
database administrator (DBA), a VMware administrator, a UNIX sysadmin, and a Windows admin. That’s a lot of responsibility, but that’s the fact.
A lot of people think that they are going to have as deep a level of knowledge on how to recover a Windows server as they would an Oracle database. That's just not the case, and it's the same thing from a product perspective, from a technology perspective.
Is there really such a thing as a backup product, the Swiss Army knife, that does the best of everything? Probably not, because being the best at everything means different things to different accounts. It means one thing for the small to medium-size business (SMB), and it could mean something altogether different for the enterprise.
We've
now gotten into a situation where we have the typical IT environment
using multiple backup products that, in most cases, have nothing in
common. They have a lot of hands in the pot trying to manage data
protection and restore data, and it has become a tangled mess.
Gardner:
Before we dive a little bit deeper into some of these major areas, I'd
like to just visit another issue that’s very top of mind for many
organizations, and that’s security, compliance, and business continuity
types of issues, risk mitigation issues. George Crump, how important is
that to consider, when you look at taking more of a comprehensive or a
holistic view of this backup and data-protection issue?
Disclosure laws
Crump:
It's a really critical issue, and there are two ramifications. Probably
the one that strikes fear in the heart of every CEO on the planet is
all the disclosure laws that exist now that say that, when you lose a
customer’s data, you have to let him know. Unfortunately, probably the
only effective way to do that is to let
everybody know.
I'm
sure everybody listening to this podcast has gotten more than one
letter already this year saying their Social Security number has been
exposed, things like that. I can think of three
or four I've already gotten this year.
So there is the downside of legally having to admit you made a mistake, and then there are the legal requirements of retaining information in case of a lawsuit.
The traditional thing was that if I got a discovery motion filed
against me, I needed to be able to pull this information back, and that
was one motivator. But the bigger motivator is having to disclose that
we did lose data.
And there's a new one coming in. We're hearing about
big data,
analytics,
and things like that. All of that is based on being able to access old
information in some form, pull it back from something, and be able to
analyze it.
That is leading many, many organizations
to not delete anything. If you don't delete anything, how do you store
it? A disk-only type of solution forever, as an example, is a pretty
expensive solution. I know disk has gotten a lot cheaper, but forever,
that’s a really long time to keep the lights on, so to speak.
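As a back-of-the-envelope illustration of George's cost concern, here is a small sketch, with hypothetical growth numbers, of how a never-delete, disk-only policy compounds:

```python
# Hypothetical: if nothing is ever deleted and new data grows 30% a year,
# retained capacity is the cumulative sum of each year's additions.

def retained_tb(first_year_tb: float, annual_growth: float, years: int) -> float:
    total, yearly = 0.0, first_year_tb
    for _ in range(years):
        total += yearly            # keep everything from this year forever
        yearly *= 1 + annual_growth
    return total

print(f"{retained_tb(100, 0.30, 10):,.0f} TB retained after 10 years")
# ~4,262 TB retained from a 100 TB first year at 30% annual growth
```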
Gardner:
Let's look at this a bit more from the problem-solution perspective.
John, you've gotten a little bit into this notion that we have multiple
platforms, we have
operating systems,
hypervisors, application types, even appliances. What's the problem here and how do we start to develop a solution approach to it?
Maxwell:
The problem is we need to step back, take inventory of what we've got,
and choose the right solution to solve the problem at hand, whether
you're an SMB or an enterprise.
But the biggest thing we have to address is, with the amount and complexity of the data, how can we make sysadmins, storage administrators, and DBAs productive, and how can we get them all on the same page? Why does each one of these roles in IT have to use different products?
George and I
were talking earlier. One of the things that he brought up was that in a
lot of companies, data is getting backed up over and over by the DBA,
the VMware administrator, and the storage administrator, which is really
inefficient. We have to look at a holistic approach, and that may not
be one-size-fits-all. It may be choosing the right solutions, yet providing a centralized means for administration, reporting, monitoring, etc.
Gardner: George, you've been around for a while in this business, as have I, and there is a little bit of déjà vu here, where we're bringing a system-of-record approach to a set of disparate technologies that were, at one time, best of breed and necessary, but are increasingly part of a broader solution or process benefit.
So we understand the maturation process, but
is there anything different and specific about backup that makes this
even harder to move from that point solution, best of breed mentality,
into more of a comprehensive process standardization approach?
Demands and requirements
Crump:
It really ties into what John said. Every line of business is going to
have its own demands and requirements. To expect not even a backup
administrator, but an Oracle administrator that’s managing an Oracle
database for a line of business, to understand the nuances of that
business and how they want to keep things is a lot to ask.
To
tie into what John said, when backup is broken, the default survival
mechanism is to throw everything out, buy the latest enterprise
solution, put the stake in the ground, and force everybody to centralize
on that one item. That works to a degree, but in every project we've
been involved with, there are always three or four exceptions. That
means it really didn’t work. You didn't really centralize.
Then
there are covert operations of backups happening, where people are
backing up data and not telling anybody, because they still don't trust
the enterprise application. Eventually, something new comes out. The most immediate example is virtualization, which spawned the birth of several virtualization-specific applications. So bringing all that back in again becomes very difficult.
I agree
with John. What you need to do is give the users the tools they want.
Users are too sophisticated now for you to say, "This is where we are
going to back it up and you've got to live with it." They're just not
going to put up with that anymore. It won't work.
So give them the tools that they want. Centralize the process, but not the actual software. I think that's really the way to go.
Gardner:
So we recognize that one size fits all probably isn’t going to apply
here. We're going to have multiple point solutions. That means
integration at some level or multiple levels. That brings us to our next
major topic. How do we integrate well without compounding the complexity and the problem set? John?
Maxwell:
We've been working on this now for almost two years here at Quest, and
now at Dell, and we are
launching in November,
something called
NetVault XA.
“XA” stands for
Extended Architecture. We have a portfolio of very rich
products that span the SMBs and the enterprise, with focus on virtual
backup, heterogeneous backup, instantaneous snapshots and deep
application recovery, and we’re keenly interested in leveraging those
technologies for the DBAs and sysadmins in ways that make their lives
easier and make sure they are more productive.
NetVault XA solves some really big issues. First of all, it unifies the user experience across products, and by user, I mean the sysadmin, the DBA, and the storage administrator. The initial release of
NetVault XA will support both our
vRanger and
NetVault Backup, as well as our
NetVault SmartDisk product, and next year, we'll be adding even more of our products under NetVault XA as well.
So now we've provided a common means of administration. We have one
UI.
You don’t have to learn something different. Everyone can work on the
same product, yet based on your login ID, you will have access to
different things, whether it's data or capabilities, such as restoring
an Oracle or SQL Server database, or restoring a
virtual machine (VM).
That's
a common UI. A lot of vendors right now have a lot of solutions, but
they look like they're from three, four, or five different companies. We
want to provide a singular user experience, but that's just really the
icing on the cake with NetVault XA.
If we go down a little deeper into NetVault XA, once it's installed alongside vRanger, NetVault, or both, it's going to self-identify that vRanger or NetVault environment, and it's going to allow you to manage it the way that you have already set it up.
New approach
We're
really delivering a new approach here, one we think is going to be
unique in the industry. That's the ability to logically group data and
applications within lines of business.
You gave an
example earlier of Oracle. Oracle is not an application. Oracle is a
platform for applications, and sometimes applications span databases,
file systems, and multiple servers. You need to be looking at that from a
holistic level, meaning what makes up application A, what makes up
application B, C, D, etc.?
Then, what are the service levels for those applications? How mission critical are they? Are they in that 50 percent of data that we've seen from surveys, or are they data where restoring from a week ago wouldn't matter? Then again, it's having one tool that everyone can use. So you now have a whole different user experience, and you're taking a whole different approach to data protection.
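As a hypothetical sketch of that grouping idea (the field names and structure here are invented for illustration, not NetVault XA's actual schema), a protection policy declared per business application, rather than per server or per database, might look like this:

```python
# Hypothetical sketch: protection policy is declared per business application,
# and one application can span databases, file systems, and virtual machines.

payroll = {
    "application": "payroll",
    "line_of_business": "finance",
    "components": [                      # one app can span platforms
        {"type": "oracle_db",  "host": "ora-prod-01", "name": "PAYROLL"},
        {"type": "filesystem", "host": "nfs-02",      "path": "/exports/payroll"},
        {"type": "vm",         "host": "esx-cluster", "name": "payroll-web"},
    ],
    "service_level": {
        "rto_minutes": 60,               # "mission critical": recover < 1 hour
        "rpo_minutes": 15,
        "retention_days": 2555,          # ~7 years for compliance holds
    },
}

def is_mission_critical(app: dict) -> bool:
    """Mission critical per the definition above: recoverable within an hour."""
    return app["service_level"]["rto_minutes"] <= 60

print(is_mission_critical(payroll))  # True
```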
Gardner: This is
really interesting. I've seen a demo of this and I was very impressed.
One of the things that jumped out at me was the fact that you're not
just throwing a GUI overlay on a variety of products and calling it
integration.
There really seems to be a drilling down
into these technologies and surfacing information to such a degree that
it strikes me as similar to what
IT service management (ITSM)
did for managing IT systems at a higher level. We're now bringing that to a discrete portion of IT: backup and recovery. Does that sound about right, George, or did I overstate it?
Crump: No, that's
dead-on. The benefits of that type of architecture are going to be
substantial. Imagine if you are the
vRanger programmer, when all this
started. Instead of having to write half of the backend, you could just
plug into a framework that already existed and then focus most of your
attention on the particular application or environment that you are
going to protect.
You can be releasing the equivalent
of vRanger 6 on vRanger 1, because you wouldn’t have to go write this
backend that already existed. Also, if you think about it, you end up
with a much more reliable software product, because now you're building
on a library class that will have been well tested and proven.
Say you want to implement
deduplication
in a new version of the product or a new product. Instead of having to
rewrite your own deduplication engine, just leverage the engine that's
already there.
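To make the architectural point concrete, here is a generic sketch of that plugin-on-a-framework pattern (not Quest's actual code): each new backup plugin writes only the application-specific capture step and inherits a shared, already-tested backend, such as a dedupe engine:

```python
# Generic sketch of the framework pattern George describes. Class and method
# names are invented for illustration.

from abc import ABC, abstractmethod
import hashlib

class DedupeEngine:
    """Shared, already-tested service that every plugin reuses."""
    def store(self, chunk: bytes) -> str:
        return hashlib.sha256(chunk).hexdigest()  # content-addressed chunk id

class BackupPlugin(ABC):
    def __init__(self, engine: DedupeEngine):
        self.engine = engine

    @abstractmethod
    def snapshot(self) -> bytes:
        """The only part a new plugin must write: app-consistent capture."""

    def run(self) -> str:
        # The framework handles storage, dedupe, scheduling, and transport.
        return self.engine.store(self.snapshot())

class VMwarePlugin(BackupPlugin):
    def snapshot(self) -> bytes:
        return b"...vmdk delta..."  # placeholder for hypervisor-level capture

print(VMwarePlugin(DedupeEngine()).run()[:12])
```

George's deduplication example falls out naturally here: a new product reuses the existing, proven engine instead of rewriting its own.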
Gardner: John, it sounds a little
bit like we're getting the best of both worlds, that is to say the
ability to support a lot of point solutions, allowing the tools that the
particular overseer of that technology wants to use, but bringing this
now into the realm of policy.
It's something you can
apply rules to, that you can bring into concert with other IT management
approaches or tasks, and then gain better visibility into what is
actually going on and then tweak. So amplify for me why this is
standardization, but not at the cost of losing that Swiss Army knife
approach to the right tool for the right problem?
One common means
Maxwell: First of all, by having one common means, whether you're a DBA, a sysadmin, a VMware administrator, or a storage administrator, you're all on the same page. You can have people all buying into one way of doing things, so we don't have this data being backed up two or three times.
But the other thing that you get, and this is a big issue now, is protecting multiple sites. When we talk about multiple sites, people sometimes assume that just means multiple data centers. But what about all those remote and branch offices? That right now is a big issue that we see customers running into.
The
beauty of NetVault XA is I can now have various solutions implemented,
whether it's vRanger running remotely or NetVault in a branch office,
and I can be managing it. I can manage all aspects of it to make sure
that those backups are running properly, or make sure replication is
working properly. It could be halfway around the country or halfway
around the world, and this way we have consistency.
Speaking
of reporting, as you said earlier, what about a dashboard for
management? One of our early users of NetVault XA is a large
multinational company with 18 data centers and 250,000 servers. They
have had to dedicate people to write service-level reports for their
backups. Now, with NetVault XA, they can literally give their IT
management, meaning their CIO and their CTOs, login IDs to NetVault XA,
and they can see a dashboard that’s been color coded.
It
can say, "Well, everything is green, so everything is protected,"
whether it's the Linux servers, Oracle databases, Exchange email,
whatever the case. So by being able to reduce that level of complexity
into a single pane of glass -- I know it's a cliché, but it really is --
it's really very powerful for large organizations and small.
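The rollup behind such a dashboard can be sketched very simply (statuses and field names here are invented for illustration): per-job results reduce to one red, yellow, or green light per platform:

```python
# Illustrative only: reduce per-job backup results to a single color status.

def rollup(jobs: list[dict]) -> str:
    """Green if every job met its SLA, red if any failed, else yellow."""
    if any(j["status"] == "failed" for j in jobs):
        return "red"
    if all(j["status"] == "ok" and j["met_sla"] for j in jobs):
        return "green"
    return "yellow"

oracle_jobs = [
    {"status": "ok", "met_sla": True},
    {"status": "ok", "met_sla": False},  # completed, but outside its window
]
print(rollup(oracle_jobs))  # yellow
```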
Even if you have two or three locations and you're only 500 employees, wouldn't it be nice to have the ability to look at your backups, your replicas, and your snapshots, whether they're in the data center or in branch offices, and whether you're a sysadmin, DBA, or storage administrator, to be using one common interface and one common set of rules, so that everyone is basically on the same page?
Gardner: Let's revisit the issue that George was talking about,
eDiscovery,
making sure that nothing falls through the cracks, because with
Murphy’s Law rampant, that's going to be the thing that somebody is
going to do eDiscovery on. It seems to me you're gaining some
confidence, some sense of guarantees, that whatever
service-level agreements (SLAs) and compliance regulatory issues are there, you can start to check these off and gain some automated assurance.
Help me better understand, John, why NetVault XA has, for lack of a better word, some sort of a confidence benefit to it?
Maxwell: Well, the thing is that not only have we built things into NetVault XA so that it will auto-discover how you have vRanger and NetVault set up, and other products down the road, but it's also going to give you some visibility into your environment. How many VMs are out there? Are all those VMs getting protected?
I was just at
VMworld Barcelona
a couple of weeks ago, and VMware has made it incredibly simple now to
provision VMs and the associated storage. You've got people powering up
and powering down VMs at will. How do you know that you're protecting
them?
Dispersed operations
Also
at an event this week in Europe, I ran into a user in an emerging
country in Eastern Europe, and they have over 1,000 servers, most of
which are not being protected. It's a very dispersed operation, and
people can implement servers here and there, and they don't know what
half the stuff is.
So it's having a means to take an
inventory and ensure that the servers are being maintained, that
everything is being protected, because next to your employees, your data
is the most important asset that you have.
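The inventory check John describes is, at its core, a set difference between what exists and what the backup catalog covers; a minimal sketch, with hypothetical data sources, looks like this:

```python
# Minimal sketch: diff what exists against what the backup catalog covers.
# Both inventories are hypothetical; real ones might come from a CMDB,
# a hypervisor API, and backup job catalogs.

discovered = {"web-01", "web-02", "db-01", "db-02", "lab-77"}  # what exists
protected  = {"web-01", "db-01", "db-02"}                      # what's backed up

unprotected = discovered - protected
print(sorted(unprotected))  # ['lab-77', 'web-02'] -> flag for follow-up
```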
Data is everywhere now. It's in mobile devices. It certainly could be in cloud-based apps. That's one of the things that we didn't talk about. At Quest, we use seven software-as-a-service (SaaS)-based applications that are big parts of our business, whether it's Salesforce.com, our helpdesk systems, or even Office 365.
This is mission-critical corporate data that doesn’t run in our own
data center. How am I protecting that? Am I even cognizant of it?
The
cloud has made things even more interesting, just as virtualization has
made it more interesting over the past couple of years. With NetVault
XA, we give you that one single pane of glass with which you can report,
analyze, and manage all of your data.
Gardner:
Do we have any instances where we have had users, beta customers
perhaps, putting this to use, and do we have any metrics of success?
What are they getting from it? It's great to have confidence, it's great
to have a single view, but are they reducing expenses? Do they have a
real measurement of how their complexity has been reduced? What are the
tangibles, John?
Maxwell: Well, one of the tangibles is the example of the customer that has 18 data centers, because they have a finite-sized group that manages the backups. That team is not going to grow. So if they have to have two or three people on that team just working on writing reports, going out and looking manually at data, and creating their own custom reports, that's not a good use of their time.
Now, those people can do things
that they should be doing, which is going out and making sure that data
is being protected, going out and testing
disaster recovery (DR) plans, and so forth. Some people were tasked with jobs that aren’t very much fun, and that’s now all been automated.
Now they can get down to brass tacks, which is ensuring that, for an enterprise with a quarter million servers, everything is protected, and that it's protected the way people expect it to be protected, meaning the service levels they have in place can be met.
We also have to remember that NetVault XA brings many benefits to our vRanger customer base as well. We have accounts with maybe one home office and maybe two or three remote labs or remote sales offices. We've talked to a couple of vRanger customers who now implement vRanger remotely. In these shops, there is no storage administrator. It's the sysadmin, the VMware administrator, or the Windows administrator. So they didn't have the luxury, like the big accounts, of having dedicated people for that.
Now, this person can focus on ensuring that
operating systems are maintained, working with end users. A lot of the
tasks they were previously forced to do took up a lot of their time.
Now, with NetVault XA, they can very quickly look at everything, give
that health check that everything is okay, and control multiple
locations of vRanger from one central console.
Mobile devices
Gardner: Just to be clear, John, this console is something you can view as a web interface, and I'm assuming therefore also through mobile devices. I'm going to guess that at some point, there will perhaps be even a more native application for some of the prominent mobile platforms.
Maxwell: It’s funny that you mentioned that. This is an
HTML5-based application. So it's very new, very fresh, and very graphical. If you look at the UI, it was designed with
tablets and
laptops in mind. It's gotten to where you can do controls with your thumbs, assuming you're running this on a tablet.
In-house, and with early-support customers, you can log into this remotely via laptops or tablets. We even have some people using it on mobile phones, even though we're not quite there yet. I'm talking about the form factor of how the screens light up, but we will definitely be going that way. So a sysadmin or storage administrator can have at their fingertips the status of what's going on in the data-protection environment.
What's nice is because this is a thin
client, a web UI, you can define user IDs not only for the sysadmins and
DBAs and storage administrators, but like I said earlier, IT
management.
So if your boss, or your boss' boss, wants to dial in and see the health of things, how much data you're protecting, how much data is being replicated, what data is being protected up in the cloud and what's on-prem, all of that sort of stuff, they can now have a dashboard approach to seeing it all. That's going to make everyone more productive, and it's going to give them a better sense that the data is being protected, so they can sleep at night.
Gardner:
George, we spoke earlier about these natural waves of maturation that
have occurred throughout the history of IT. As you look at the landscape
for data protection, backup, or storage, how impactful is this in that
general maturation process? Is Quest, with its NetVault XA, taking a baby step here, or is this something that moves us toward a fuller, more mature outcome, when it comes to the data lifecycle process?
Crump:
Actually, it does two things. Number one, from the process perspective,
it allows there to actually be a process. It's nice to talk about
backup process and have a process for protection and a process to
recover, but if you don’t have a way to manage and see all of your data
protection assets, it's really just a lot of talk.
You can't run a process like we are talking about in today’s data center with virtualization and things like that off of an
Excel spreadsheet.
It's just not going to work. It's nowhere near dynamic enough. So, number one, it enables having a real conversation about process.
Number two, it brings flexibility, because the only other way you could have had that conversation about process, as I said before, would be to throw everything out, pick one application, and suffer the consequences, which would be less-than-ideal support for every single platform.
To sum it up, it's really an enabler to creating a real data-protection process or workflow.
Gardner:
Okay. We're going to have to wrap it up pretty soon, but we've
mentioned mobile access, and cloud. I wonder if there's anything else
coming down the trend pike, if you will, that will make this even more
important.
The economy
I come back to our economy. We're still not growing as fast as many people would like, and therefore companies are not able to grow just their top line. They have to look to increase their bottom line through efficiency: deduplication, finding redundancy, cutting down on storage, cutting energy costs, and simplifying or consolidating data centers into fewer, larger, and more efficient facilities, etc.
Is there anything here, and I'll open this up to both John and George, that we can look to in the future that addresses some of these issues around efficiency and productivity, or perhaps other trends that will make having a process approach to the data lifecycle and backup and recovery even more important?
Maxwell:
Dana, you hit on something that's really near and dear to my heart,
which is data deduplication. We have a very broad strategy. We offer our
own software-based dedupe. We support every major hardware-based dedupe appliance out there, and we're now adding support for Dell's DR Series DR4000 dedupe appliances.
But we're still very much committed to tape, and we're building initiatives based on storing data in the cloud: backing up, replicating, failing over, and so forth.
One of the things
that we built into NetVault XA that's separate from the policy
management and online monitoring is that we now have historical data.
This is going to give you the ability to do some capacity management and
capacity planning and see what the utilization is.
How
much storage are your backups taking? What's the most optimum number of
generations? Where are you keeping that data? Is some data being kept
too long? Is some data not being kept long enough?
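Those capacity questions lend themselves to a simple model; here is a back-of-the-envelope sketch with hypothetical inputs for sizing retained generations under deduplication:

```python
# Back-of-the-envelope capacity model (all inputs hypothetical): how much
# storage do N retained backup generations take once dedupe is applied?

def backup_capacity_tb(full_tb: float, daily_change: float,
                       generations: int, dedupe_ratio: float) -> float:
    """One full backup plus `generations` incrementals, reduced by dedupe."""
    logical = full_tb + full_tb * daily_change * generations
    return logical / dedupe_ratio

# 50 TB full, 2% daily change, 30 generations, 10:1 dedupe -> 8 TB
print(f"{backup_capacity_tb(50, 0.02, 30, 10):.1f} TB")
```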
By
offering a broad strategy that says we support a plethora of backup
targets, whether it's tape, special-purpose backup appliances,
software-based dedupe, or even the cloud, we're giving customers
flexibility, because they have unique needs and they have different
needs, based on service levels or budgets. We want to make them
flexible, because, going back to our original discussion, one size
doesn’t fit all.
Gardner: I think we can sum
that up as just being more intelligent, being more empowered, and having
the visibility into your data. Anything else, George, that we should
consider as we think about the future, when it comes to these issues on
backup and recovery and data integrity?
Crump: Just to tie in with what John said, we need flexibility that doesn't add complexity. Almost everything we've done in the environment up to now has added flexibility, but for every ounce of flexibility, it feels like we have added two ounces of complexity, and it's something we just can't afford to deal with. So that's really the key thing.
Looking forward, at least on the horizon, I
don't see a big shift, something like virtualization that we need to be
overly concerned with. What I do see is the virtual environment becoming
more and more challenging, as we stack more and more VMs on it. The
amount of I/O and the amount of data protection process that will
surround every host is going to continue to increase. So the time is now
to really get the bull by the horns and institute a process that will
scale with the business long-term.
Gardner:
Well, great. We've been enjoying a conversation, and you have been
listening to a sponsored BriefingsDirect podcast on new approaches that
help simplify the data-protection process and help keep costs in check,
while also improving recovery confidence. We've seen how solving data
protection complexity and availability can greatly help enterprises gain
a comprehensive and standardized control approach to their data and
that data’s lifecycle.
So I would like to thank our
guests, John Maxwell, Vice President of Product Management for Data
Protection at Quest. Thanks, John.
Maxwell: Thank you, Dana.
Gardner: And also George Crump, Lead Analyst at Storage Switzerland. Thank you, George.
Crump: Thanks for having me.
Gardner:
This is Dana Gardner, Principal Analyst at Interarbor Solutions. Thanks
to you, our audience, for listening, and do come back next time.
Copyright Interarbor Solutions, LLC, 2005-2012. All rights reserved.