A discussion on how intelligent data center designs and components are delivering what amounts to data centers-as-a-service to SMBs, enterprises, and public sector agencies.
Dana Gardner: Hello,
and welcome to the next edition of the BriefingsDirect
podcast series. I’m Dana
Gardner, Principal Analyst at Interarbor
Solutions, your host and moderator for this ongoing discussion on the
latest insights into data center strategies.
There has never been a better
time to build an efficient, protected, powerful, contained, and modular data
center -- yet many enterprises and public sector agencies cling to aging, vulnerable,
and chaotic legacy IT infrastructure.
Stay with us now as we examine
how automation, self-healing, and increasingly intelligent data center designs
and components are delivering what amounts to data centers-as-a-service.
Here to help us learn more
about a modern data center strategy that extends to the computing edge -- and
beyond -- is Steve Lalla,
Executive Vice President of Global Services at Vertiv.
Welcome, Steve.
Steve Lalla: Thank you, Dana.
Gardner:
Steve, when we look at the evolution of data center infrastructure, monitoring,
and management software and services, they have come a long way. What’s driving
the need for change now? What’s making new technology more pressing and needed
than ever?
Lalla: There
are a number of trends taking place. The first is the products we are building and
the capabilities of those products. They are getting smarter. They are getting
more enabled. Moore’s Law
continues. What we are able to do with our individual products is improving as
we progress as an industry.
The other piece that’s very interesting
is it’s not only how the individual products are improving, but how we connect
those products together. The connective tissue of the ecosystem and how those products
increasingly operate as a subsystem is helping us deliver differentiated
capabilities and differentiated performance.
So, data
center infrastructure products are becoming smarter and they are becoming more
interconnected.
Interconnectivity across ecosystems
The second piece that’s incredibly
important is broader network connectivity -- whether it’s wide area
connectivity or local area connectivity. Over time, all of these products need
to be more connected, both inside and outside of the ecosystem. That connectivity
is going to enable new services and new capabilities that don’t exist today. Connectivity
is a second important element.
Third, data is exploding. As
these products get smarter, work more holistically together, and are more
connected, they provide manufacturers and customers more access to data. That data
allows us to move from a break/fix type of environment into a predictive
environment. It’s going to allow us to offer more just-in-time and proactive
service versus reactive and time-based services.
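To make that shift concrete, here is a minimal, hypothetical sketch -- not Vertiv's implementation, and with invented field names and thresholds -- of how a telemetry stream from a connected product might be screened for drift so service can be scheduled before a failure rather than after one:

```python
# Minimal sketch: flag drift in a telemetry stream so service can be
# scheduled proactively instead of after a failure. The metric, window,
# and threshold below are illustrative assumptions.
from statistics import mean, stdev

def drift_alerts(readings, window=48, sigma=3.0):
    """Yield (index, value) for readings that deviate from the recent baseline."""
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd and abs(readings[i] - mu) > sigma * sd:
            yield i, readings[i]

# Example: hourly battery-temperature samples from a UPS (synthetic data).
temps = [25.0 + 0.1 * (i % 5) for i in range(200)]
temps[150:] = [t + 4.0 for t in temps[150:]]   # simulated slow thermal drift
for idx, value in drift_alerts(temps):
    print(f"sample {idx}: {value:.1f} °C exceeds baseline -- schedule an inspection")
    break
```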
And when we look at the
ecosystems themselves, we know that over time these centralized data centers --
whether they be enterprise data centers, colocation data centers, or cloud data
centers -- are going to be more edge-based and module-based data centers.
And as that occurs, all the
things we talked about -- smarter products, more connectivity, data and data
enablement -- are going to be more important as those modular data centers
become increasingly populated in a distributed way. To manage them, to service
them, is going to be increasingly difficult and more important.
A lot of the folks who interact with these products and services will face what I call knowledge thinning. The talent is reaching retirement age and there is a high demand for their skills.
And one final cultural piece
is happening. A lot of the folks who interact with these products and services will
face what I call knowledge thinning. The highly trained professionals --
especially on the power side of our ecosystem -- that talent is reaching retirement
age and there is a high demand for their skills. As data center growth continues
to be robust, that knowledge thinning needs to be offset with what I talked
about earlier.
So there are a lot of really
interesting trends under way right now that impact the industry and are things
that we at Vertiv are looking to respond to.
Gardner: Steve,
these things when they come together form, in my thinking, a whole greater than
the sum of the parts. When you put this together -- the intelligence,
efficiency, more automation, the culture of skills -- how does that lead to the
notion of data center-as-a-service?
Lalla: As
with all things, Dana, one size does not fit all. I’m always cautious about generalizing
because our customer base is so diverse. But there is no question that in areas
where customers would like us to be operating their products and their
equipment instead of doing it themselves, data center-as-a-service reduces the
challenges of knowledge thinning and eases the burden of optimizing products.
We have our eyes on all those products on their behalf.
And so, through the
connectivity of the product data and the data lakes we are building,
we are better at predicting what should be done. Increasingly, our customers
can partner with us to deliver a better performing data center.
Gardner: It
seems quite compelling. Modernizing data centers means a lot of return on
investment (ROI), of doing more with less, and becoming more predictive about understanding
requirements and then fulfilling them.
Why are people still stuck? What
holds organizations back? I know it will vary from site to site, but why the inertia?
Why don’t people run to improve their data centers seeing as they are so
integral to every business?
Adoption takes time
Lalla: Well,
these are big, complex pieces of equipment. They are not the kind of equipment
that every year you decide to change. One of the key factors affecting the rate
at which connectivity, technology, processing capability, and data-liberation
capability get adopted is the speed at which customers are able to change
out the equipment that they currently have in their data centers.
Now, I think that we, as a
manufacturer, have a responsibility to do what we can to improve those products
over time and make new technology solutions backward compatible. That can be through
updating communication cards, building adjunct solutions like we do with Liebert®
ICOM™-S and gateways, and figuring out how to take equipment
that is going to be there for 15 or 20 years and make it as productive and as
modern as you can, given that it’s going to be there for so long.
So number one, the duration of
product in the environment is certainly one of the headwinds, if you will.
Another is the concept of connectivity.
And again, different customers have different comfort levels with connectivity inside
and outside of the firewall. Clearly the more connected we can be with the
equipment, the more we can update the equipment and assess its performance. Importantly,
we can assess that performance against a big data lake of other products
operating in an ecosystem. So, I think connectivity, and having the right solutions
to provide for great connectivity, is important.
And there are cultural elements
to our business in that, “Hey, if it works, why change it, right?” If it’s
performing the way you need it to perform and it’s delivering on the power and cooling
needs of the business, why make a change? Again, it’s our responsibility to
work with our customers to help them best understand that when new technology
gets added -- when new cards get added and when new assistants, I call them digital
assistants, get added -- that technology will have a differential effect
on the business.
So I think there is a bit of
reality that gets in the way of that sometimes.
Gardner: I
suppose it’s imperative for organizations like Vertiv to help organizations
move over that hump to get to the higher-level solutions and overcome the
obstacles because there are significant payoffs. It also sets them up to be
much more able to adapt to the future when it comes to edge computing, which
you mentioned, and also being a data-driven organization.
How is Vertiv differentiating
yourselves in the industry? How does combining services and products amount to
a solution approach that helps organizations modernize?
Three steps that make a difference
Lalla: I
think we have a differentiated perspective on this. When we think about service,
and we think about technology and product, we don’t think about them as separate.
We think about them altogether. My responsibility is to combine those software
and service ecosystems into something more efficient that helps our customers
have more uptime, and that moves us from break/fix toward predictive, just-in-time
types of service.
We spend quite a bit of time impacting the roadmaps and putting requirements into the product teams so that they have a better understanding of what we can do once data and information are liberated.
And the way we do that is through
three steps. Number one, we have to continue to work closely with our product
teams to determine early in the product definition cycle which products need to be
interconnected into an as-a-service or a self-service ecosystem.
We spend quite a bit of time
impacting the roadmaps and putting requirements into the product teams so that
they have a better understanding of what, in fact, we can do once data and
information are liberated. A great strategy always starts with great product,
and that’s core to our solution.
The next step is a clear
understanding that some of our customers want to service equipment themselves.
But many of our customers want us to do that for them, whether it’s physically
servicing equipment or monitoring and managing the equipment remotely, such as with
our LIFE™
management solution.
We are increasingly looking at
that as a continuum. Where does self-service end, and where do delivered services
begin? In the past, what we do from a self-service perspective and from a
delivered-service perspective have been relatively distinct. But increasingly, you see those
being blended together because customers want a seamless handover. When they
discover something needs to be done, we at Vertiv can pick up from there and
perform that service.
So the connective tissue
between self-service and Vertiv-delivered service is something that we are
bringing increasing clarity to.
And then finally, we talked
about this earlier, we are being very active at building a data lake that comes
from all the ecosystems I just talked about. We have billions of rows of
normalized data in our data lake to benefit our customers as we speak.
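As a rough illustration of what "normalized" means here, the sketch below maps a device-specific payload onto a common record shape before it lands in a data lake. The payload fields and the target schema are invented for illustration, not Vertiv's actual formats:

```python
# Minimal sketch of the kind of normalization a telemetry pipeline might do
# before loading device data into a data lake. Field names are hypothetical.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Reading:
    device_id: str
    device_type: str    # e.g. "ups", "crv"
    metric: str         # e.g. "load_pct", "battery_temp_c"
    value: float
    ts: str             # ISO-8601 UTC timestamp

def normalize_ups(payload: dict) -> list[Reading]:
    """Map a hypothetical UPS payload onto the common schema."""
    ts = datetime.now(timezone.utc).isoformat()
    return [
        Reading(payload["serial"], "ups", "load_pct", float(payload["outputLoad"]), ts),
        Reading(payload["serial"], "ups", "battery_temp_c", float(payload["battTemp"]), ts),
    ]

rows = normalize_ups({"serial": "UPS-0042", "outputLoad": 61.5, "battTemp": 27.3})
print([asdict(r) for r in rows])
```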
Gardner: Steve,
when you service a data center at that solution-level through an ecosystem of
players, it reminds me of when IT organizations started to manage their personal
computers (PCs) remotely. They didn’t have to be on-site. You could bring the
best minds and the best solutions to bear on a problem regardless of where the
problem was -- and regardless of where the expertise was. Is that what we are
seeing at the data center level?
Self-awareness remotely and in-person
Lalla: Let’s
be super clear, to upgrade the software on an uninterruptible
power supply (UPS) is a lot harder than to upgrade software on a PC. But
the analogy of understanding what must be done in-person and what can be done
remotely is a good one. And you are correct. Over years and years of
improvement in the IT ecosystems, we went from a very much in-person type of
experience, fixing PCs, to one where very much like mobile phones, they are
self-aware and self-healing.
This is why I talked about the
connectivity imperative earlier, because if they are not connected then they
are not aware. And if they are not aware, they don’t know what they need to do.
And so connectivity is a super important trend. It will allow us to do more
things remotely versus always having to do things in-person, which will reduce
the amount of interference we, as a provider of services, have on our customers.
It will allow them to have better uptime, better ongoing performance, and even
over time allow tuning of their equipment.
You could argue the mobile phone and PC are at very late stages of their journey of automation. We are at the very early stages of it, and smarter products, connectivity, and data are all important factors.
We are at the early stages of
that journey. You could argue the mobile phone and the PC guys are at the very
late stages of their journey of automation. We are in the very early stages of
it, but the things we talked about earlier -- smarter products, connectivity, and
data -- all are important factors influencing that.
Gardner: Another
evolution in all of this is that there is more standardization, even at the
data center level. We saw standardization as a necessary step at the server and
storage level -- when things became too chaotic, too complex. We saw
standardization as a result of virtualization as well. Is there a
standardization taking place within the ecosystem and at that infrastructure
foundation of data centers?
Standards and special sauce
Lalla: There
has been a level of standardization in what I call the self-service layer, with
protocols like BACnet, Modbus, and SNMP.
Those at least allow a monitoring system to ingest information and data from a
variety of diverse devices so you can, at a minimum, monitor how that
equipment is performing.
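For readers less familiar with those protocols, here is a minimal standard-library sketch of a Modbus TCP "read holding registers" poll -- the kind of low-level request a monitoring system issues to ingest data from diverse devices. The host, unit ID, and register addresses are placeholders, since register maps vary by vendor and device, and error handling is omitted:

```python
# Minimal sketch of a Modbus TCP "read holding registers" request, built with
# the Python standard library only. Exception responses and partial reads are
# not handled; addresses below are placeholders.
import socket
import struct

def read_holding_registers(host, port, unit_id, start_addr, count, timeout=3.0):
    """Return a list of 16-bit register values from a Modbus TCP device."""
    # MBAP header: transaction id, protocol id (0), remaining length, unit id
    request = struct.pack(">HHHB", 1, 0, 6, unit_id)
    # PDU: function code 3 (read holding registers), start address, quantity
    request += struct.pack(">BHH", 3, start_addr, count)
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(request)
        header = sock.recv(9)            # MBAP + function code + byte count
        byte_count = header[8]
        data = sock.recv(byte_count)
    return list(struct.unpack(f">{byte_count // 2}H", data))

# Example (placeholder address and registers): poll two registers from a UPS card.
# print(read_holding_registers("192.0.2.10", 502, unit_id=1, start_addr=0, count=2))
```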
I don’t disagree that there is
an opportunity for even more standardization, because that will make that whole
self-service, delivered-as-a-service ecosystem more efficient. But what we see
in that control plane is really Vertiv’s unique special sauce. We are able to
do things between our products with solutions -- like Liebert ICOM-S -- that
allow our thermal products to work better together than if they were operating independently.
You are going to see an
evolution of continued innovation in peer-to-peer networking in the control
plane that probably will not be open and standard. But it will provide advances
in how our products work together. You will see in that self-service, as-a-service,
and delivered-service plane continued support for open standards and protocols
so that we can manage more than just our own equipment. Then our customers can
manage and monitor more of their own equipment.
And this special sauce, which includes
the data lakes and algorithms -- a lot of intellectual property and capital goes into
building those algorithms and those outcomes -- helps customers operate better.
We will probably keep that close to the vest in the short term, and then we’ll see
where it goes over time.
Gardner: You earlier
mentioned moving data centers to the edge. We are hearing an awful lot
architecturally about the rationale for not moving the edge data to the cloud
or the data center, but instead moving the computational capabilities right out
to the edge where that data is. The edge is where the data streams in, in
massive quantities, and needs to be analyzed in real-time. That used to be the
domain of the operational
technology (OT) people.
As we think about data centers
moving out to the edge, it seems like there’s a bit of an encroachment or even
a cultural clash between the IT way of doing things and the OT way of doing
things. How does Vertiv fit into that, and how does making data center-as-a-service
help bring the OT and IT together -- to create a whole greater than the sum of
the parts?
OT and IT better together
Lalla: I
think maybe there was a clash. But modular data centers and things like SmartAisle
and SmartRow
that we offer today can be fully contained, standalone systems.
Increasingly, we are working with strategic IT partners on understanding how
that ecosystem has to work as a complete solution -- not with power and cooling
separate from IT performance, but how we can take the best of the OT world --
power and cooling -- and the best of the IT world and combine that with things
like alarms and fire suppression. We can build a remote management and
monitoring solution that can be outsourced, if you want to consume it as a
service, or in-sourced, if you want to do it yourself.
And there’s a lot of work to do in that space. As an industry, we are in the early stages, but I don’t think it’s hard to foresee a modular data center that should operate holistically as opposed to just the sum of its parts.
Gardner: I was
thinking that the OT-IT thing was just an issue at the edge. But it sounds like
you’re also referring to it within the data center itself. So flesh that out a
bit. How do OT and IT together -- managing all the IT systems, components,
complexity, infrastructure, support elements -- work in the intelligent, data
center-as-a-service approach?
Lalla: There
is the data center infrastructure management (DCIM)
approach, which says, “Let’s bring it all together and manage it.” I think
that’s one way of thinking about OT and IT, and certainly Vertiv has solutions
in that space with products like TrellisTM.
But I actually think about it
as: Once the data is liberated, how do we take the best of computing solutions,
data analytics solutions, and stuff that was born in other industries and apply
that to how we think about managing, monitoring, and servicing all of the
equipment in our industrial OT space?
It’s not necessarily that OT
and IT are one thing, but how do we apply the best of all of technology
solutions? Things like security. There is a lot of great stuff that’s emerged
for security. How do we take a security-solutions perspective in the IT space
if we are going to get more connected in the OT space? Well, let’s learn from
what’s going on in IT and see how we can apply it to OT.
Once the data is liberated we can take the best of data analytics solutions born in other industries and apply that to how we think about managing, monitoring, and servicing all of the equipment in the industrial OT space.
Just because DCIM has been
tackled for years doesn’t mean we can’t take more of the best of each world and
see how you can put those together to provide a solution that’s differentiated.
I go back to the Liebert
ICOM-S solution, which uses desktop computing and gateway technology, and
application development running on a high-performance IT piece of gear, connected
to OT gear to get those products that normally would work separately to
actually work more seamlessly together. That provides better performance and efficiency
than if those products operated separately.
Liebert ICOM-S is a great
example of where we have taken the best of the IT world -- compute technology and
connectivity -- and the best of the OT world -- power and cooling -- and built a
solution that makes the interaction differentiated in the marketplace.
Gardner: I’m
glad you raised an example because we have been talking at an abstract level of
solutions. Do you have any other use cases or concrete examples where your
concept for infrastructure data center-as-a-service brings benefits? When the
rubber hits the road, what do you get? Are there some use cases that illustrate
that?
Real LIFE solutions
Lalla: I
don’t have to point much further than our Vertiv
LIFE Services remote monitoring solution. This solution came out a couple
years ago, partly from our Chloride® Group acquisition
many years ago. LIFE Services allows customers to subscribe to have us do the
remote monitoring, remote management, and analytics of what’s happening -- and
whenever possible do the preventative care of their networks.
And so, LIFE is a great
example of a solution with connectivity, with the right data flowing from the
products, and with the right IT gear so our personnel take the workload away
from the customer and allow us to deliver a solution. That’s one example of where
we are delivering as-a-service for our customers.
We are also working with
customers -- and we can’t expose who they are -- to bring their data into our
large data lake so we can help them better predict how various elements of
their ecosystem will perform. This helps them better understand when they need
just-in-time service and maintenance versus break/fix service and maintenance.
These are two different
examples where Vertiv provides services back to our customers. One is running a
network
operations center (NOC) on their behalf. Another uses the data lake that
we’ve assimilated from billions of records to help customers who want to
predict things and use the broad knowledge set to do that.
Gardner: We began
our conversation with all the great things going on in modern data center
infrastructure and solutions to overcome obstacles to get there, but economics
plays a big role, too. It’s always important to be able to go to the top
echelon of your company and say, “Here is the math, here’s why we think doing
data center modernization is worth the investment.”
What is there about creating that
data lake, the intellectual property, and the insights that help with data center
economics? What’s the total cost of ownership (TCO) impact? How do you know
when you’re doing this right, in terms of dollars and cents?
Uptime is money
Lalla: It’s
difficult to generalize too much but let me give you some metrics we care
about. Stuff is going to break, but if we know when it’s going to break -- or
even if it does break -- we can understand exactly what happened. Then we can
have a much higher first-time fix rate. What does that mean? That means
I don’t have to come out twice, I don’t have to take the system out of
commission more than once, and we can have better uptime. So that’s one.
Number two, by getting the
data we can understand what’s going on with the network time-to-repair and how
long it takes us from when we get on-site to when we can fix something.
Certainly it’s better if you do it the first time, and it’s also better if you know
exactly what you need when you’re there to perform the service exactly the way
it needs to be done. Then you can get in and out with minimal disruption.
A third one that’s important --
and one that I think will grow in importance -- is we’re beginning to measure
what we call service avoidance. The way we measure service avoidance is we
call up a customer and say, “Hey, you know, based on all this information,
based on these predictions, based on what we see from your network or your
systems, we think these four things need to be addressed in the next 30 days.
If not, our data tells us that we will be coming out there to fix something
that’s broken as opposed to fixing it before it breaks.” So service avoidance or
service simplification is another area that we’re looking at.
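Expressed as a quick calculation, the three metrics discussed here -- first-time fix rate, mean time to repair, and service avoidance -- might be tallied from work-order records along these lines. The field names and statuses are illustrative, not Vertiv's reporting schema:

```python
# Minimal sketch of the service metrics discussed above, computed from a
# hypothetical list of work-order records. Field names are illustrative.
def service_kpis(work_orders):
    completed = [w for w in work_orders if w["status"] == "closed"]
    first_time_fix = sum(1 for w in completed if w["visits"] == 1) / len(completed)
    mean_time_to_repair_h = sum(w["repair_hours"] for w in completed) / len(completed)
    avoided = sum(1 for w in work_orders if w["status"] == "avoided")  # fixed proactively
    return {
        "first_time_fix_rate": round(first_time_fix, 2),
        "mean_time_to_repair_h": round(mean_time_to_repair_h, 1),
        "service_avoidance_count": avoided,
    }

print(service_kpis([
    {"status": "closed", "visits": 1, "repair_hours": 2.5},
    {"status": "closed", "visits": 2, "repair_hours": 6.0},
    {"status": "avoided", "visits": 0, "repair_hours": 0.0},
]))
```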
There are many more -- I mean,
meeting service level agreements (SLAs), uptime, and all of those -- but when
it comes to the tactical benefits of having smarter products, of being more
connected, liberating data, and consuming that data and using it to make better
decisions as a service -- those are the areas where customers should expect a
difference.
Gardner: And
in order to enjoy those economic benefits through the Vertiv approach and through
data center-as-a-service, does this scale down and up? It certainly makes sense
for the larger data center installations, but what about a small- to medium-sized
business (SMB)? What about a remote office, or a closet and a couple of racks?
Does that make sense, too? Do the economic and the productivity benefits scale
down as well as up?
Lalla: Actually,
when we look at our data, the customers who don’t have all the expertise to
manage and monitor their single-phase, small three-phase, or Liebert
CRV [cooling] units -- who don’t have that skill set in-house -- those are the
customers that really appreciate what we can do to help them. It doesn’t mean
that customers further up the stack don’t appreciate it. They may be more
self-service-oriented and able to do some of the services themselves, but what
they are increasingly interested in is how we’re using data in our
data lake to better predict things that they can’t predict by just looking at
their own stuff.
So, the value shifts depending
on where you are in the stack of complexity, maturity, and competency. It also
varies based on hyperscale, colocation, enterprise, small enterprise, and point-of-sale.
There are a number of variables so that’s why it’s difficult to generalize. But
this is why the themes of productivity, smarter products, edge ecosystems, and data
liberation are common across all those segments. How that value gets extracted
and applied in each segment can be slightly different.
Gardner:
Suffice it to say data center-as-a-service is highly customizable to whatever
organization you are and wherever you are on that value chain.
Lalla: That’s
absolutely right. Not everybody needs everything. Self-service is on one side
and as-a-service is on the other. But it’s not a binary conversation.
Customers who want to do most
of the stuff themselves with technology may need only a little
information or help from Vertiv. Customers who want most of their stuff to be
managed by us -- whether it’s storage systems or large systems -- we have the
capability of providing that as well. This is a continuum, not an either-or.
Gardner:
Steve, before we close out, let’s take a look to the future. As you build data
lakes and get more data, machine learning (ML)
and artificial intelligence
(AI) are right around the corner. They allow you to have better prediction capabilities
and do things that you simply couldn’t have done in the past.
So what happens as these
products get smarter, as we are collecting and analyzing that data with more
powerful tools? What do you expect in the next several years when it comes to
the smarter data center-as-a-service?
Circle of knowledge gets smart
Lalla: We
are in the early stages, but it’s a great question, Dana. There are two
outcomes that will benefit all of us. One, that information, with the right
algorithms and analysis, is going to allow us to build products
that are increasingly smarter.
There is a circle of
knowledge. Products produce information going to the data lake, we run the
right algorithms, look for the right pieces of information, feed that back into
our products, and continually evolve the capability of our products as time
goes on. Those products will break less, need less service, and be more
reliable. We should just expect that, just as you have seen in other industries.
So that’s number one.
Number two, my hope and belief
are that we move from a break/fix mentality or environment where we wait for
something to show up on a screen as an alarm or an alert. We move from that to
being highly predictive and just-in-time.
As an industry -- and
certainly at Vertiv -- first-time fix, service avoidance, and time to repair
are all going to get much better, which means one simple thing for our
customers. They are going to have more efficient and well-tuned data centers.
They are going to be able to operate with higher rates of uptime. All of those
things are going to result in goodness for them -- and for us.
Gardner: I’m
afraid we’ll have to leave it there. We have been exploring how automation,
self-healing, and increasingly intelligent data center designs are delivering
what amounts to data centers-as-a-service. And we’ve learned how modern data
center strategies will extend to the computing edge and beyond.
So please join me in thanking
our guest, Steve Lalla, Executive Vice President of Global Services at Vertiv.
Thank you so much, Steve.
Lalla: Thanks, Dana.
Gardner: And a
big thank you as well to our audience for joining us for this sponsored
BriefingsDirect data center strategies interview. I’m Dana Gardner, Principal
Analyst at Interarbor Solutions, your host for this ongoing series of
Vertiv-sponsored discussions.
Thanks again for listening.
Please pass this along to your community and do come back next time.
A discussion on how intelligent data center designs and components
are delivering what amounts to data centers-as-a-service to SMBs, enterprises,
and public sector agencies. Copyright Interarbor Solutions, LLC, 2005-2019. All
rights reserved.