Listen to the podcast. Find it on iTunes. Download the
transcript. Sponsor: Hewlett
Packard Enterprise.
Dana Gardner:
Hello, and welcome to the next edition of the BriefingsDirect Voice
of the Analyst podcast series. I’m Dana Gardner,
Principal Analyst at Interarbor
Solutions, your host and moderator for this ongoing discussion on
the latest insights into successful digital
transformation.
This IT management strategies
interview explores new ways that businesses can gain the most control and
economic payback from various
cloud computing models. We’ll now hear from an IT industry
analyst on how developers and IT operators can find newfound common ground to make
hybrid cloud the best long-term economic value for
their organizations.
Here to help explore ways a managed and orchestrated cloud lifecycle culture should be sought across enterprise IT organizations is Daniel Newman, Principal Analyst and Founding Partner at Futurum Research. Welcome, Daniel.
Daniel Newman: Hi, Dana. It’s good to be here.
Gardner: Daniel,
many tools have been delivered over the years for improving software development
in the cloud. Recently, containerization and management
of containers has been a big part of that.
Now, we’re also seeing IT operators tasked with making the most of cloud, hybrid cloud, and multi-cloud around DevOps – and they need better tools, too.
Has there been a divide or lag
between what developers have been able to do in the public cloud environment
and what operators must be able to do? If so, is that gap growing or shrinking now
that new types of tools for automation, orchestration, and composability
of infrastructure and cloud services are arriving?
Out of the shadow, into the cloud
Newman: Your question
lends itself to the concept
of shadow IT. The users of this shadow IT find a way to get
what they need to get things done. They have had a period of uncanny freedom.
But this has led to a couple
of things. First of all, generally nobody knows what anybody else is doing
within the organization. The developers have been able to creatively find
tools.
On the other hand, IT has been
cast inside of a box. And they say, “Here is the toolset you get. Here are your
limitations. Here is how we want you to go about things. These are the policies.”
And in the data center world,
that’s how everything gets built. This is the confined set of restrictions that
makes a data center a data center.
But in a
developer’s world, it’s always been about minimum viable
product. It’s been about how to develop using tools that do what they need them
to do and getting the code out as quickly as possible. And when it’s all in the
cloud, the end-user of the application doesn’t know which cloud it’s running on,
they just know they’re getting access to the app.
Basically we now have two
worlds colliding. You have a world of strict, confined policies -- and that’s
the “ops” side of DevOps. You also have the developers who have been given free
rein to do what they need to do; to get what they need to get done, done.
Get Dev and Ops to collaborate
Gardner: So,
we need to keep that creativity and innovation going for the developers so they
can satisfy their requirements. At the same time, we
need to put in guard rails, to make it all sustainable.
Otherwise we see not a minimum viable cloud -- but out-of-control expenses, out-of-control governance and
security, and difficulty taking advantage of both private cloud and public
cloud, or a hybrid affair, when you want to make that choice.
How do we begin to make this a
case of worlds collaborating instead of worlds colliding?
Newman: It’s
a great question. We have tended to point DevOps toward “dev.” It’s really been
about the development, and the “ops” side is secondary. It’s like capital D,
lowercase o.
The thing is, we’re now having
a massive shift that requires more orchestration and coordination between these
groups.
If you have an application
where people buy things on the web and you need to get that app out, it may be
a little more expensive to deploy it without the support of Ops, but you feel
the pressure to get it done quickly.
Now, Ops can come in and say, “Well,
you know … what about a flex consumption-based model, what about multi-cloud, what
about using containers to create more portability?”
“What if we can keep it within
the constraints of a budget and work together with you? And, by the way, we can
help you understand which applications are running on which cloud and provide
you the optimal [aggregate cloud use] plan.”
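The Ops pitch above -- weigh consumption cost, multi-cloud options, and container portability, then recommend a placement -- can be sketched as a toy placement check. The clouds, prices, and constraints below are invented purely for illustration; a real optimizer would pull live pricing and far richer constraints from each provider.

```python
# Hypothetical sketch: pick the cheapest cloud for a workload under
# simple constraints (region match and container support). All names
# and prices are invented for illustration.

WORKLOAD = {"region": "us-east", "needs_containers": True, "vcpu_hours": 500}

CLOUDS = [
    {"name": "public-a", "region": "us-east", "containers": True,  "usd_per_vcpu_hour": 0.045},
    {"name": "public-b", "region": "eu-west", "containers": True,  "usd_per_vcpu_hour": 0.040},
    {"name": "on-prem",  "region": "us-east", "containers": False, "usd_per_vcpu_hour": 0.030},
]

def eligible(cloud, workload):
    """A cloud qualifies only if it matches the region and supports containers when required."""
    if cloud["region"] != workload["region"]:
        return False
    if workload["needs_containers"] and not cloud["containers"]:
        return False
    return True

def cheapest_placement(clouds, workload):
    """Return (cloud name, projected cost) for the cheapest eligible cloud, or None."""
    candidates = [c for c in clouds if eligible(c, workload)]
    if not candidates:
        return None
    best = min(candidates, key=lambda c: c["usd_per_vcpu_hour"])
    return best["name"], round(best["usd_per_vcpu_hour"] * workload["vcpu_hours"], 2)

print(cheapest_placement(CLOUDS, WORKLOAD))  # → ('public-a', 22.5)
```

Here the cheaper on-premises option loses because it fails the container constraint -- the kind of tradeoff visibility Ops can bring that a developer racing to ship would not track.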
Let’s be very honest, a developer
doesn’t care about all of that. ... They are typically not paid or compensated in
any way that leads to optimizing on cost. That’s what the Ops people do.
Such orchestration -- just
like almost all larger digital transformation efforts -- starts when you have
shared goals. The problem is, they call it a DevOps group -- but Dev has one
set of goals and Ops has different ones.
What you’re seeing is the need
for new composable
tools for cloud services, which we saw at such events as the
recent Hewlett
Packard Enterprise (HPE) Discover conference. They are launching these
tools, giving the Ops people more control over things, and -- by the way -- giving
developers more visibility than has existed in the past.
There is a big opportunity [for
better cloud use economics] through better orchestration and collaboration, but
it comes down to the age-old challenges inside of any IT organization -- and that
is having the Dev and the Ops people share the same goals. These new tools may
give them more of a reason to start working in that way.
Gardner: The more
composability the operations people have, the easier it is for them to define a
path that the developers can stay inside of without encumbering the developers.
We may be at the point in the
maturity of the industry where both
sides can get what they want. It’s simply a matter of
putting that together -- the chocolate and peanut-butter, if you will. It
becomes more of a complete DevOps.
But there is another part of
this people often don’t talk about, and that’s the data placement component. When
we examine the lifecycle of a modern application, we’re not just developing it and
staging it where it stays static. It has to be built upon and improved, we are
doing iterations, we are doing Agile methods.
We also have to think about
the data the application is consuming and creating in the same way. That
dynamic data use pattern needs to fit into a larger data management philosophy and
architecture that includes multi-cloud support.
I think it’s becoming DevDataOps
-- not just DevOps these days. The operations people need to be able to put in
requirements about how that data is managed within the confines of that
application’s deployment, yet kept secure, and in compliance with regulations
and localization requirements.
DevDataOps emerges
Newman: We’ve
launched the DevDataOps category right now! That’s actually a really great
point, because if you think about where all of that lives -- meaning IT
orchestration of the infrastructure choices, whether that’s in the cloud or
on-premises -- there has to be enough of the right kind of storage.
Developers are usually worried
about data from the sense of what can they do with that data to improve and
enhance the applications. When you add in elements like machine learning (ML) and
artificial intelligence (AI), that’s going to just up the compute and storage
requirements. You have the edge and Internet of Things (IoT) to consider now
too for data. Most applications are collecting more data in real-time. With all
of these complexities, you have to ask, “Who
really owns this data?”
Well, the IT part of DevOps,
the “Ops,” typically worries about capacity and resource performance for data.
But are they really worried about the data in these new models? It brings in that
needed third category because the Dev person doesn’t necessarily deal with the
data lifecycle. The need to best use that data is a business unit imperative, a
marketing-level issue, a sales-level data requirement. It can include all the
data that’s created inside of a cloud instance of SAP or Salesforce.
You can’t just leave that to
developers and operators of IT. It becomes core to business unit leadership, and
they need to have an impact. The business leadership should be asking, “We have
all this data. What are we doing with it? How are we managing it? Where does it
live? How do we pour it between different clouds? What stays on-premises and what
goes off? How do we govern it? How can we have governance over privacy and compliance?”
I would say most companies
really struggle to keep up with compliance because there are so many rules
about what kind of data you have, where it can live, how it should be managed, and
how long it should be stored.
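The compliance burden described here -- rules about what kind of data you hold, where it may live, and how long it may be kept -- is mechanical enough to sketch. The rule set and datasets below are invented for illustration, assuming a simple residency-plus-retention policy model.

```python
# Hypothetical sketch: audit stored datasets against residency and
# retention rules. Policy names, regions, and dates are invented.
from datetime import date

POLICIES = {
    "customer-pii": {"allowed_regions": {"eu-west"}, "max_age_days": 730},
    "telemetry":    {"allowed_regions": {"eu-west", "us-east"}, "max_age_days": 90},
}

DATASETS = [
    {"name": "crm-export",  "kind": "customer-pii", "region": "us-east", "created": date(2024, 1, 1)},
    {"name": "edge-events", "kind": "telemetry",    "region": "eu-west", "created": date(2025, 6, 1)},
]

def violations(datasets, policies, today):
    """Return (dataset, reason) pairs for every policy breach found."""
    found = []
    for ds in datasets:
        rule = policies[ds["kind"]]
        if ds["region"] not in rule["allowed_regions"]:
            found.append((ds["name"], "wrong region"))
        if (today - ds["created"]).days > rule["max_age_days"]:
            found.append((ds["name"], "retention exceeded"))
    return found

print(violations(DATASETS, POLICIES, date(2025, 9, 1)))
# → [('crm-export', 'wrong region'), ('edge-events', 'retention exceeded')]
```

Even this toy version shows why companies struggle to keep up by hand: every new data kind, region, or retention rule multiplies the checks.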
I think you bring up a great point, Dana. I could probably rattle on about this for a long, long time. You’ve just added a whole new element to DevOps, right here on this podcast. I don’t know that it has to do with specifically Dev or Ops, but I think it’s Dev+Ops+Data -- a new leadership element for meaningful digital transformation.
Gardner: We
talked about trying to bridge
the gap between development and Ops, but I think there are other
gaps, too. One is between data lifecycle management – for backup and recovery
and making it the lowest cost storage environment, for example. Then there is
the other group of data scientists who are warehousing that data, caching it,
and grabbing more data from outside, third-party sources to do more analytics
for the entire company. But these data strategies are too often still divorced.
These data science people and
what the developers and operators are doing aren’t necessarily in sync. So, we
might have another category, which would be Dev+Data+DataScience+Ops.
Add Data Analytics to the Composition
Newman: Now
we’re up to four groups. First, you’re talking about the data from the
running applications. That’s managed through pure orchestration in DevOps, and
that works fine through composability tools. Those tools provide IT the capability
to add guard rails to the developers, so they are not doing things in the
shadows, but instead do things in coordination.
The other data category is that
bigger analytical data. It includes open data, third-party data, and historical
data that’s been collected and stored inside of instances of enterprise
resource planning (ERP) apps and customer relationship
management (CRM) apps for 20 or 30 years. It’s a gold mine of
information. Now we have to figure out an extract process and incorporate that
data into almost every enterprise-level application that developers are building.
Right now Dev and Ops don’t really have a clue what is out there and available
across that category because that’s being managed somewhere else, through an
analytics group of the company.
Gardner: Or,
developers will have to create an entirely different class of applications for
analytics alone, as well as integrating the analytics services into all of the
existing apps.
Newman: One
of the HPE partners I’ve worked with in the past is SAS. Companies
such as SAS and SAP are going to become much more closely aligned with
infrastructure. Your DevOps is going to become your analytics Ops, too.
There is an opportunity for a
new level of coordination to take place at the vendor level, because when you
see these alliances, and you see these partnerships, this isn’t new. But,
seeing it done in a way that’s about getting the maximum amount of usable data
from one system into every application -- that’s futuristic, and it needs to be
worked on today.
Gardner: The
bottom line is that there are many
moving parts of IT that remain disjointed. But we are at the
point now with composability and automation of getting an uber-view over
services and processes to start making these new connections – technically, culturally,
and organizationally.
What I have seen from HPE
around the HPE Composable
Cloud vision moves a big step in that direction. It might be
geared toward operators, but, ultimately it’s geared toward the entire
enterprise, and gives the business an ability to coordinate, manage, and gain insights
into all these different facets of a digital business.
Newman: We’ve
been talking about where things can go, and it’s exciting. But let’s take a
step back.
Multi-cloud
is a really great concept. Hyper-converged
infrastructure, it’s all really nice, and there has been
massive movement in this area in the last couple of years. Companies right now
still struggle
with the resources to run multi-cloud.
They tend to have maybe one public cloud and their on-premises operations. They
have their own expertise, and they have endless contracts and partnerships.
They don’t know which cloud
approach is best because they are not necessarily getting the total information. It
depends on all of the relationships, the disparate resources they have across
Dev and Ops, and the data can change on a week-to-week basis. One cloud may
have been perfect a month ago, yet all of a sudden you change the way an
application is running and consuming data, and it’s now in a different cloud.
What HPE is doing with HPE Composable
Cloud takes the cloud plus composable infrastructure and, working through HPE
OneSphere and HPE
OneView, brings them all into a single view. We’re in a
software and user experience world.
The tools that deliver the
most usable and valuable dashboard-type of cloud use data in one spot are going
to win the battle. You need that view in front of you for quick deployment,
with quick builds, portability, and container management. HPE is setting itself
in a good position for how we do this in one place.
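The "one spot" dashboard idea -- folding per-cloud usage into a single view -- reduces to a simple aggregation. The record shape and figures below are invented; tools in the class discussed here (HPE OneSphere, HPE OneView) would pull the real numbers from each provider's billing APIs.

```python
# Hypothetical sketch: fold per-cloud usage records into a single-view
# spend summary. Cloud names, apps, and dollar figures are invented.
from collections import defaultdict

RECORDS = [
    {"cloud": "public-a", "app": "checkout",  "usd": 120.0},
    {"cloud": "public-b", "app": "checkout",  "usd": 80.0},
    {"cloud": "on-prem",  "app": "analytics", "usd": 200.0},
]

def single_view(records):
    """Aggregate spend per app across every cloud, plus a grand total."""
    per_app = defaultdict(float)
    for rec in records:
        per_app[rec["app"]] += rec["usd"]
    return {"by_app": dict(per_app), "total": sum(per_app.values())}

print(single_view(RECORDS))
# → {'by_app': {'checkout': 200.0, 'analytics': 200.0}, 'total': 400.0}
```

The value is in the cross-cloud rollup: "checkout" only shows its true cost once spend from both public clouds lands in the same row.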
Gardner: I’m
afraid we’ll have to leave it there. We have been exploring new ways that
businesses can gain the most control and economic payback from various cloud
computing models. And we’ve learned how developers and IT operators can find
newfound common ground to make hybrid cloud the best long-term economic value
for their organizations.
Please join me in thanking our
guest, Daniel Newman, Principal Analyst and Founding Partner at Futurum
Research. Thank you, sir.
Newman: Great
to be with you, Dana.
Gardner: And a big thank you as well to our audience for joining this BriefingsDirect Voice of the Analyst IT management strategies interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored discussions.
Thanks again for listening.
Please pass this along to your IT community and do come back next time.
Transcript
of a discussion on how
developers and IT operators can find newfound common ground around making
hybrid cloud the best long-term economic value for their organizations. Copyright
Interarbor Solutions, LLC, 2005-2019. All rights reserved.