A discussion on how the rising tidal wave of data must be better managed, and how new tools are emerging to bring artificial intelligence to the rescue.
Listen
to the podcast. Find it on iTunes. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Dana Gardner: Hello,
and welcome to the next edition of the BriefingsDirect
Voice of the Innovator podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and
moderator for this ongoing discussion on the
latest in IT innovation.
Our next discussion focuses on
why the rising tidal wave of data must be better managed, and how new tools are
emerging to bring artificial
intelligence (AI) to the rescue. Stay with us now as we learn how the
latest AI innovations improve both data and services management across a cloud
deployment continuum -- and in doing so set up an even more powerful way for
businesses to exploit AI.
To learn how AI will help
conquer complexity to allow for higher abstractions of benefits from across all
sorts of data for better analysis, please join me in welcoming Rebecca Lewington,
Senior Manager of Innovation Marketing at Hewlett Packard
Enterprise (HPE). Welcome to BriefingsDirect, Rebecca.
Rebecca Lewington: Hi, Dana.
It’s very nice to talk to you.
Gardner: We have been talking about massive amounts of data for quite some time. What’s new about data buildup that requires us to look to AI for help?
Lewington: Partly
it is the sheer amount of data. IDC’s
Data Age Study predicts the global data sphere will be 175 zettabytes by 2025, which is
a rather large number. That’s what, a 1 with 21 zeros? But we have always been in
an era of exploding data.
Yet, things are different. One,
it’s not just the amount of data; it’s the number of sources the data comes
from. We are adding in things like mobile devices, and we are connecting factories’
operational technologies to information technology (IT). There are more and
more sources.
Also, the time we have to do something
with that data is shrinking to the point where we expect everything to be real-time;
otherwise you are going to make a bad decision. An autonomous car, for example, might do
something bad. Or we are going to miss a market or competitive intelligence
opportunity.
So it’s not just the amount of
data -- but what you need to do with it that is challenging.
Gardner: We are
also at a time when AI and machine learning (ML)
technologies have matured. We can begin to turn them toward the data issue to better
exploit the data. What is new and interesting about AI and ML that make them
more applicable for this data complexity issue?
Data gets smarter with AI
Lewington: A lot
of the key algorithms for AI were actually invented long ago in the 1950s, but
at that time, the computers were hopeless relative to what we have today; so it
wasn’t possible to harness them.
For example, you can train a
deep-learning neural net
to recognize pictures of kittens. To do that, you need to run millions of
images to train a working model you can deploy. That’s a huge, computationally
intensive task that only became practical a few years ago. But now that we have
hit that inflection point, things are just taking off.
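To make that inflection point concrete, here is a minimal Python sketch of the kind of training job Lewington describes, assuming TensorFlow/Keras and a folder of labeled example images; the folder name, model size, and settings are illustrative, and a real classifier would train on millions of images across many GPUs.

```python
# Minimal sketch of the kind of training job described above, assuming
# TensorFlow/Keras and an ./images/ folder with class subfolders (cat/, not_cat/).
# A real classifier would train on millions of images across many GPUs.
import tensorflow as tf

# Folder names become the class labels.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "images", image_size=(128, 128), batch_size=32)

# A deliberately tiny convolutional network -- production models are far deeper.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # cat / not-cat
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Every pass over the images is what makes this so computationally intensive.
model.fit(train_ds, epochs=5)
```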
Gardner: We
can begin to use machines to better manage data that we can then apply to
machines. Does that change the definition of AI?
Lewington: The definition
of AI is tricky. It’s malleable, depending on who you talk to. For some people,
it’s anything that a human can do. To others, it means sophisticated techniques,
like reinforcement learning and deep learning.
Traditional analytics effectively
does at scale what you could do with pencil and paper. You could write the equations
to decide where your data should live, depending on how quickly you need to
access it.
But with AI, it’s like the kittens
example. You know what the answer looks like; it’s trivial for you to look at
the photograph and say, “That is a cat in the picture.” But it’s really, really
difficult to write the equations to do it. But now, it’s become relatively easy
to train a black box model to do that job for you.
Gardner: Now
that we are able to train the black box, how can we apply that in a practical
way to the business problem that we discussed at the outset? What is it about
AI now that helps better manage data? What's changed that gives us better data
because we are using AI?
Lewington: It’s
a circular thing. The heart of what makes AI work is good data; the right data,
in the right place, with the right properties you can use to train a model,
which you can then feed new data into to get results that you couldn’t get
otherwise.
Now, there are many ways you
can apply that. You can apply it to the trivial case of the cat we just talked
about. You can apply it to helping a surgeon review many more MRIs, for example,
by allowing him to focus on the few that are borderline, and to do the mundane
stuff for him.
But, one of the other things
you can do with it is use it to manipulate the data itself. So we
are using AI to make the data better -- to make AI better.
Gardner: Not
only is it circular, and potentially highly reinforcing, but when we apply this
to operations in IT -- particularly complexity
in hybrid cloud, multicloud, and hybrid IT -- we get an additional benefit.
You can make the IT systems more powerful when it comes to the application of
that circular capability -- of making better AI and better data management.
AI scales data upward and outward
Lewington: Oh, absolutely.
I think the key word here is scale. When you think about data -- and all of the
places it can be, all the formats it can be in -- you could do it yourself. If you
want to do a particular task, you could do what has traditionally been done. You
can say, “Well, I need to import the data from here to here and to spin up
these clusters and install these applications.” Those are all things you could do
manually, and you can do them for one-off things.
But once you get to a certain scale,
you need to do them hundreds of times, thousands of times, even millions of
times. And you don’t have the humans to do it. It’s ridiculous. So AI gives you
a way to augment the humans you do have, to take the mundane stuff away, so they
can get straight to what they want to do, which is coming up with an answer
instead of spending weeks and months preparing to start to work out the answer.
Gardner: So AI
directed at IT, what some people call AIOps, could be
an accelerant to this circular advantageous relationship between AI and data? And
is that part of what you are doing within the innovation and research work at HPE?
Lewington: That’s
true, absolutely. The mission of Hewlett Packard
Labs in this space is to assist the rest of the company to create more
powerful, more flexible, more secure, and more efficient computing and data
architectures. And for us in Labs, this tends to be a fairly specific series of
research projects that feed into the bigger picture.
For example, we are now doing the Deep Learning Cookbook, which allows customers to find out ahead of time exactly what kind of hardware and software they are going to need to get to a desired outcome. We are automating the experimenting process, if you will.
And, as we talked about earlier,
there is the shift to the edge. As we make more and more decisions -- and gain
more insights -- out where the data is created, there is a growing
need to deploy AI at the edge. That means you need a data strategy to get
the data in the right place together with the AI algorithm, at the edge. That’s
because there often isn’t time to move that data into the cloud before making a
decision and waiting for the required action to return.
Once you begin doing that, once
you start moving from a few clouds to thousands and millions of endpoints, how
do you handle multiple deployments? How do you maintain security and data integrity
across all of those devices? As researchers, we aim to answer exactly those
questions.
And, further out, we are
looking to move the learning phase itself to the edge, to do what we call swarm
learning, where devices learn from their environment and each other,
using a distributed model that doesn’t use a central cloud at all.
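Swarm learning is an HPE research project, so the following is not its actual protocol; it is just a toy Python sketch of the general idea, in which each edge device trains on its own private data and peers merge model parameters directly with one another instead of shipping raw data to a central cloud. The linear model and synthetic data are assumptions for illustration.

```python
# Toy illustration (not HPE's swarm learning protocol) of edge peers that train
# locally and then average model parameters with each other, so raw data never
# leaves the device and no central cloud is involved.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, local_x, local_y, lr=0.1):
    """One gradient-descent step on a linear model using only this device's data."""
    grad = 2 * local_x.T @ (local_x @ weights - local_y) / len(local_y)
    return weights - lr * grad

# Three edge devices, each with its own private (synthetic) data.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    x = rng.normal(size=(50, 2))
    y = x @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((x, y))

# Each device starts with its own copy of the model.
weights = [np.zeros(2) for _ in devices]

for _ in range(20):
    # Devices learn from their own environment...
    weights = [local_update(w, x, y) for w, (x, y) in zip(weights, devices)]
    # ...then peers exchange and average parameters -- no central cloud involved.
    merged = np.mean(weights, axis=0)
    weights = [merged.copy() for _ in devices]

print("learned weights:", weights[0])  # approaches [2.0, -1.0]
```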
Gardner:
Rebecca, given your title is Innovation Marketing Lead, is there something
about the very nature of innovation that you have come to learn personally
that’s different than what you expected? How has innovation itself changed in
the past several years?
Innovation takes time and space
Lewington: I began
my career as a mechanical engineer. For many years, I was offended by the term innovation
process, because that’s not how innovation works. You give people the space and
you give them the time and ideas appear organically. You can’t have a process
to have ideas. You can have a process to put those ideas into reality, to weed
out the ones that aren’t going to succeed, and to promote the ones that work.
Gardner: It
seems to me that the relationship between infrastructure and AI has changed. It
wasn’t that long ago when we thought of business
intelligence (BI) as an application -- above the infrastructure. But the
way you are describing the requirements of management in an edge environment --
of being able to harness complexity across multiple clouds and the edge -- this
is much more of a function of the capability of the infrastructure, too. Is that
how you are seeing it, that only a supplier that’s deep in its infrastructure
roots can solve these problems? This is not a bolt-on benefit.
Lewington: I wouldn’t
say it’s impossible as a bolt-on; it’s impossible to do efficiently and
securely as a bolt-on. One of the problems with AI is that you are using a
black box; you don’t know how it works. There were a number of news stories recently
about AIs becoming corrupted, biased, and even racist, for example. Those kinds
of problems are going to become more common.
And so you need to know that
your systems maintain their integrity and are not able to be breached by bad
actors. If you are just working on the very top layers of the software, it’s
going to be very difficult to attest that the integrity of what’s underneath
hasn’t been violated.
If you are someone like HPE, which
has its fingers in lots of pies, either directly or through our partners, it’s
easier to make a more efficient solution.
Gardner: Is it
fair to say that AI should be a new core competency, for not only data
scientists and IT operators, but pretty much anybody in business? It seems to
me this is an essential core competency across the board.
Lewington: I
think that's true. Think of AI as another layer of tools that, as we go forward,
becomes increasingly sophisticated. We will add more and more tools to our AI toolbox.
And this is one set of tools that you just cannot afford not to have.
Gardner:
Rebecca, it seems to me that there is virtually nothing within an enterprise
that won't be impacted in one way or another by AI.
Lewington: I
think that’s true. Anywhere in our lives where there is an equation, there could
be AI. There is so much data coming from so many sources that we are now
overwhelmed by it, even for tasks as mundane as deciding
what to read in the morning or what route to take to work, let alone how to
manage my enterprise IT infrastructure. All things that are rule-based can be
made more powerful, more flexible, and more responsive using AI.
Gardner: Returning
to the circular nature of using AI to make more data available for AI -- and
recognizing that the IT infrastructure is a big part of that -- what are you doing
in your research and development to make data services available and secure? Is
there a relationship between things like HPE OneView
and HPE OneSphere
and AI when it comes to efficiency and security at scale?
Let the system deal with IT
Lewington: Those
tools historically have been rules-based. We know that if a storage disk gets
to a certain percentage full, we need to spin up another disk -- those kinds of
things. But to scale flexibly, at some point that rules-based approach becomes
unworkable. You want to have the system look after itself, to identify its own
problems and deal with them.
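As a hedged illustration of that shift -- and not the logic inside any HPE product -- the Python sketch below contrasts a hand-written rule with a model that learns what normal telemetry looks like and flags departures along dimensions nobody wrote a rule for; the metric names and values are made up.

```python
# Hedged sketch, not HPE product logic: a fixed rule versus a learned anomaly
# detector over the same (made-up) storage telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

# Telemetry rows: [percent_full, read_latency_ms, iops] -- hypothetical metrics.
rng = np.random.default_rng(1)
normal_telemetry = np.column_stack([
    rng.uniform(20, 70, 500),      # disk percent full
    rng.normal(2.0, 0.3, 500),     # read latency in ms
    rng.normal(5000, 400, 500),    # IOPS
])

def rule_based_alert(row):
    # The traditional approach: alert only on conditions someone wrote down.
    return row[0] > 85  # "disk more than 85 percent full"

# The learned approach: model what normal looks like, then flag departures --
# even along dimensions nobody wrote an explicit rule for.
detector = IsolationForest(random_state=0).fit(normal_telemetry)

odd_row = np.array([[55.0, 9.5, 4800.0]])  # disk not full, but latency is way off
print("rule fires:", rule_based_alert(odd_row[0]))      # False -- the rule misses it
print("model verdict:", detector.predict(odd_row)[0])   # -1 means anomalous
```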
Including AI techniques in
things like HPE InfoSight,
HPE Aruba ClearPass, and network user identity behavior software on the Aruba
side makes those tools more powerful and more
efficient.
You can think of AI here as
another class of analytics tools. It’s not magic; it’s just a different and better
way of doing IT analytics. The AI lets you harness more difficult datasets,
more complicated datasets, and more distributed datasets.
Gardner: If I’m an IT operator in a global 2000 enterprise, and I’m using analytics to help run my IT systems, what should I be thinking about differently to begin using AI -- rather than just analytics alone -- to do my job better?
Lewington: If
you are that person, you don’t really want to think about the AI. You don’t
want the AI to intrude upon your consciousness. You just want the tools to do
your job.
For example, I may have 1,000 people
starting a factory in Azerbaijan, or somewhere, and I need to provision for all
of that. I want to be able to put on my headset and say, “Hey, computer, set up
all the stuff I need in Azerbaijan.” You don’t want to think about what’s under
the hood. Our job is to make those tools invisible and powerful.
Composable, invisible, and insightful
Gardner: That
sounds a lot like composability. Is that another tangent that HPE is working on
that aligns well with AI?
Lewington: It
would be difficult to have AI be part of the fabric of an enterprise without composability,
and without extending composability into more dimensions. It’s not just about
being able to define the amount of storage and computer networking with a line
of code, it’s about being able to define the amount of memory, where the data
is, where the data should be, and what format the data should be in. All of those
things -- from the edge to cloud -- need to be dimensions in composability.
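None of the following is a real HPE OneView or HPE OneSphere API; it is just a hypothetical Python sketch of what a request might look like if data placement and format were composable dimensions alongside compute, memory, accelerators, and network, expressed as a single declarative object.

```python
# Hypothetical illustration only -- not an HPE OneView/OneSphere API.
# The idea: data location and format become composable dimensions alongside
# compute, memory, accelerators, and network.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataSpec:
    source: str      # e.g. "factory-7/vision-cameras"
    placement: str   # "edge", "core", or "cloud"
    fmt: str         # e.g. "parquet"

@dataclass
class ComposableRequest:
    cpus: int
    memory_gb: int
    accelerators: int
    network_gbps: int
    data: List[DataSpec] = field(default_factory=list)

# "Set up everything I need at this site" as one declarative object.
request = ComposableRequest(
    cpus=64, memory_gb=512, accelerators=4, network_gbps=25,
    data=[DataSpec("factory-7/vision-cameras", placement="edge", fmt="parquet")],
)
print(request)
```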
Gardner: We have
been speaking at a fairly abstract level, but let’s look to some examples to
illustrate what we’re getting at when we think about such composability sophistication.
Do you have any concrete
examples or use cases within HPE that illustrate the business practicality of
what we’ve been talking about?
Lewington: Yes,
we have helped a tremendous number of customers either get started with AI in
their operations or move from pilot to volume use. A couple of them stand out.
One particular manufacturing company makes electronic components. They needed
to improve the yields in their production lines, and they didn’t know how to
attack the problem. We were able to partner with them to use such things as vision
systems and photographs from their production tools to identify defects that
could only be picked up if they had a whole lot of humans watching
everything all of the time.
This gets back to the notion
of augmenting human capabilities. Their machines produce terabytes of data
every day, and it just gets thrown away. They don’t know what to do with it.
We began running some research
projects with them to use some very sophisticated techniques, such as visual
autoencoders, which allow you, without a labeled training set, to characterize a
production line that is performing well versus one that is on the verge of
moving away from the sweet spot. Those techniques can fingerprint a good line
and also identify when the line goes just slightly bad -- at a point where a human
looking at the line would still think it was working perfectly.
This takes the idea of
predictive maintenance further into what we call prescriptive maintenance,
where we have a much more sophisticated view into what represents a good line
and what represents a bad line. Those are a couple of examples from manufacturing
that I think are relevant.
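The sketch below is a simplified stand-in for that research, not the project’s actual code: an autoencoder is trained only on frames from a known-good line, and its reconstruction error becomes the "fingerprint" that rises as the line drifts. The image size, placeholder data, and threshold are assumptions for illustration.

```python
# Simplified stand-in for the approach described above: train an autoencoder on
# known-good line images and treat high reconstruction error as drift.
# Shapes, data source, and threshold are assumptions for illustration.
import numpy as np
import tensorflow as tf

IMG = 64  # assume 64x64 grayscale frames from the production-line cameras

def build_autoencoder():
    inp = tf.keras.Input(shape=(IMG, IMG, 1))
    x = tf.keras.layers.Conv2D(16, 3, strides=2, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(8, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(8, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu")(x)
    out = tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)
    model = tf.keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

# Train only on frames from a line known to be running well (placeholder data here).
good_frames = np.random.rand(256, IMG, IMG, 1).astype("float32")
ae = build_autoencoder()
ae.fit(good_frames, good_frames, epochs=5, batch_size=32, verbose=0)

def drift_score(frame):
    """Reconstruction error: low for 'good' frames, higher as the line drifts."""
    recon = ae.predict(frame[None, ...], verbose=0)
    return float(np.mean((recon - frame[None, ...]) ** 2))

THRESHOLD = 0.01  # would be calibrated against the good-line fingerprint
print("drift detected:", drift_score(good_frames[0]) > THRESHOLD)
```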
Gardner: If I
am an IT strategist, a Chief Information Officer (CIO) or a Chief Technology
Officer (CTO), for example, and I’m looking at what HPE is doing -- perhaps at the
HPE Discover conference -- where
should I focus my attention if I want to become better at using AI, even if
it’s invisible? How can I become more capable as an organization to enable AI
to become a bigger part of what we do as a company?
The new company man is AI
Lewington: For CIOs,
their most important customers these days may be developers and increasingly
data scientists, who are basically developers working with training models as
opposed to programs and code. They don’t want to have to think about where that
data is coming from and what it’s running on. They just want to be able to
experiment, to put together frameworks that turn data into insights.
It’s very much like the
programming world, where we’ve gradually abstracted things from bare-metal, to
virtual machines, to containers, and now to the emerging paradigm of serverless
in some of the walled-garden public clouds. Now, you want to do the same thing
for that data scientist, in an analogous way.
Today, it’s a lot of heavy lifting, getting these things ready. It’s very difficult for a data scientist to experiment. They know what they want. They ask for it, but it takes weeks and months to set up a system so they can do that one experiment. Then they find it doesn’t work and move on to do something different. And that requires a complete re-spin of what’s under the hood.
Now, using things like
software from the recent HPE BlueData
acquisition, we can make all of that go away. And so the CIO’s job becomes much
simpler because they can provide
their customers the tools they need to get their work done without them calling
up every 10 seconds and saying, “I need a cluster, I need a cluster, I need a
cluster.”
That’s what a CIO should be
looking for, a partner that can help them abstract complexity away, get it done
at scale, and in a way that they can both afford and that takes the risk out.
This is complicated, it’s daunting, and the field is changing so fast.
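As a generic illustration of that kind of abstraction -- not the HPE BlueData API -- the sketch below uses the Kubernetes Python client to request an on-demand notebook environment with a single declarative object, so the data scientist asks for resources rather than waiting weeks for a cluster. The image name, namespace, and resource sizes are assumptions.

```python
# Generic sketch (not the HPE BlueData API): a data scientist's request for an
# on-demand notebook environment, expressed as one declarative object against
# a Kubernetes cluster. Image, namespace, and sizes are illustrative.
from kubernetes import client, config

config.load_kube_config()  # assumes local kubeconfig credentials

notebook_pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ds-notebook", labels={"team": "data-science"}),
    spec=client.V1PodSpec(containers=[
        client.V1Container(
            name="notebook",
            image="jupyter/tensorflow-notebook",   # public Jupyter stack image
            ports=[client.V1ContainerPort(container_port=8888)],
            resources=client.V1ResourceRequirements(
                requests={"cpu": "4", "memory": "16Gi"},
                limits={"nvidia.com/gpu": "1"},    # one GPU, if the cluster has any
            ),
        )
    ]),
)

client.CoreV1Api().create_namespaced_pod(namespace="data-science", body=notebook_pod)
print("Notebook environment requested; no ticket, no manual cluster build.")
```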
Gardner: So,
in a nutshell, they need to look to the innovation that organizations like HPE
are doing in order to then promulgate more innovation themselves within their
own organization. It’s an interesting time.
Containers contend for the future
Lewington: Yes,
that’s very well put. Because it’s changing so fast they don’t just want a
partner who has the stuff they need today, even if they don’t necessarily know
what they need today. They want to know that the partner they are working with
is working on what they are going to need five to 10 years down the line -- and
thinking even further out. So I think that’s one of the things that we bring to
the table that others can’t.
Gardner: Can you
give us a hint as to what some of those innovations four or five years out
might be? How should we not limit ourselves in our thinking when it comes to
that relationship, that circular relationship between AI, data, and innovation?
Lewington: It
was worth coming to HPE Discover
in June, because we talked about
some exciting new things across many different areas. The discussion about
increasing automation abstractions is just going to accelerate.
For example, containers still have a fairly small penetration rate across
enterprises -- only about 10 percent adoption today -- because they are not the
simplest thing in the world. But we are going to get to the point where using
containers seems as complicated as bare metal does today, and that’s really
going to help simplify the whole data-pipeline problem.
Beyond that, the elephant in the
room for AI is that model complexity is growing incredibly fast. The compute
requirements are going up, something like 10 times faster than Moore’s Law, even as Moore’s
Law is slowing down.
We are already seeing an AI
compute gap between what we can achieve and what we need to achieve -- and it’s
not just compute, it’s also energy. The world’s energy supply can only grow
slowly, but if we need exponentially more data, exponentially more compute, and
exponentially more energy, that’s just not going to be sustainable.
So we are also working on
something called Emergent
Computing, a super-energy-efficient architecture that moves data around
wherever it needs to be -- or doesn’t move the data at all and instead brings the
compute to the data. That will help us close that gap.
Gardner: I’m
afraid we’ll have to leave it there. We have been exploring how the rising
tidal wave of data must be better managed and how new tools are emerging to
bring AI to the rescue. And we’ve heard how new AI approaches and tools create
a virtuous adoption pattern between better data and better analytics, and
therefore better business outcomes.
So please join me in thanking
our guest, Rebecca Lewington, Senior Manager for Innovation Marketing at HPE.
Thank you so much, Rebecca.
Lewington:
Thanks, Dana. This was fun.
Gardner: And thank you as well to our audience for joining this BriefingsDirect Voice of the Innovator interview. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored discussions. Thanks again for listening, please pass this along to your IT community, and do come back next time.
Copyright Interarbor Solutions, LLC, 2005-2019. All rights reserved.
You may also be
interested in:
- How automation and intelligence blend with design innovation to enhance the experience of modern IT
- How HCI forms a simple foundation for hybrid cloud, edge, and composable infrastructure
- How Ferrara Candy depends on automated IT intelligence to support rapid business growth
- How real-time data streaming and integration set the stage for AI-driven DataOps
- How the composable approach to IT aligns automation and intelligence to overcome mounting complexity
- How Texmark Chemicals pursues analysis-rich, IoT-pervasive path to the ‘refinery of the future’
- How HPC supports 'continuous integration of new ideas' for optimizing Formula 1 car design
- Want to manage your total cloud costs better? Emphasize the ‘Ops’ in DevOps, says Futurum analyst Daniel Newman
- A new Mastercard global payments model creates a template for an agile, secure, and compliant hybrid cloud