Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript.
Sponsor: Hewlett Packard Enterprise.
Dana Gardner:
Hello, and welcome to the next edition of the BriefingsDirect Voice of the
Customer podcast series. I’m Dana Gardner,
Principal Analyst at Interarbor
Solutions, your host and moderator for this ongoing discussion on
digital transformation success stories. Stay with us now to learn how agile
businesses are fending off disruption -- in favor of innovation.
Our next hybrid IT management
success story examines how the nonprofit research institute HudsonAlpha improves how it
harnesses and leverages a spectrum of IT deployment environments. We’ll now
learn how HudsonAlpha has been testing a new Hewlett Packard Enterprise
(HPE) solution, OneSphere, to
gain a common and simplified management interface to rule them all.
Here to help explore the
benefits of improved levels of multi-cloud visibility and process automation is
Katreena
Mullican, Senior Architect and Cloud Whisperer at HudsonAlpha Institute for Biotechnology in
Huntsville, Alabama. Welcome, Katreena.
Katreena Mullican: Thank
you, Dana. Thank you for having me as a part of your podcast.
Gardner: We’re
delighted to have you with us. What’s driving the need to solve hybrid IT
complexity at HudsonAlpha?
Mullican: The
big drivers at HudsonAlpha are the requirements for data locality and
ease-of-adoption. We produce about 6 petabytes of new data every year, and that
rate is increasing with every project that we do.
We support hundreds of research programs with data and trend analysis. On the infrastructure side, we have to iterate quickly to identify the approaches that are both cost-effective and the best fit for the needs of our users.
Gardner: Do
you find that having multiple types of IT platforms, environments, and
architectures creates a level of complexity that’s increasingly difficult to
manage?
Mullican:
Gaining a competitive edge requires adopting new approaches to hybrid IT. Even shadow IT, when carefully contained, can be a great way to develop new approaches and attain breakthroughs.
Gardner: You
want to give people enough leash where they can go and roam and experiment, but
perhaps not so much that you don’t know where they are or what they are doing.
Software-defined everything
Mullican:
Right. “Software-defined everything” is our mantra. That’s what we aim to do at
HudsonAlpha for gaining rapid innovation.
Gardner: How do you strike a balance between complexity that is too hard to manage -- with its potential for chaos -- and an environment you can harness and optimize, while still allowing for experimentation?
Mullican: IT is
ultimately responsible for the security and the up-time of the infrastructure.
So it’s important to have a good framework on which the developers and the
researchers can compute. It’s about finding a balance between letting them have
provisioning access to those resources versus being able to keep an eye on what
they are doing. And not only from a usage perspective, but from a cost
perspective, too.
Gardner: Tell
us about HudsonAlpha and its fairly extreme IT requirements.
Mullican:
HudsonAlpha is a nonprofit organization of entrepreneurs, scientists, and
educators who apply the benefits of genomics to everyday life. We also provide
IT services and support for about 40 affiliate companies on our 150-acre campus
in Huntsville, Alabama.
Gardner: What
about the IT requirements? How do you fulfill that mandate using technology?
Mullican: We
produce 6 petabytes of new data every year. We have millions of hours of
compute processing time running on our infrastructure. We have hardware
acceleration. We have direct connections to clouds. We have collaboration for
our researchers that extends throughout the world to external organizations. We
use containers, and we use multiple cloud providers.
Gardner: So
you have been doing multi-cloud before there was even a word for multi-cloud?
Mullican: We
are the hybrid-scale and hybrid IT organization that no one has ever heard of.
Gardner: Let’s
unpack some of the hurdles you need to overcome to keep all of your scientists
and researchers happy. How do you avoid lock-in? How do you keep it so that you
can remain open and competitive?
Agnostic arrangements of clouds
Mullican: It’s
important for us to keep our local datacenters agnostic, as well as our private
and public clouds. So we strive to communicate with all of our resources
through application programming interfaces (APIs), and we use open-source
technologies at HudsonAlpha. We are proud of that. Yet there are a lot of
possibilities for arranging all of those pieces.
There are a lot [of services]
that you can combine with the right toolsets, not only in your local datacenter
but also in the clouds. If you put in the effort to write the code with that in
mind -- so you don’t lock into any one solution necessarily -- then you can
optimize and put everything together.
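To make that lock-in-avoidance idea concrete, here is a minimal sketch, assuming a homegrown abstraction layer; the class and method names are hypothetical, not HudsonAlpha’s actual code. The point is that pipeline code talks to one interface while the per-cloud details stay behind it:

```python
from abc import ABC, abstractmethod

class ComputeProvider(ABC):
    """Common interface so callers never depend on a single cloud's SDK."""

    @abstractmethod
    def provision_instance(self, cpu: int, memory_gb: int) -> str:
        """Create an instance and return its identifier."""

class AWSProvider(ComputeProvider):
    def provision_instance(self, cpu: int, memory_gb: int) -> str:
        # A real implementation would call the AWS API (e.g., via boto3).
        return "i-aws-placeholder"

class OnPremProvider(ComputeProvider):
    def provision_instance(self, cpu: int, memory_gb: int) -> str:
        # A real implementation would call the local datacenter's API.
        return "vm-onprem-placeholder"

def run_pipeline(provider: ComputeProvider) -> None:
    # Identical caller code, regardless of where the instance lands.
    instance_id = provider.provision_instance(cpu=8, memory_gb=32)
    print(f"Provisioned {instance_id}")

run_pipeline(AWSProvider())
run_pipeline(OnPremProvider())
```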
Gardner:
Because you are a nonprofit institute, you often seek grants. But those grants
can come with unique requirements, including IT use benefits and cloud choice considerations.
Cloud cost control, granted
Mullican:
Right. Researchers are applying for grants throughout the year, and now with
the National Institutes of Health
(NIH), when grants are awarded, they come with community cloud credits, which is an exciting idea for the researchers. It means they can
immediately begin consuming resources in the cloud -- from storage to compute
-- and that cost is covered by the grant.
So they are anxious to get
started on that, which brings challenges to IT. We certainly don’t want to be
the holdup for that innovation. We want the projects to progress as rapidly as
possible. At the same time, we need to be aware of what is happening in a cloud
and not lose control over usage and cost.
Gardner:
Certainly HudsonAlpha is an extreme test bed for multi-cloud management, with
lots of different systems, changing requirements, and the need to provide the
flexibility to innovate to your clientele. When you wanted a better management
capability, to gain an overview into that full hybrid IT environment, how did you
come together with HPE and test what they are doing?
Variety is the spice of IT
Mullican: We’ve
invested in composable infrastructure and hyperconverged infrastructure (HCI)
in our datacenter, as well as blade server technology. We have a wide variety
of compute, networking, and storage resources available to us.
The key is: How do we rapidly provision those resources in an automated fashion? I think the answer is not only for IT to be aware of those resources, but for developers to be as well.
We have groups of developers
dealing with bioinformatics at HudsonAlpha. They can benefit from all of the
different types of infrastructure in our datacenter. What HPE OneSphere does is
enable them to access -- through a common API -- that infrastructure. So it’s
very exciting.
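As a rough sketch of what provisioning through a common API can look like from the developer’s side -- the host, path, and payload fields below are invented placeholders, not documented HPE OneSphere API calls:

```python
import requests

BASE_URL = "https://onesphere.example.org"  # placeholder host
TOKEN = "example-session-token"             # assume a prior login step issued this

headers = {"Authorization": f"Bearer {TOKEN}"}

# One request shape, whether the deployment lands on-premises
# or in a public cloud (illustrative payload only).
payload = {
    "name": "bioinformatics-pipeline",
    "zone": "onprem-datacenter-1",
    "template": "ubuntu-base",
}
response = requests.post(f"{BASE_URL}/rest/deployments",
                         headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json())
```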
Gardner: What
did HPE OneSphere bring to the table for you in order to be able to
rationalize, visualize, and even prioritize this very large mixture of hybrid
IT assets?
Mullican: We
have been beta testing HPE OneSphere since October 2017, and we have tied it
into our VMware
ESX Server environment, as well as our Amazon Web Services (AWS)
environment successfully -- and that’s at an IT level. So our next step is to
give that to researchers as a single pane of glass where they can go and
provision the resources themselves.
Gardner: What might this capability bring to you and your organization?
Cross-training the clouds
Mullican: We
want to do more with cross-cloud. Right now we are very adept at provisioning
within our datacenters, provisioning within each individual cloud. HudsonAlpha
has a presence in all the major public clouds -- AWS, Google, Microsoft Azure. But
the next step would be to go cross-cloud, to provision applications across them
all.
For example, you might have an
application that runs as a series of microservices. So you can have one
microservice take advantage of your on-premises datacenter, such as for local
storage. And then another piece could take advantage of object storage in the
cloud. And even another piece could be in another separate public cloud.
But the key here is that our developers and researchers -- the end users of OneSphere -- don’t need to know all of the specifics of provisioning in each of those environments. That level of expertise is not in their wheelhouse. In this new OneSphere way, all
they know is that they are provisioning the application in the pipeline -- and
that’s what the researchers will use. Then it’s up to us in IT to come along
and keep an eye on what they are doing through the analytics that HPE OneSphere
provides.
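One way to picture that split of an application across environments -- a hypothetical sketch, with all names and placement targets invented for illustration -- is a placement manifest that IT maintains and the pipeline consumes, so researchers choose a workflow while IT decides where each microservice and its storage actually live:

```python
# Hypothetical placement manifest for a cross-cloud application.
# Researchers pick the pipeline; IT maintains the placements.
app_manifest = {
    "name": "genome-assembly-pipeline",
    "services": [
        {"name": "ingest",   "target": "onprem", "storage": "local-nfs"},
        {"name": "align",    "target": "aws",    "storage": "s3-bucket"},
        {"name": "annotate", "target": "azure",  "storage": "blob-container"},
    ],
}

def deploy(manifest: dict) -> None:
    """Walk the manifest and hand each service to the right environment."""
    for svc in manifest["services"]:
        # In practice this would dispatch to per-cloud provisioning code;
        # the researcher never sees these details.
        print(f"Deploying {svc['name']} to {svc['target']} "
              f"with storage {svc['storage']}")

deploy(app_manifest)
```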
Gardner: Because OneSphere gives you the visibility to see what the end users are doing, you may potentially be able to play one cloud off another for cost optimization and to remain competitive. You may even be able to automate and orchestrate that.
Mullican:
Right, and that will be an ongoing effort to always optimize cost -- but not at
the risk of slowing the research. We want the research to happen, and to
innovate as quickly as possible. We don’t want to be the holdup for that. But
we definitely do need to loop back around and keep an eye on how the different
clouds are being used and make decisions going forward based on the analytics.
Gardner: There
may be other organizations that are going to be more cost-focused, and they
will probably want to dial back to get the best deals. It’s nice that we have
the flexibility to choose an algorithmic approach to business, if you
will.
Mullican:
Right. The research that we do at HudsonAlpha saves lives, and it is of the utmost importance to be able to conduct that research as quickly as possible.
Gardner: HPE
OneSphere seems geared toward being cloud-agnostic. They are beginning on AWS,
yet they are going to be adding more clouds. And they are supporting more
internal private cloud infrastructures, and using an API-driven approach to
microservices and containers.
As an early tester, and
someone who has been a long-time user of HPE infrastructure, is there anything
about the combination of HPE
Synergy, HPE
SimpliVity HCI, and HPE 3PAR intelligent storage -- in
conjunction with OneSphere -- that’s given you a ‘whole greater than the sum of
the parts’ effect?
Mullican: HPE
Synergy and composable infrastructure is something that is very near and dear
to me. I have a lot of hours invested in HPE Synergy Image Streamer and in customizing open-source operating systems and applications on it.
The ability to utilize that in
the mix that I have architected natively with OneSphere -- in addition to the
public clouds -- is very powerful, and I am excited to see where that goes.
Gardner: Any
words of wisdom for others who may not have gone down this road yet? What do
you advise others to consider as they are seeking to better compose, automate,
and optimize their infrastructure?
Get adept at DevOps
Mullican: It
needs to start with IT. IT needs to take on more of a DevOps approach.
As far as putting an emphasis
on automation -- and being able to provision infrastructure in the datacenter
and the cloud through automated APIs -- a lot of companies are probably still slow to adopt that. They are still provisioning with older methods, and I think it’s important that they make that shift. But then, once your IT department is adept
with DevOps, your developers can begin feeding from that and using what IT has
laid down as a foundation. So it needs to start with IT.
It involves a skill set change
for some of the traditional system administrators and network administrators.
But now, with software-defined networking (SDN) and with automated deployments
and provisioning of resources -- that’s a skill set that IT really needs to
step up and master. That’s because they are going to need to set the example
for the developers who are going to come along and be able to then use those
same tools.
That’s the partnership that
companies really need to foster -- and it’s between IT and developers. And
something like HPE OneSphere is a good fit for that, because it provides a
unified API.
On one hand, your IT
department can be busy mastering how to communicate with their infrastructure
through that tool. And at the same time, they can be refactoring applications
as microservices, and that’s up to the developer teams. So both can be working
on all of this at the same time.
Then when it all comes together
with a service catalog of options, in the end it’s just a simple interface.
That’s what we want: to provide a simple interface for the researchers. They don’t have to think about all the work that went into the infrastructure; they just choose the proper workflow and pipeline for future projects.
HCI advances across the globe
Mullican: Yes.
HPE OneSphere is an advantage to both of those departments, which in some companies can still be quite disparate. At HudsonAlpha, DevOps lives within IT -- it isn’t a separate department -- but in some companies that’s not the case.
And I think we have a lot of
advantages because we think in terms of automation, and we think in terms of
APIs from the infrastructure standpoint. And the tools that we have invested
in, the types of composable and hyperconverged infrastructure, are helping
accomplish that.
Gardner: I
speak with a number of organizations that are global, and they have some data
sovereignty concerns. I’d like to explore, before we close out, how OneSphere also
might be powerful in helping to decide where data sets reside in different
clouds, private and public, for various regulatory reasons.
Is there something about
having that visibility into hybrid IT that extends into hybrid data
environments?
Mullican: Data
locality is one of our driving factors in IT, and we do have on-premises storage
as well as cloud storage. There is a time and a place for both of those, and
they do not always mix, but we have requirements for our data to be available
worldwide for collaboration.
So, the services that HPE
OneSphere makes available are designed to use the appropriate data connections,
whether that would be back to your object storage on-premises, or AWS Simple Storage Service (S3), for
example, in the cloud.
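Because many on-premises object stores speak the same S3 protocol, client code can target either location just by switching the endpoint. A minimal sketch using boto3 -- the bucket names, file, and the on-premises URL are placeholders:

```python
import boto3

# AWS S3 proper: the default AWS endpoint is used.
aws_s3 = boto3.client("s3", region_name="us-east-1")

# An on-premises S3-compatible object store: same API, different endpoint.
onprem_s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.internal",  # placeholder URL
)

# The identical call works against either backend, which keeps the
# data-locality decision out of the analysis code itself.
# (Assumes results.csv exists locally and credentials are configured.)
for client, bucket in [(aws_s3, "example-shared-results"),
                       (onprem_s3, "example-local-raw-data")]:
    client.upload_file("results.csv", bucket, "run-001/results.csv")
```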
Gardner: Now
we can think of HPE OneSphere as also elevating data scientists -- and even the
people in charge of governance, risk management, and compliance (GRC) around adhering to
regulations. It seems like it’s a gift that keeps giving.
Hybrid hard work pays off
Mullican: It is
a good fit for hybrid IT and what we do at HudsonAlpha. It’s a natural addition
to all of the preparation work that we have done in IT around automated
provisioning with HPE Synergy and Image Streamer.
HPE OneSphere is a way to
showcase to the end user all of the work that has been, and is being, done by IT. That’s why it’s a satisfying tool to implement, because, in the end, you
want what you have worked on so hard to be available to the researchers and be
put to use easily and quickly.
Gardner: It
was a long time coming, right?
Mullican: Yes,
yeah. I think so.
Gardner: I’m
afraid we will have to leave it there. We have been exploring how nonprofit
research institute HudsonAlpha is better managing its multiple cloud and hybrid
IT deployment environments. And we have learned how HPE OneSphere is delivering
consolidated and deep insights across multiple clouds and IT deployments at
HudsonAlpha, an early beta tester and user.
So please join me in thanking
our guest, Katreena Mullican, Senior Architect and Cloud Whisperer at
HudsonAlpha Institute for Biotechnology.
Mullican: Thank
you very much.
Gardner: And a
big thank you to our audience as well for joining us for this BriefingsDirect
Voice of the Customer digital transformation success story. I’m Dana Gardner,
Principal Analyst at Interarbor Solutions, your host for this ongoing series of
Hewlett Packard Enterprise-sponsored interviews.
Thanks again for listening.
Please pass this content along to your IT community and do come back next time.
Transcript of a discussion on how HudsonAlpha is testing a new Hewlett Packard Enterprise solution, OneSphere, to gain a simplified and more common interface to manage hybrid computing. Copyright Interarbor Solutions, LLC, 2005-2018. All rights reserved.