Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption -- in favor of innovation.
Our next cloud strategies insights interview focuses on how IT architecture and new breeds of service providers are helping enterprises manage complex cloud scenarios. We’ll now learn how composable infrastructure and auto-scaling help improve client services, operations, and business goals attainment for a New York cloud support provider.
Here to help us learn what's needed to reach the potential of multiple -- and often overlapping -- cloud models is Arthur Reyenger, Cloud Practice Lead and Chief Cloud Architect at International Integrated Solutions (IIS) Ltd. in New York. Welcome, Arthur.
Arthur Reyenger: Thank you so much, I really appreciate being on the show.
Gardner: How are IT architecture and new breeds of service providers coming together? What’s different now from just a few years ago for architecture when we have cloud, multi-cloud, and hybrid cloud services?
Reyenger: It's changed now to where it's a 12-month strategy that needs to be modular enough to be reevaluated at the end of those 12 months, and re-architected -- almost as if it were made of Lego blocks.
Gardner: More moving parts, less time.
Reyenger: Absolutely.
Gardner: How do you accomplish that?
Reyenger: You leverage different cloud service providers, different managed services providers, and traditional value-added resellers, like International Integrated Solutions (IIS), in order to meet those business demands. We see a large push around automation, orchestration and auto-scaling. It’s becoming a way to achieve those business initiatives at that higher speed.
Gardner: There is a cloud continuum. You are choosing which workloads and what data should be on-premises, and what should be in a cloud, or multi-clouds. Trying to do this as a regular IT shop -- buying it, specifying, integrating it -- seems like it demands more than the traditional IT skills. How is the culture of IT adjusting?
Reyenger: Every organization, including ours, has its own business transformation that it has to undergo. We think that we are extremely proactive. I see some companies developing in-house skill sets and trying to add additional departments that are more cloud-aware in order to meet those demands.
On the other side, you have folks that are leveraging partners like IIS, which has acumen within those spaces to supplement their bench, or they are building out a completely separate organization that will hopefully take them to the new frontier.
Gardner: Tell us about your company. What have you done to transform?
Reyenger: IIS has spent 26 years building out an amazing book of business with amazing relationships with a lot of enterprise customers. But as times change, you need to be able to add additional practices like our cloud practice and our managed services practice. We have taken the knowledge we have around traditional IT services and then added in our internal developers and delivery consultants. They are very well-versed and aware of the new architecture. So we can marry the two together and help organizations reach that new end-state.
It's very easy for startups to go 100 percent to the cloud and just run with it. It’s different when you have 2,000 existing applications and you want to move to the future as well. It’s nice to have someone who understands both of those worlds -- and the appropriate way to integrate them.
Gardner: I suppose there is no typical cloud engagement, but what is a common hurdle that organizations are facing as they go from that traditional IT mindset to the more cloud-centric thinking and hybrid deployment models?
The cloud answer

Reyenger: The concept of auto-scaling, or bursting, has become very, very prevalent. You see that within different lines of business. Ultimately, they are all asking for essentially the same thing -- and the cloud is a pretty good answer.
At the same time, you really need to understand your business and the triggers. You need to be able to put the necessary intelligence together around those capabilities in order to make it really beneficial and align to the ebbs and flows of your business. So that's been one of the very, very common requests across the board.
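To ground that, here is a minimal sketch of what a business-aware scaling trigger can look like. The metric feed and the scaling call are stub placeholders, not any particular vendor's API; in practice they would wire into whatever monitoring and orchestration interfaces the cloud management stack exposes.

```python
# Minimal sketch of a business-aware auto-scaling trigger.
# get_orders_per_minute() and scale_to() are stand-ins for a real
# metrics feed and orchestration API.
import random
import time

MIN_NODES, MAX_NODES = 2, 20
SCALE_UP_AT = 500     # orders/minute at which the current pool saturates
SCALE_DOWN_AT = 100   # quiet-period threshold

def get_orders_per_minute() -> int:
    """Stub: stand-in for a real business-metrics query."""
    return random.randint(0, 700)

def scale_to(count: int) -> None:
    """Stub: stand-in for a real orchestration/auto-scaling call."""
    print(f"scaling pool to {count} nodes")

def desired_capacity(orders_per_minute: int, current: int) -> int:
    """Map the business metric onto an instance count, within guardrails."""
    if orders_per_minute > SCALE_UP_AT:
        return min(current + 2, MAX_NODES)   # burst up in steps of two
    if orders_per_minute < SCALE_DOWN_AT:
        return max(current - 1, MIN_NODES)   # drain down gently
    return current

def control_loop(poll_seconds: int = 60) -> None:
    current = MIN_NODES
    while True:
        target = desired_capacity(get_orders_per_minute(), current)
        if target != current:
            scale_to(target)
            current = target
        time.sleep(poll_seconds)
```

The point of the sketch is the mapping step: the trigger is a business metric, not a CPU counter, which is what aligns scaling with the ebbs and flows of the business.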
We've built out solutions that include intellectual property from IIS and our developers, as well as cloud management tools built around backup to the cloud to eliminate tape and modernize backup for customers. This builds out a dedicated object store that customers can own that also tiers to the different public cloud providers out there.
And we’ve done this in a repeatable fashion so that our customers get the cloud consumption look and feel, and we’ve leveraged innovative contractual arrangements to allow customers to consume against the scope of work rather than on lease. We’ve been able to marry that with the different standardized offerings out there to give someone the head start that they need in order to achieve their objectives.
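For the tiering piece, one hedged illustration: an S3-style lifecycle rule that migrates aging backup objects to colder storage classes. The bucket name, prefix, and day thresholds are hypothetical, and a customer-owned object store would expose its own equivalent policy mechanism.

```python
# Hedged sketch of tiering aging backups out to colder storage classes
# via an S3 lifecycle rule. Names and thresholds are hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="customer-backup-store",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-old-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # warm tier
                    {"Days": 90, "StorageClass": "GLACIER"},      # cold tier
                ],
            }
        ]
    },
)
```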
Gardner: You brought up the cloud consumption model. Organizations want the benefit of a public cloud environment and user experience for bursting, auto-scaling, and price efficiency. They might want to have workloads on-premises, to use a managed service, or take advantage of public clouds under certain circumstances.
How are you working with companies like Hewlett Packard Enterprise (HPE), for example, to provide composable auto-scaling capabilities with the look and feel of public cloud on their private cloud?
Reyenger: Now it's becoming a multi-cloud strategy. It's one thing to stay only on-premises and use one cloud. But using just one cloud carries risk, and that is a problem.
We try to standardize everything through a single cloud management stack for our customers. We’re agnostic to a whole slew of toolsets around both orchestration and automation. We want to help them achieve that.
Intelligent platform performance

We looked at some of the very unique things that HPE has done, specifically around their Synergy platform, to allow for cloud management and cloud automation to deliver true composable infrastructure. That has huge value around energizing a company's goals, strengthening their profitability, boosting productivity, and enhancing innovation. We've been able to extend that into the public cloud. So now we have customers that truly are getting the best of both worlds.
Gardner: How do you define composable infrastructure?
Reyenger: It's having true infrastructure that you can deploy as code. You'll hear a lot of folks say that, and what it really means is being able to standardize on a single RESTful API set.
That allows your platform to have intelligence when you look at infrastructure as a service (IaaS), and then delivering things as either platform (PaaS) or software as a service (SaaS) -- from either a DevOps approach, or from the lines of business directly to consumers. So it’s the ability to bridge those two worlds.
Traditionally, you may have underlying infrastructure that doesn't have the intelligence or doesn't have the visibility into the cloud automation. So I may be scaling, but I can't scale into infinity. I really need an underlying infrastructure to be able to mold and adapt in order to meet those needs.
We’re finally reaching the point where we have that visibility and we have that capability, thanks to software-defined data center (SDDC) and a platform to ultimately be able to execute on.
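To make that concrete, here is a minimal sketch of composing infrastructure through a single RESTful API, in the style of the HPE OneView/Synergy interface. The appliance address, API version, endpoint paths, and payload fields are illustrative assumptions rather than a verified recipe -- check the API documentation for the version you actually run.

```python
# Sketch of "infrastructure as code" against a single RESTful API,
# OneView/Synergy style. Endpoints, headers, and fields are assumptions.
import requests

APPLIANCE = "https://composer.example.com"  # hypothetical Synergy composer

def login(user: str, password: str) -> str:
    """Open a session with the composer and return its auth token."""
    resp = requests.post(
        f"{APPLIANCE}/rest/login-sessions",
        json={"userName": user, "password": password},
        headers={"X-Api-Version": "1200"},
        verify=False,  # lab sketch only; verify TLS in production
    )
    resp.raise_for_status()
    return resp.json()["sessionID"]

def compose_server(token: str, name: str,
                   template_uri: str, hardware_uri: str) -> dict:
    """Stamp out a server from a profile template -- the 'Lego block' step."""
    resp = requests.post(
        f"{APPLIANCE}/rest/server-profiles",
        json={
            "name": name,
            "serverProfileTemplateUri": template_uri,
            "serverHardwareUri": hardware_uri,
        },
        headers={"X-Api-Version": "1200", "Auth": token},
        verify=False,
    )
    resp.raise_for_status()
    return resp.json()
```

Because everything goes through the one API set, the same calls can be issued by an operator, a pipeline, or the cloud automation layer itself.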
Gardner: When I think about composable infrastructure, I often wonder, “Who is the composer?” I know who composes the apps, that’s the developer -- but who composes the infrastructure?
Reyenger: This gets to a lot of the digital transformation that we talked about in seeking different resources, or cultivating your existing resources to gain more of a developer’s view.
But now you have IT operations and DevOps both able to come under a single management console. They are able to communicate effectively and then script on either side in order to compose based on the code requirements. Or they can put guardrails on different segments of their workloads in order to dictate importance or assign guidelines. The developers can ultimately make those requests or modify the environment.
Gardner: When you get to composable infrastructure in a data center or private cloud, that’s fine. But that’s sort of like 2D Chess. When I think about multi-cloud or hybrid cloud -- it’s more like 3D Chess. So how do I compose infrastructure, and who is the composer, when it comes to deciding where to support a workload in a certain way, and at what cost?
Consult before composing

Reyenger: We offer a series of consulting services around the delivery of managed services and the actual development to take an existing cloud management stack -- whether that is Red Hat CloudForms, vRealize from VMware, or Terraform -- it really doesn't matter.
We are ultimately allowing that to be the single pane of glass, the single console. And then, because there are RESTful API integrations into those public cloud providers, we're able to provide that transparency from the management interface, which mitigates risk and gives you control.

We deploy things like Puppet, Chef, and Ansible within those different virtual private clouds and within those public cloud fabrics. Then, using that cloud management stack, you can have uniformity, and you can take that composition and that intelligence and bring it wherever you like -- whether that's based on geography or a particular cloud service provider preference.
There are many different ways to ultimately achieve that end-state. We just want to make sure that that standardization, to your point, doesn’t get lost the second you leave that firewall.
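As one illustration of that uniformity, the sketch below drives the same configuration-management playbook at every fabric from a single entry point. The inventory paths, provider keys, and playbook name are hypothetical placeholders.

```python
# Sketch of uniform multi-cloud deployment: one entry point,
# per-provider inventories, the same Ansible playbook everywhere.
# Inventory paths and the playbook name are hypothetical.
import subprocess

INVENTORIES = {
    "aws":     "inventories/aws_vpc.ini",
    "azure":   "inventories/azure_vnet.ini",
    "on_prem": "inventories/datacenter.ini",
}

def deploy(playbook: str, provider: str, extra_vars: dict | None = None) -> None:
    """Run the same playbook against whichever fabric the workload lands on."""
    cmd = ["ansible-playbook", "-i", INVENTORIES[provider], playbook]
    for key, value in (extra_vars or {}).items():
        cmd += ["-e", f"{key}={value}"]
    subprocess.run(cmd, check=True)

# Same composition, any geography or provider preference:
# deploy("site.yml", "aws", {"env": "prod"})
# deploy("site.yml", "on_prem", {"env": "prod"})
```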
Gardner: We are in the early days of composability of infrastructure in a multi-cloud world. But as the complexity and scale increase, it seems likely to me that we are going to need to bring in things like machine learning and artificial intelligence (AI), because humans doing this manually will run out of runway.
Projecting into the future, do you see a role for an algorithmic, programmatic approach putting in certain variables, certain thresholds, and contextual learning to then make this composable infrastructure capability part of a machine process?
Reyenger: Companies like HPE -- along with its new acquisition, Nimble -- as well as Red Hat and several others in the industry are leveraging the intelligence they gain from all of their different support calls and from lifecycle management across applications, and that allows them to provide feedback to the customer.
And in some cases, they tie that back into an automation engine that will actually give you the information on how to solve your problem. A lot of the precursors to what you are talking about are already in the works, and everyone is trying to be that data-driven cloud management company.
It's really too early to pick favorites, but you are going to see more standardization. Rather than having 50 different RESTful APIs that everyone standardizes on and that are constantly changing -- forcing custom integrations -- what we will see is more of that single pane of glass leveraged across multiple cloud providers. That will leverage a lot of the same automation and orchestration toolsets that we talked about.
Gardner: And HPE has their sights set on this with Project New Hybrid IT Stack?
Reyenger: 100 percent.
Gardner: Looking at composable infrastructure, auto-scaling, using things like HPE Synergy, if you’re an enterprise and you do this right, how do you take this up to the C-Suite and say, “Aha, we told you so. Now give us more so we can do more”? In other words, how does this improve business outcomes?
Fulfilling the promise

Reyenger: Every organization is different. I've spent a good chunk of my career being tactically deployed within very large organizations that are trying to achieve certain goals.

For me, I like to go to a customer's 10-K SEC filing and look at the promises they've made to their investors. We ultimately want to be able to marry back what this IT investment will do for the short-term goals that they are all being judged against, both from a key performance indicators (KPI) standpoint and from the standpoint of the health of the company.
It means meeting DevOps challenges and timelines, rolling out new greenfield workloads, and taking data that sits within traditional business intelligence (BI) relational databases and giving different departments access to some of that data. They should be able to run big data analytics against that data in real time.
These are the types of testing methodologies that we like to set up so that we can help a customer actually rationalize what this means today in terms of dollars and cents and what it could mean in terms of that perceived value.
Gardner: When you do this well, you get agility, and you get to choose your deployment models. It seems to me that a concept is going to arise of the minimum viable cloud, or hybrid cloud.
Are we going to see IT costs at an operating level adjusted favorably? Is this something that ultimately will be so optimized -- with higher utilization, leveraging the competitive market for cloud services -- that meaningful decreases will occur in the total operating costs of IT in an organization?
An uphill road to lower IT costs

Reyenger: I definitely think that it's quite possible. The way that most organizations are set up today, IT operations rolls back into finance. So if you sit underneath the CFO, like most organizations do, and a request gets made by marketing or sales or another line of business -- it has to go up the chain, get translated, and then come back down.
A lot of times it's difficult to push a rock up a hill. You don’t have all the visibility unless you can get back up to finance or back over to that line of business. If you are able to break down those silos, then I believe that your statement is 100 percent true.
But changing all of those internal controls for a lot of these organizations is very difficult, which is why some are deploying net-new teams to be ultimately the future of their internal IT service provider operations.
Gardner: Arthur, I have been in this business long enough to know that every time we've gotten to the point where we think we are going to meaningfully decrease IT costs, some other new paradigm of IT comes up that requires a whole new round of investment. But it seems to me that this could be different this time -- that we actually are getting to a standardized approach for supporting workloads, and that the traditional economics that impact any procurement service will come into effect here, too.
Mining to minimize risk

Reyenger: Absolutely. One of our big pushes has been around object storage. This still allows for traditional file- and block-level support. We are trying to help customers achieve that new economic view -- of which cloud approach ultimately provides them the best price point, but still gives them low risk, visibility, and control over their data.
I will give you an example. There is a very large financial exchange that had a lot of intellectual property (IP) data that they traditionally mined internally, and then provided back to different, smaller financial institutions as a service, in the form of financial reports. A few years back, they came to us and said, "I really want to leverage the agility of Amazon Web Services (AWS) to be able to spin up a huge Hadoop farm and mine this data very, very quickly -- and leverage that without having to increase my overall cost. But I don't feel comfortable putting that data into S3 within AWS, where they then hold two extra copies of my data as part of the service level agreement. So what do I do?"
And we ultimately stood up the same object storage service next to AWS, so they wouldn't have to pay any data egress fees, and they could mine everything right there, leveraging AWS Redshift or Hadoop-as-a-service.
Then, once these artifacts, or reports, were created, they no longer contained the IP. The reports came from the IP, but they are all roll-ups and comparisons, no longer sensitive to the company. We went ahead and put those into S3 and allowed Amazon to manage all of their customers' identity and access management for getting at those reports -- and all of that minimized risk for the exchange. We are able to prevent anyone outside of the organization from getting behind the firewall to get at their data. You don't have to worry about the SLAs associated with keeping this stuff up and available, and it became a really nice hybrid story.
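As a hedged illustration of that pattern: the raw IP stays in a private, S3-compatible store reached through a custom endpoint, and only the derived, non-sensitive report is published to AWS S3. The endpoint URL, credentials, bucket names, and keys below are hypothetical placeholders.

```python
# Sketch of the hybrid pattern: raw IP data stays in a private,
# S3-compatible object store; only derived reports go to AWS S3.
# Endpoint, credentials, bucket, and key names are hypothetical.
import boto3

# Private S3-compatible store, colocated next to AWS to avoid egress fees
private_store = boto3.client(
    "s3",
    endpoint_url="https://objects.exchange.internal",  # hypothetical endpoint
    aws_access_key_id="PRIVATE_KEY_ID",
    aws_secret_access_key="PRIVATE_SECRET",
)

# Real AWS S3, holding only the distilled, non-sensitive reports
public_s3 = boto3.client("s3")

def publish_report(report_key: str) -> None:
    """Copy a derived report from the private store into public-facing S3."""
    obj = private_store.get_object(Bucket="derived-reports", Key=report_key)
    public_s3.put_object(
        Bucket="exchange-published-reports",
        Key=report_key,
        Body=obj["Body"].read(),
    )
```

The design choice is that AWS IAM handles customer access to the published reports, while the sensitive source data never leaves the exchange's own store.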
These are the types of projects that we really like to work on with customers, to be able to help them gain all the benefits associated with cloud -- without taking on any of the additional risk, or the negatives, associated with jumping into cloud with both feet.
Gardner: You heard your customers, you saw a niche opportunity for object storage as a service, and you put that together. I assume that you want a composable infrastructure to do that. So is this something with HPE Synergy as a future foundation?
Reyenger: HPE Synergy doesn't really have the disk density to get to the public cloud price point, but it does support object storage natively. So it's great from a DevOps standpoint for object storage. We definitely think that as time progresses, and as HPE continues down the Synergy roadmap, that cloud role will eventually fix itself.
A lot of the cloud role is centered on hyper-converged infrastructure. And under that model, I don't see compute and storage growing at the same rate. I see storage growing considerably faster than the need for compute. So this is a way for us to help supplement a Synergy deployment, or to help our customers get the true ROI/TCO they are looking for out of hyper-converged.
Gardner: So maybe the question I should ask is what storage providers are you using in order to make this economically viable?
Reyenger: We leverage things like automation and orchestration technologies, and our ServiceNow capabilities -- all married with our IP -- in order to give customers the choice of buying this and deploying it, with us layering services on top if they want, or of consuming it as a fully managed service that runs on-premises. I have a per-GB price and the same SLAs as those public cloud providers. So all of it is coming together to allow customers to have the true choice and flexibility that everyone claimed you could have years ago.
Gardner: I’m afraid we will have to leave it there. We have been exploring how IT architecture and new breeds of service providers are helping enterprises better manage their complex cloud requirements. And we learned how a composable infrastructure and auto-scaling capability have helped a New York cloud services company put together innovative object storage as a service offerings.
So please join me in thanking our guest, Arthur Reyenger, Cloud Practice Lead and Chief Cloud Architect at International Integrated Solutions (IIS) in New York. Thank you, Arthur.
Reyenger: Thank you, Dana. I really appreciate being part of your program.
Gardner: And thanks to our audience as well for joining this BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews.
Thanks again for listening. Please pass this content along to your IT community, and do come back next time.
Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Transcript of a discussion on how IT architecture and new breeds of service providers are helping enterprises manage complex cloud scenarios. Copyright Interarbor Solutions, LLC, 2005-2017. All rights reserved.
You may also be interested in:
- As enterprises face mounting hybrid IT complexity, new management solutions beckon
- How mounting complexity, multi-cloud sprawl, and need for maturity hinder hybrid IT's ability to grow and thrive
- Get ready for the Post-Cloud World
- Inside story on HPC’s AI role in Bridges 'strategic reasoning' research at CMU
- Philips teams with HPE on ecosystem approach to improve healthcare informatics-driven outcome
- Inside story: How Ormuco abstracts the concepts of private and public cloud across the globe
- How Nokia refactors the video delivery business with new time-managed IT financing models
- IoT capabilities open new doors for Miami telecoms platform provider Identidad IoT
- Inside story on developing the ultimate SDN-enabled hybrid cloud object storage environment
- How IoT and OT collaborate to usher in the data-driven factory of the future
- DreamWorks Animation crafts its next era of dynamic IT infrastructure