Listen to the podcast. Find it on iTunes. Get the mobile app. Download the transcript. Sponsor: Hewlett Packard Enterprise.
Dana Gardner: Hello, and welcome to the next edition of the Hewlett Packard Enterprise (HPE) Voice of the Customer podcast series. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on IT innovation -- and how it's making an impact on people's lives.
We'll learn how a hosting organization scales customized IT infrastructure in a multi-tenant environment, benefits from flexible hardware licensing, and gains confidence that storage supply will always meet dynamic demand.
To describe how massive storage and data-center infrastructure needs can be met in a just-in-time manner, we're joined by Eric Hulbert, CEO at Opus Interactive in Portland, Oregon. Welcome, Eric.
Eric Hulbert: Thank you for having me, Dana.
Hulbert: Our biggest requirement was high availability in multi-tenancy. That was number one, because we're service providers and we have to meet the needs of a lot of customers, not just a single enterprise, or even enterprises with multiple business groups.
We were tired of the monolithic controller-bound SANs, where we'd have to buy a specific bigger size. We'd start to get close to where the boundary would be and then we would have to do a lift-and-shift upgrade, which is not easy to do with almost a thousand customers.
Ultimately, we made the choice to go with one of the first software-defined storage architectures, from a company called LeftHand Networks, later acquired by HPE, and then some 3PAR equipment, also acquired by HPE. Those were, by far, the biggest factors when we made the selection of our storage platform.
Gardner: Give us a sense of the scale-out requirements.
Hulbert: We have three primary data centers in the Pacific Northwest and one in Dallas, Texas. We also have a little bit of space in New York for some of our East Coast customers, and one site in San Jose, California. So, we have five data centers in total.
Gardner: Is there a typical customer, or a wide range of customers?
Big range
Hulbert: We have a pretty big range. Our typical customers are in finance, travel and tourism, and the hospitality industries. There are quite a few in there. Healthcare is a growing vertical for us as well.
We round that out with manufacturing and a little bit of retail. One of our verticals, if you could call it a vertical, is the MSPs and IT companies, and even some VARs, that are moving into the cloud.
We enable them to do their managed services and be the "boots on the ground" for their customers. That spreads us into the tens of thousands of customers, because we have about 25 to 30 MSPs that work with us throughout the country, using our infrastructure. We just provide the infrastructure as a service, and that's been a growing vertical for us.
Gardner: And then, across that ecosystem, you're doing colocation, cloud hosting, managed services? What's the mix? What’s the largest part of the pie chart in terms of the services you're providing in the market?
Hulbert: We're about 75 percent cloud hosting, specifically a VMware-based private cloud, a multi-tenant private cloud. It's considered public cloud, but we call it private cloud.
We do a lot of hybrid cloud, where we have customers that are bursting into Amazon or [Microsoft] Azure. So, we have the ability to get them either Amazon Direct Connect connections or Azure ExpressRoute connections into any of our data centers. Then, 20 percent is colocation, and about 5 percent for backup and disaster recovery (DR) rounds that out.
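As a concrete illustration of that kind of hybrid connectivity, here is a minimal Python sketch, using the AWS boto3 Direct Connect client, of how a dedicated low-latency port might be requested programmatically. The region, location code, and connection name are hypothetical placeholders, not Opus Interactive's actual configuration.

```python
# Minimal sketch: requesting an AWS Direct Connect port with boto3.
# The location code and connection name below are hypothetical examples.
import boto3

dx = boto3.client("directconnect", region_name="us-west-2")

# List available Direct Connect locations to pick the lowest-latency site.
for loc in dx.describe_locations()["locations"]:
    print(loc["locationCode"], loc["locationName"])

# Request a dedicated 1 Gbps cross-connect at a chosen location.
connection = dx.create_connection(
    location="EqSE2",                  # example code; use a real one from above
    bandwidth="1Gbps",
    connectionName="customer-burst-link",
)
print(connection["connectionId"], connection["connectionState"])
```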
Gardner: Everyone, it seems, is concerned about digital disruption these days. For you, disruption is probably about not being able to meet demand. You're in a tight business, a competitive business. What’s the way that you're looking at this disruption in terms of your major needs as a business? What are your threats? What keeps you up at night?
Still redundant
Hulbert: Early on, we wanted a concurrently maintainable infrastructure, which also follows through with the data centers that we're at. So, we needed Tier 3-plus facilities that are concurrently maintainable. We wanted the infrastructure to be the same. We're not kept up at night, because we can take an entire section of our solution offline for maintenance. It could be a failure, but we're still redundant.
It's a little bit more expensive, but we're not trying to compete with the commodity hosting providers out there. We're very customized. We're looking for customers that need more of that high-touch level of service, and so we architect these big solutions for them -- and we host them with 100 percent uptime.
The infrastructure piece is scalable with scale-out architecture on the storage side. We use only HP blades, so that we just keep stacking in blades as we go. We try to stay a couple of blade chassis ahead, so that we can take pretty large bursts of that infrastructure as needed.
That's the architecture I would recommend to other service providers looking for a way to scale out without having to do any lift-and-shift on their SAN, or even rack-and-stack servers, which take more time.
We'd have to cable all of those individually, versus one blade chassis: you can just slot in 16 blades quickly as you're scaling. That allows you to scale quite a bit faster.
Gardner: When it comes to making the choice for software-defined, what has that gotten you? I know people are thinking about that in many cases -- not just service providers, but enterprises. What did software-defined storage get for you, and are you extending your software-defined architecture to more parts of your infrastructure?
Hulbert: We wanted it to be software-defined because we have multiple locations and we wanted one pane of glass. We use HPE OneView to manage that, and it would be very similar for an enterprise. Say we have 30 remote offices, we want to put the equipment there, and the business units need to provision some servers and storage. We don't want to be going to each individual appliance, chassis, or application; we want one place to provision it all.
Since we're dealing now with nearly a thousand customers -- and thousands and thousands of virtual servers, storage nodes, and all of that -- the chunklets of data are distributed across all of these. Being able to do that from one single pane of glass, from a management standpoint, is quite important for us.
So, it's that software-defined aspect, especially distributing the data into chunklets, which allows us to grow quicker, and putting a lot of automation on the back-end.
We only have 11 system administrators and engineers on our team managing that many servers, which shows you that our density is pretty high. That only works well if we have really good management tools, and having it software-defined means fewer people walking to and from the data center.
Even though our data centers are manned facilities, our infrastructure is basically lights out. We do everything from remote terminals.
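As an illustration of that lights-out, remote-terminal style of management, here is a minimal sketch against HPE OneView's REST API using Python's requests library. The appliance address, credentials, and API version are placeholders; a real deployment would use proper certificate verification and secret handling.

```python
# Minimal sketch: authenticate to an HPE OneView appliance and list
# managed server hardware. Host and credentials are placeholders.
import requests

ONEVIEW_HOST = "oneview.example.net"      # hypothetical appliance address
HEADERS = {"X-Api-Version": "300"}        # illustrative API version

# Log in and obtain a session token.
resp = requests.post(
    f"https://{ONEVIEW_HOST}/rest/login-sessions",
    json={"userName": "administrator", "password": "secret"},
    headers=HEADERS,
    verify=False,  # lab-only shortcut; verify TLS certificates in production
)
session_id = resp.json()["sessionID"]

# Inventory the managed servers -- the single-pane view that replaces
# walking to and from the data center.
servers = requests.get(
    f"https://{ONEVIEW_HOST}/rest/server-hardware",
    headers={**HEADERS, "Auth": session_id},
    verify=False,
).json()

for member in servers.get("members", []):
    print(member["name"], member.get("powerState"))
```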
Gardner: And does this software-defined extend across networking as well? Are you hyper-converged, converged? How would you define where you're going or where you'd like to go?
Converged infrastructure
Hulbert: We're not hyper-converged. At our scale, we can't get into the prepackaged hyper-converged products. For us, it would be more of a converged-infrastructure approach.
As I said, we do use the c-Class blade chassis with Virtual Connect, which is software-defined networking. We do a lot of VLANs and things like that on the software side.
We still have some networking outside of that, out of band -- the network stacks -- because we're not just a cloud provider. We also do colocation and a lot of hybrid computing where people are connecting between the two. So, we have to worry about Fibre Channel and iSCSI and connections into the SAN.
That adds a couple of other layers and a few extra management steps, but at our scale, it's not as if we're adding tens of thousands of servers a day, or even an hour, as I'm sure Amazon has to. So we can take that one small hit of pulling that portion of the networking out, and it works pretty well for us.
Hulbert: Absolutely. One of the first ones I mentioned earlier was the ability for customers that want to burst into public cloud to do the Amazon Direct Connects. Even over the telecom providers' backbones, you're looking at 15 to 25 milliseconds of latency. For some of these applications, that's just too much latency, so it's not going to work.
Now, with the most recent announcement from Amazon, they put a physical Direct Connect node in Oregon, about a mile from our data-center facility. It's at EdgeConneX, with whom we've partnered.
Now, we can offer the lowest latency for both Amazon Direct Connect and Azure ExpressRoute in the Pacific Northwest, specifically in Oregon. That's really huge for our customers, because we have some that do a lot of public-cloud bursting on both platforms. So that's one new offering we're doing.
Disruption, as we've heard, is around containers. We're launching a new container-as-a-service platform later this year based on ContainerX. That will allow us to do containers for both Windows and *nix platforms, regardless of what the developers are looking for.
We're targeting developers, DevOps guys, who are looking to do microservices to take their application, old or new, and architect it into the containers. That’s going to be a very disruptive new offering. We've been working on a platform for a while now because we have multiple locations and we can do the geographic dispersion for that.
I think it’s going to take a little bit of the VMware market share over time. We're primarily a VMware shop, but I don’t think it’s going to be too much of an impact to us. It's another vertical we're going to be going after. Those are probably the two most important things we see as big disruptive factors for us.
Hybrid computing
Gardner: As an organization that's been deep into hybrid cloud and hybrid computing, is there anything out there in terms of the enterprises that you think they should better understand? Are there any sort of misconceptions about hybrid computing that you detect in the corporate space that you would like to set them straight on?
Hulbert: The hybrid that people typically hear about is more like having on-premises equipment. Let's say I'm a credit union, and at one of the bank branches we decided to put three or four cabinets of our equipment in one of the vaults. Maybe they've added one UPS and one generator, but it's not to the enterprise level, and they're bursting to the public cloud for the things that make sense within their security requirements.
To me, that’s not really the best use of hybrid IT. Hybrid IT is where you're putting what used to be on-premises in an actual enterprise-level, Tier 3 or higher data center. Then, you're using either a form of bursting into private dedicated cloud from a provider in one of those data centers or into the public cloud, which is the most common definition of that hybrid cloud. That’s what I would typically define as hybrid cloud and hybrid IT.
Gardner: What I'm hearing is that you should get out of your own data center, use somebody else's, and then take advantage of the proximity in that data center, the other cloud services that you can avail yourself of.
Hulbert: Absolutely. The biggest benefit to them is at their individual locations or bank branches. This is the scenario where we used the credit union. They're going to have maybe one or two telco providers, and those are going to be 100 or maybe 200 Mb-per-second circuits.
They're paying a hefty premium for those, and when they get into one of these data centers, they're going to have the ability to have 10-gig, or even 40- or 100-gig, connected internet pipes, with a lot more headroom for connectivity at a better price point.
On top of that, they'll have 10-gig connection options into the cloud -- all the different cloud providers. Maybe they have an Oracle stack that they want to put on an Oracle cloud someday, along with their own on-premises equipment. The hybrid pieces get more challenging, because at their own site they're not going to get the connectivity they need. Maybe they want to get into the software side, they want to do Amazon or Azure, or maybe they want an Opus cloud.
They need faster connectivity for that, but they have equipment that still has usable life. Why not move that to an enterprise-grade data center and not worry about air-conditioning challenges, electrical problems, or whether it's secure?
All of these facilities, including ours, check every box for the compliance and auditing that happens on an annual basis. Those things that used to be real headaches aren't core to their business; they don't do them anymore. They can focus on what's core: the application and their customers.
Gardner: So proximity still counts, and probably will count for an awfully long time. You get benefits from taking advantage of proximity in these data centers, but you can still have, as you say, what you consider core under your control, under your tutelage and set up your requirements appropriately?
Mature model
Hulbert: It really comes down to the fact that the cloud model is very mature at this point. We've been doing it for over a decade; we started doing cloud before it was even called cloud, when it was just virtualization. We launched our platform in late 2005, and it has proved out, time and time again, with 100 percent uptime.
We have one example of a large customer, a travel and tourism operator that brings visitors from outside the US to the US. They do over $1 billion a year in revenue, and we host their entire infrastructure.
It's a lot of infrastructure and it’s a very mature model. We've been doing it for a long time, and that helps them to not worry about what used to be on-premises for them. They moved it all. A portion of it is colocated, and the rest is all on our private cloud. They can just focus on the application, all the transactions, and ultimately on making their customers happy.
Gardner: Going back to the storage equation, Eric, do you have any examples of where software-defined storage gave you the opportunity to satisfy customers or hit price points -- business or technical metrics that demonstrate how this new approach to storage fills out the cost equation?
Hulbert: In terms of the software-defined storage, the ability to easily provision the different sized data storage we need for the virtual servers that are running on that is absolutely paramount.
We need super-quick provisioning, so we can move things around. When you add in the layers of VMware, like storage vMotion, we can replicate volumes between data centers. Having that software-defined makes that very easy for us, especially with the built-in redundancy that we have and not being controller-bound like we mentioned earlier on.
Those are pretty key attributes, but on top of that, as customers are growing, we can very easily add more volumes for them. Say they have a footprint in our Portland facility and want to add a footprint in our Dallas, Texas facility and do geographic load balancing. It makes it very easy for us to do the replication between the two facilities, slowly adding on those layers as customers need to grow. It makes that easy for them as well.
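As a sketch of what that storage mobility looks like at the VMware layer, here is a minimal pyVmomi example of a Storage vMotion: relocating a running VM's disks to another datastore. The vCenter host, credentials, and the VM and datastore names are hypothetical, and production code would verify certificates and wait on the task.

```python
# Minimal sketch: Storage vMotion with pyVmomi -- move a VM's disks to a
# new datastore while the VM keeps running. All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only; verify certs in production
si = SmartConnect(host="vcenter.example.net", user="admin",
                  pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first managed object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "customer-web-01")
target_ds = find_by_name(vim.Datastore, "pdx-3par-vol-02")

# Relocate only the storage; no host change, no downtime.
spec = vim.vm.RelocateSpec(datastore=target_ds)
task = vm.RelocateVM_Task(spec=spec)   # async; poll task.info in real code

Disconnect(si)
```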
Gardner: One last question: what comes next in terms of containers? What we're seeing is that containers have a lot to do with developers and DevOps, but ultimately I'd think that the envelope gets pushed out into production, especially when you hear about things like composable infrastructure. If you've been composing infrastructure in the earlier part of the process, in development, it takes care of itself in production.
Do you actually see more of these trends accomplishing that, where production is lights-out like you are, and where more of the definition of infrastructure, applications, productivity, and capabilities happens in that development and DevOps stage?
Virtualization
Hulbert: Definitely. Over time, it is going to be very similar to what we saw when customers were moving from dedicated physical equipment into the cloud, which is really virtualization.
This is the next evolution, where we're moving into containers. At the end of the day, the developers, the product managers for the applications for whatever they're actually developing, don't really care what and how it all works. They just want it to work.
They want it to be a utility consumption-based model. They want the composable infrastructure. They want to be able to get all their microservices deployed at all these different locations on the edge, to be close to their customers.
Containers are going to be a great way to do that, because developers don't have all the overhead of dealing with the operations side. They can just put these little APIs and the different things that they need where they need them. As we see more of that pushed to the edge to get the eyeball traffic, that's going to be a great way to do it. And with the ability to burst even further into the bigger public clouds worldwide, I think we can get to a really large scale in a great way.
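To make the microservices point concrete, here is a minimal, standard-library-only Python sketch of the kind of service a developer might hand to a container platform: one small HTTP endpoint plus a health check, with configuration injected through environment variables rather than baked into the code. The paths and variable names are purely illustrative.

```python
# Minimal sketch of a containerizable microservice: one endpoint plus a
# health check, configured entirely from the environment.
import json
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            body = b"ok"  # answered by the orchestrator's liveness probe
        else:
            # Report which region/edge location served the request.
            body = json.dumps(
                {"region": os.environ.get("REGION", "unknown")}
            ).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The platform injects the port and region; the code stays portable.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), Handler).serve_forever()
```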
Gardner: So please join me in thanking our guest, Eric Hulbert, CEO at Opus Interactive in Portland, Oregon. Thank you, Eric.
Hulbert: Thank you very much. I appreciate it.
Gardner: And I'd also like to thank our audience for joining us for this Hewlett Packard Enterprise Voice of the Customer podcast. I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host for this ongoing series of HPE-sponsored discussions. Thanks again for listening, and come back next time.
Transcript of a discussion on scaling benefits from improved storage infrastructure at a multi-tenant hosting organization. Copyright Interarbor Solutions, LLC, 2005-2016. All rights reserved.