Introducing On-Demand OpenStack Private Clouds and Initial Use Cases

InMotion Hosting has recently brought to market an automated deployment of OpenStack and Ceph that we sell as on-demand Private Cloud and as part of our Infrastructure as a Service. We believe that making OpenStack more accessible is critical to the health of the OpenStack community, as it gives smaller teams a low-risk, cost-effective way to learn and run OpenStack.

Because this is a new type of offering, some context and market positioning are in order. Based on the speed at which the cloud is delivered, we adopted the term “on demand” internally and have continued to use it publicly.

On-Demand Private Cloud Defined

Closed source on-demand private clouds emerged in 2018 from traditional industry names building on their history in on-premises private clouds. VMware and Nutanix, for example, accomplished this through partnerships with large public clouds. Open source on-demand private clouds emerged in late 2020 and are currently playing catch-up to the closed source solutions.

The features and functionality of a full OpenStack deployment are typically considered ahead of the closed source solutions. Critically, though, the closed source solutions were significantly easier to set up and had a much more predictable likelihood of success. This gave the closed source offerings a significant advantage and made them a better fit for smaller companies.

Time to Utilization/Time to Production

As we considered overcoming this disadvantage, we applied the Agile philosophy around delivery time. Consider “time to production” as measured from the moment a need for resources is recognized to the launch of a VM or container to meet that need. For our private clouds, time to utilization was commonly measured in quarters before 2016, then months in 2016-2017, then weeks in 2018-2019, and then in 2020 it fell quickly from weeks to minutes.

Provisioning time is now between 30 and 45 minutes for a three-server cluster, depending on the hardware and install complexity: NVMe deploys faster than SATA SSD, and including more OpenStack components takes longer. The next development cycles are focused not on faster delivery but on including more OpenStack components within the same sub-one-hour window. Moving from 45 minutes to under 20 minutes is possible, but it does not appear to be very important to customers at this time.

Usage Based Billing

In addition to fast “time to utilization”, usage-based billing is a key part of being on-demand. Because spin-up takes under an hour, the cloud can be billed by the hour. Though usage-based billing is of limited value for long-term production environments, it is a required attribute for variable workloads, PoCs, testing, and training.
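To make the billing model concrete, here is a minimal Python sketch of how hourly, usage-based billing with a one-hour minimum might be computed. The per-hour rate and the round-up-to-the-hour behavior are illustrative assumptions, not our published pricing.

    import math

    def billable_hours(runtime_hours: float, minimum_hours: int = 1) -> int:
        """Round usage up to whole hours, subject to a one-hour minimum (assumed model)."""
        return max(math.ceil(runtime_hours), minimum_hours)

    # Hypothetical example: a 90-minute proof of concept on a cluster at an
    # assumed $2.50/hour rate bills as 2 hours, i.e. $5.00.
    HOURLY_RATE = 2.50  # placeholder rate, not a real price
    print(billable_hours(1.5) * HOURLY_RATE)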

Small and Low Cost Building Blocks

Finally, small building blocks are a critical part of being on-demand. Currently, the smallest full private cloud is composed of three hyper-converged servers, each with a single CPU and a single SATA SSD storage OSD. This creates an HA, Ceph-backed OpenStack cloud billed per hour with only a one-hour minimum.

To summarize the definition: fast time to utilization, usage-based billing, and small building blocks. Looking ahead, it also sets up a comparison against the public mega-clouds on the ease of spinning up VMs or containers, running the workload, turning them off, and paying only for what you have used.
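As a sketch of that spin-up, run, tear-down lifecycle, the Python snippet below uses the openstacksdk to boot a server and delete it once the work is done. The cloud name, image, flavor, and network names are placeholders for whatever exists in your own clouds.yaml and catalog.

    import openstack

    # Credentials and endpoints come from a "demo-cloud" entry in clouds.yaml (placeholder name).
    conn = openstack.connect(cloud="demo-cloud")

    image = conn.compute.find_image("ubuntu-20.04")    # placeholder image name
    flavor = conn.compute.find_flavor("m1.small")      # placeholder flavor name
    network = conn.network.find_network("private")     # placeholder network name

    server = conn.compute.create_server(
        name="burst-worker",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)  # block until the VM is ACTIVE

    # ... run the workload ...

    conn.compute.delete_server(server)  # release the resources; usage-based billing stops here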

First Use Cases for On-Demand Private Cloud

The initial use cases we have either placed or are actively selling fall into the following categories. Please note these are early, extrapolated findings based on a limited set of data. Combined with our experience in the market, these are the use cases we expect to gain significant traction over the next several years, though it is still early.

Production Private Clouds with Managed Support Level

Many companies are feeling the pressure to “move to the cloud” from both on-site resources and traditional hosting providers. We found that many companies would prefer to work within a closed system that gives them access to cloud functionality without having to learn how to control costs at the service, VM, or other micro levels.

This group has so far wanted to focus on using the cloud rather than both running and using it. We are now formally adding managed services to accommodate these customers.

The hyper-converged hardware has fit this use case well. Of note, ML and AI customers are also very interested in hardware components like GPUs, and we will be adding those to our catalog in the coming months.

On-Demand OpenStack Cloud for Training and R&D Purposes

This was a use case we expected. In days past, creating a high-quality OpenStack cloud required a group of skilled systems engineers, including specialists in hardware, networking, security, and Linux. It is a stretch for a medium-sized business to have these skills on staff, and it is unlikely that a small business will have more than one of them.

Even with such a group, most will not have experience with OpenStack. To learn to run a private cloud, the IT team has to convince their company to finance a “pilot program” of the potential cloud. Before on-demand OpenStack, those pilots could cost hundreds of thousands of dollars in server and network gear, plus 3-12 months’ worth of time. Even then, many, maybe even the majority, of the pilots never turned into a production cloud.

Many enterprise-focused companies, like Red Hat, Canonical, and Accenture, successfully help enterprises bridge that gap economically. Smaller IT teams, however, simply could not access the benefits and cost savings of private cloud.

With the advent of on-demand private cloud providers, the two biggest barriers, upfront cost and lead time, have been overcome. Now these users can learn with on-demand OpenStack, and regardless of where the deployment goes, they have cut significant time and risk out of adopting OpenStack.

Proofs of Concept for Workloads from the Public Cloud

With the cost of the mega-clouds being so high, it is natural that this use case is, and will remain, significant. At this time, we are not sure whether these users will prefer a managed private cloud or will assign staff within their company to become cloud operators.

Currently, we are actively pursuing the latter, as it gives the company the lowest-cost option. We also see improvements in the ease of being a private cloud operator thanks to key advancements like containerized control planes. A company with a reasonably skilled systems team can take on the cloud operator duties, grow its in-house expertise, and save money.
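As a rough illustration of what a containerized control plane looks like to an operator, the Python sketch below assumes a Kolla-Ansible-style deployment where each OpenStack service runs in its own Docker container (one common approach, not necessarily our specific tooling) and uses the Docker SDK to report which control plane services are running.

    import docker

    # Connect to the local Docker daemon on a control plane node.
    client = docker.from_env()

    # Common OpenStack control plane services (illustrative, not an exhaustive list).
    services = ("keystone", "nova", "neutron", "glance", "cinder", "horizon")

    for container in client.containers.list():
        if any(svc in container.name for svc in services):
            print(f"{container.name:40s} {container.status}")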

Data Center Providers Adding On-Demand Private Cloud as an Offering

Data center providers used to make significant portions of their revenue from smaller customers that would often purchase just quarter and half racks. Much of that type of business has moved away from direct purchases in data centers to either the mega-clouds or bare metal hosting providers.

To offset this pattern, very large data center providers with the resources to adapt their business model have been moving into the cloud provider space for some time. For example, Equinix acquired Packet a few years ago and now offers on-demand bare metal as Equinix Metal.

Smaller DC providers have not had the budget or engineering prowess to buy or create a cloud offering of their own. As on-demand, open source private cloud technology matures, we expect, and have already seen, significant interest from that sector. With the incredibly rich functionality supplied by OpenStack, a small DC provider could offer cloud products that compete with the mega-clouds.

Public Cloud Providers Using OpenStack Already

We are actively working on this use case, as we feel that enabling OpenStack public cloud providers to compete more effectively with the mega-clouds is critical to the health of open source.

Hyper-converged, on-demand hardware plus a fully private OpenStack opens a few doors for current OpenStack public cloud providers. In the past, offering an additional location meant running that location in the red for quite some time until its customer base reached a certain scale. Without significant resources, only a few locations were possible.

Because OpenStack has strong native functionality for regions and for letting one OpenStack deployment control many sites, it is straightforward to add numerous small clouds in different geographic locations. We specifically built small footprints that can scale up as demand requires, so an OpenStack public cloud provider can offer many locations with a much smaller investment than before.
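For illustration, the sketch below uses the openstacksdk to register a second region in Keystone and then connect to it. The cloud entry and region names are placeholders, and a real rollout would also register per-service endpoints for the new region.

    import openstack

    # Admin connection to the existing deployment ("demo-cloud" is a placeholder clouds.yaml entry).
    admin = openstack.connect(cloud="demo-cloud")

    # Register the new region in the Keystone catalog. A real deployment would also
    # create service endpoints (compute, network, image, ...) scoped to this region.
    admin.identity.create_region(id="RegionTwo", description="Second small-footprint site")

    # Clients can then target a specific region explicitly.
    region_two = openstack.connect(cloud="demo-cloud", region_name="RegionTwo")
    for hypervisor in region_two.compute.hypervisors():
        print(hypervisor.name)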

We have our own roadmap challenge in adding many locations, as we must reach critical mass, but our mission is to provide enough locations that any small OpenStack public cloud provider can match up with larger competitors and even the mega-clouds.

Next Steps

Our 2021 roadmap is concentrated on a few areas:

  • Adding additional OpenStack-based functionality into the current hardware footprints.
  • Partnering with other Open Source friendly companies to offer best-of-breed tools for monitoring, disaster recovery, ML/AI automated operations, infrastructure automation, etc.
  • Building market awareness that small, on-demand OpenStack has arrived and that smaller teams now have access to an open source alternative to the mega-clouds.

If you are interested in what we are doing, please reach out to us or come explore On-Demand Private Cloud.
