April 2022 vPTG Recap

That’s a wrap! Our most recent virtual Project Team Gathering (PTG) ran from April 4th to 8th. What is hopefully our last virtual PTG for a while ended on a high note with lots of productive conversations. The PTG hosted a variety of Open Infrastructure Foundation teams and projects from OpenStack and StarlingX to the Diversity and Inclusion Working Group and the Edge Computing Group. 

Below are summaries written by community members about the meetings various teams had. 

Don’t see your summary in the list? Let us know! We are happy to get it added! 

Stay tuned for updates on our next Project Team Gathering! Hopefully we will SEE you and your team there!

2021 OpenInfra Annual Report: Large Scale SIG

The OpenInfra Foundation annual report for 2021 is now live, including sections on open infrastructure growth, all OpenInfra projects, Foundation updates, SIG and WG updates, and more.

The Large Scale SIG is an OpenStack special interest group gathering operators of large scale OpenStack deployments interested in sharing their experience and discussing best practices. The group has been meeting regularly this year, with about 6 people forming the core group and a dozen other people involved more infrequently.

One output of the group is the Scaling Journey: a set of wiki pages that describe the various stages of scaling your OpenStack deployment from tens of nodes to tens of thousands of nodes. That path has been traveled by many operators before, but a lack of documentation and practical experience sharing still makes it a daunting prospect. The goal of the SIG is to document frequently asked questions and answers, as well as point to relevant resources, to make that journey as predictable and pleasant as possible.

Another focus of the Large Scale SIG this year was the “Large Scale OpenStack” show, a recurring event on the OpenInfra Live webcast. We invited operators of large scale deployments to present how they solve a given operations challenge and to discuss their different approaches live. We tackled topics like upgrades in large scale environments, spare capacity handling, software-defined supercomputers, scaling Neutron, and operator tricks and tools. The show was among the most popular episodes on OpenInfra Live, and our goal is to continue it in 2022.

Read the full 2021 OpenInfra Annual Report here!


2021 OpenInfra Annual Report: OpenStack

OpenStack continues to be one of the most active open source communities with more than 25 million cores in production. This year, the OpenStack community continued to produce more software that is run in production. Notable highlights include:

  • Wallaby: Wallaby, the 23rd release of OpenStack, was developed by over 800 contributors from 140 organizations in 45 countries. Comprising more than 17,000 code changes, Wallaby focused on integration with other open source projects like Ceph, Kubernetes, and Prometheus. OpenStack continues to be the third most active open source project (along with the Linux kernel and Chromium).
  • Xena: The Xena release brought with it better integration amongst OpenStack projects, support for advanced hardware features, and reduction of technical debt. Nova’s support for SmartNICs and ECMP routes in Neutron are a few of the hardware features now supported. Together, over 680 contributors worked in a short 25 weeks to present the 24th release of OpenStack.
  • Technical Writing SIG Dissolution: After successfully migrating the docs to project repositories several releases ago, and mostly maintaining an advisory role now, the Technical Writing SIG decided to dissolve and migrate the last of their repositories to other teams. At the PTG, the Technical Writing SIG Chair met with the Technical Committee and the First Contact SIG and worked out a plan to retire the Technical Writing SIG.
  • Skyline: OpenStack welcomed its first new project since the addition of Adjutant in 2018! Skyline is an OpenStack dashboard built with React as an alternative to the Horizon dashboard. While not ready for production use just yet, the dashboard is engineered so that functions directly call OpenStack APIs, making the dashboard easier for developers to maintain and interactions faster and more efficient for users. This effort has been supported by the Horizon team as a future replacement once Skyline’s functionality gets closer to parity with Horizon. The Technical Committee accepted it with the caveat that Skyline will be labeled a ‘tech preview’ until it undergoes the changes needed to bring it more in line with how other OpenStack services are organized and released.
  • TC Stance on OpenStackClient: The Technical Committee formalized their stance on the OpenStackClient in a resolution this year. Rather than making the development of OpenStackClient a community-wide goal, while still wanting to show support for the OpenStackClient team, they settled on a resolution with the intention of circling back later to evaluate whether the OpenStack community is ready for an OpenStackClient community goal.
  • IRC Network Migration: There was a change in ownership, organizational structure, and policy at Freenode, the IRC network that the community had made use of for years. In response, the community discussed alternatives and settled on migrating to a different network while keeping IRC as the synchronous chat platform our community uses. As of late May, the OpenStack community has moved to the OFTC network. Most teams were a part of this transition.

Get involved:

MORE OPENSTACK IN PRODUCTION THAN EVER

In November 2020, the OpenStack community celebrated 15 million cores in production. Just 12 months later, over 25 million cores were recorded in the annual OpenStack User Survey, marking 66% growth compared to 2020. This growth was seen among organizations of all sizes, including seven organizations each reporting over 1 million cores in production.

Walmart, LINE, Workday, China Mobile, and Verizon Media were among the founding members of the OpenStack Million Core Club. They were recognized during the OpenInfra Live: Keynotes, celebrating their incredible scale and innovation with OpenStack.

CERTIFIED OPENSTACK ADMINISTRATOR (COA) EXAM

The Certified OpenStack Administrator exam is the only professional certification offered by the OpenInfra Foundation.

In 2021,

  • 109 COA exam vouchers were purchased
  • 193 students passed

Among the exam takers, students were from 29 countries, including

  • United States
  • United Kingdom
  • Canada
  • Bolivia
  • China
  • Colombia
  • Cyprus
  • Egypt
  • France
  • Germany
  • Ghana
  • Guatemala
  • Hungary
  • India
  • Indonesia
  • Mexico
  • Netherlands
  • Peru
  • Poland
  • Russia
  • Saudi Arabia
  • Singapore
  • Spain
  • Switzerland
  • Thailand
  • Turkey
  • Uganda
  • Ukraine
  • Vietnam

OPENSTACK UPSTREAM INSTITUTE

The training was held twice throughout the year. The first took place as part of the stand-alone Open Source Day event held by the Anita B Foundation, which also hosts the Grace Hopper Celebration. The ratio of mentors to students was very high (2:1), which yielded not only a productive day full of good discussions but a highly tailored experience for the attendees who participated. Upstream Institute was held in a one-day format where the mentors focused on sharing information about the OpenStack community, including the tools and processes that contributors use on a daily basis.

The afternoon section of the training concentrated on hands-on experience, where students worked on reproducing and fixing bugs in the OpenStack code base. Attendees who were able to stay for the afternoon learned how to push a code change upstream for review, and several of them went above and beyond: they were able to reproduce their bugs and also submit fixes for them. The training was very successful, and we got great feedback and engagement from students.

The second training held in 2021 was also an online training, held as part of the Open Source Day within the Grace Hopper Celebration in October. Similar to the first rendition, the training was held in a one-day format with lectures in the morning and hands-on exercises in the afternoon. Again, the students worked on fixing real bugs, and the more than 14 mentors made sure that attendees learned the mechanics of uploading a change for review, which is essential for contributing code or documentation after the course. Four attendees pushed patches with real fixes to OpenStack bugs and, as a result, entered a drawing that Grace Hopper was hosting to incentivize working on open source.

While opportunities to hold the training were again very limited in 2021, a group of mentors kept maintaining the course material to ensure it was up to date for every training occasion and to provide the best experience to the students who joined. The content, being fully open source and available online, also allows individuals and organizations to go through it as a self-paced course or run it locally.

You can read the full 2021 OpenInfra Annual Report here!


Cloud Sovereignty: a Fashionable Trend or Vital Need?

The number of newly founded NGOs announcing Sovereign Cloud development as their primary goal increased vastly last year. Sovereignty has become the must-have of the season. What should you choose for your cloud data – to be trendy and protected, or to stay in the dark about your data safety? In fact, you have no choice and no right to take that risk.

Valuable Data

What exactly is data sovereignty in the cloud environment, and how can we meet these present-day requirements? Data sovereignty refers to the jurisdictional control or legal authority that can be asserted over data because it is subject to the laws of the country in which it resides.

The main goals of cloud sovereignty are protecting sensitive, private data and ensuring it remains under its owner’s control. Most countries have jurisdiction on the matter, and it is evolving continuously and rapidly. There are essentially two basic requirements for cloud sovereignty, varying from country to country: the cloud and/or a party controlling the cloud may have to be located within the country.

In certain situations, the requirements for cloud sovereignty are stringent. In China, for example, in many instances, it is obligatory for the cloud provider to be a Chinese company. 

Data residency, in its turn, is a privacy and business prerogative and refers to the physical data storage location. This term is essential for commercial and taxation purposes. In the majority of cases, data residency mirrors data sovereignty rules and laws within a country.

Control and Access

The one who owns information owns the world. We still might not fully realize how crucial data is. Often called the “new oil,” data gives enormous power to those who own it and opens vast opportunities across industries – from statistics to strategic business planning and decision making. Data influences a country’s politics, economics, defense and foreign affairs. Today the availability or lack of data can prevent national conflicts or provoke a war. Nobody doubts that data has become a new weapon. That is why it is so important to keep data controlled and sovereign, to ensure it does no harm. Without laws that enforce adequate data sovereignty compliance, even your personal information – which is no less valuable than your money – could easily be abused.

But what kind of data falls under sovereignty jurisdiction and should be kept in the country? Independent of the industry, the first type of gathered data to be protected is personal data. It includes everything from a person’s basic identity information – name, address, ID numbers – to web data such as location, IP address, cookie data and RFID tags, as well as health and genetic data, biometric data, racial or ethnic data, political opinions and sexual orientation.

In our everyday life, we leave behind a great deal of information that grows into the Big Data phenomenon you have surely heard of. What might seem insignificant at first sight turns out to be a powerful tool on closer inspection. So, the next time you use a public cloud, ask yourself: are you certain that the customers ordering food on your platform are happy for their private data (what they had for dinner last night) to end up in the hands of a foreign government? This simple example shows the importance of everyday information once we multiply it by the number of people in a country or region. If you keep going through a person’s daily life, you very quickly realize that just about anything and everything can be considered critical enough to stay in the country.

Data Travel Restrictions

As many IT trends originate from the USA, data sovereignty is no exception. Many credit its rising popularity to Snowden’s leak that exposed the US NSA PRISM spying program. The US government collected data not only from US citizens but also from foreign nationals. In particular, the US government has the authority to access data stored within its territory regardless of the data’s origin. Remember also Facebook’s Cambridge Analytica scandal, where users’ data was collected without their explicit consent. These situations emphasized the importance of data sovereignty, and governments worldwide have been focusing on the matter to protect their countries and citizens against information leaks and their possible consequences.

US

The US has no general consumer data privacy law at the federal level. It does, however, have many industry-specific federal protection laws – for example, the 1994 Driver’s Privacy Protection Act and the Video Privacy Protection Act. Laws also vary from state to state. California Consumer Privacy Act (CCPA) is one of the most prominent data privacy laws in the United States.

EU

The European Union GDPR (General Data Protection Regulation) is a great example of data sovereignty law. Enacted in 2016, it governs the data protection and privacy of EU citizens and regulates the transfer of data outside the borders of the EU and the European Economic Area. However, countries like Germany and France have strict laws of their own to protect their citizens’ data. 

Germany

Germany has implemented the new German Privacy Act (BDSG-new), which restricts data transfers to third countries. Companies that process people’s personal information have to fulfill the German government’s data protection requirements, even if they are located outside the country’s borders.

Indonesia, Brunei, China, Vietnam, Russia

Laws related to data protection in these countries are probably amongst the strictest. They have stringent requirements that the data has to be stored on servers within the country. 

Argentina, Brazil, Colombia, Peru and Uruguay

Data localization laws in Argentina, Brazil, Colombia, Peru and Uruguay are relatively mild. Some restrictions apply to international data transfers, but only under certain conditions.

Instant Solution: Protected, Controlled and Sovereign Cloud

Over the last year or so, various projects have appeared to declare and serve the principles of cloud sovereignty and to develop cloud data rules and restrictions at the country or regional level. Some are supported by governments; others are initiatives of IT communities, or business ideas hoping to become highly profitable in the future. They promise data protection and cloud sovereignty. However, none of them can offer independent, controlled and secure cloud data management today, right now.

An immediate solution for the cloud sovereignty issues a business may face already exists, and the answer is on the surface. Private clouds can easily satisfy all possible requirements for data protection, geographical localization, control, access and security. By its very nature, a private cloud is located within the country, which means complete cloud sovereignty for enterprises: the private cloud workload and data are under the country’s jurisdiction. A private cloud running on hardware physically placed within the state complies with all the laws and regulations; the data never crosses the border to leave the country.

If you wonder how to achieve data sovereignty in a cloud environment right away, Sardina Systems is here to help you. Our brainchild FishOS is an efficient cloud management software for enterprises that can run and serve the data inside your country. Thanks to our numerous partners, the necessary hardware and data center facilities can be provided and hosted in a short time. The diverse customer experience worldwide gave us the essential practical knowledge in delivering a cloud that fully meets the country’s data sovereignty requirements. 

What is your choice? To put the business at risk and wait until all the data protection rules are settled? Or to run the business today, with a high level of data security and sovereignty in your private cloud environment?

Source: www.sardinasystems.com

OpenStack’s 24th Release, Xena, Wields Powerful Hardware Support, Project Integration to Strengthen Open Infrastructure for Cloud-Native Applications

The latest release occurs as the 2021 User Survey reveals significant growth in OpenStack deployments ranging from hundreds of cores to six million cores; Over 100 new OpenStack clouds have been built, growing the total number of cores under OpenStack management to more than 25,000,000 cores

The OpenStack community today released Xena, the 24th version of the most widely deployed open source cloud infrastructure software. Highlights of the Xena release include support for new hardware features, improved integration among components, and reduction of technical debt to maintain OpenStack’s stable and reliable core.

OpenStack is one of the most active open source projects in the world, supported by a vibrant and engaged community of developers globally. Over the span of just 25 weeks, almost 15,000 changes authored by over 680 contributors from over 125 different organizations were included in the Xena release.

This release comes at a time when the OpenStack project is deployed in production more widely than ever. Over 100 new OpenStack clouds have been built in the past 18 months, growing the total number of cores under OpenStack management to more than 25,000,000 cores. Organizations with deployments ranging from hundreds of cores to six million cores have logged significant growth according to the 2021 OpenStack User Survey. The User Survey report will be made available ahead of the OpenInfra Live: Keynotes, November 17-18, where several of these production users will be sharing the details of their growing OpenStack use cases.

Learn more about the 24th release of OpenStack during tomorrow’s OpenInfra Live episode, OpenStack Xena: Open Source Integration and Hardware Diversity.

Superuser Awards Winners: How Their OpenStack Deployments Continue to Grow, Evolve

Since the Paris Summit in 2014, the OpenInfra Foundation has hosted our annual Superuser Awards to recognize organizations that have used open infrastructure to meaningfully improve their business while contributing back to the community. These organizations have previously won the Superuser Awards and shared how their deployments have grown and evolved on a recent episode of OpenInfra Live.

CERN

CERN was actually the first winner of the Superuser Awards in 2014, an event that Belmiro Moreira, cloud architect at CERN, has good memories of.  Fast forward to today, Moreira shared a dashboard screenshot that was taken yesterday of CERN’s live monitoring of their infrastructure.


Today, they have 15 OpenStack projects distributed between different releases to accommodate the different use cases among their end users. While this configuration can be challenging, Moreira says it allows for flexibility for their overall infrastructure. In addition to OpenStack, the CERN infrastructure is supported by several open source projects including CentOS, Kubernetes, Ceph, Prometheus and a dozen more.

China Mobile 

Xiaoguang Zhang, cloud architect at China Mobile, and Zhiqiang Yu, chief open source liaison officer, provided an update on the massive growth of China Mobile’s infrastructure since their team won the Superuser Awards in 2016. Today, China Mobile has a network cloud, private cloud and public cloud based on OpenStack spanning eight regions.

“In China Mobile’s OpenStack-based infrastructure, we support 4G, 5G, edge computing and other IT services for internal use, and also external vendors with our public cloud,” Zhang said. “We have scaled to around 300,000 physical servers and 6 million compute cores in total.”

Zhang provided an overview of the Virtualized Networks Function (VNF) business, OpenStack based infrastructure and hardware layer. 

“For each OpenStack instance, it has to manage 500 to 1,500 servers,” he said. “The VNFs (4G and 5G) are running on top of the virtualization.” 

With this scale, Yu emphasized that China Mobile, a Gold Member of the OpenInfra Foundation, is really a Superuser now. 

“We are really looking forward to the next 10 years of OpenStack and the next 10 years of Open Infrastructure,” Yu said. 

Ontario Institute of Cancer Research

Jared Baker, cloud architect at the Ontario Institute of Cancer Research (OICR) shared how OpenStack has continued to support the research initiatives since their Superuser Awards win in 2018. 

Baker detailed some of the current projects that are going on at OICR: 

  • Overture.bio, a collection of open-source, extendable solutions for big-data genomic science 
  • VirusSeq, which is sequencing 150,000 viral samples from Canadians who tested positive for COVID-19
  • International Cancer Genome Consortium (ICGC) is uniformly analyzing specimens from 25,000 cancer patients
  • Expanding on ICGC, ARGO will analyze specimens from 100,000 cancer patients 

VEXXHOST

Even though VEXXHOST just won the Superuser Awards at the OpenInfra Summit Denver in 2019, Mohammed Naser, CEO of VEXXHOST, said their infrastructure has seen a lot of changes. This includes announcing an OpenStack upgrade solution, launching a new public cloud region in Amsterdam and running Wallaby, the most recent OpenStack release, for their private and public clouds.

Naser also emphasized that they are open source across the entire stack, showing a breakdown of their open source adoption from container runtimes and orchestration to monitoring and CI/CD. 

If you would like to participate in a future episode of OpenInfra Live, share your ideas!

Containers in OpenStack Clouds: How to Keep the Applications Safe and Healthy

Let’s take a closer look at two giant open-source technologies – OpenStack and Kubernetes – how they work together and complement each other, bringing more benefits to service consumers.

Opinions about Kubernetes and OpenStack fall into two main but opposite camps: on one hand, those who believe that OpenStack and Kubernetes are complementary technologies that can work in tandem; on the other, those who consider Kubernetes a substitute for OpenStack, and vice versa. While it’s true that the uses of these tools overlap in many cases, it doesn’t necessarily mean that one can easily replace the other. They help solve similar issues, but on different layers of the stack. Thus, their combination can deliver users more powerful automation and scalability than ever. An even better way to look at these open-source giants is to consider Kubernetes an extension of OpenStack: an excellent tool for container orchestration in the OpenStack cloud environment.

Let’s take a closer look at how these open-source technologies can work together and complement each other. OpenStack is an open-source cloud platform; it helps businesses run and manage their cloud infrastructures. Kubernetes is also an open-source technology, the most widely used container orchestrator for running and managing containers. A range of OpenStack components and software solutions help combine both technologies efficiently to deliver the best results.

But what exactly do we get from extending OpenStack capabilities with containers orchestrated by Kubernetes?

Smooth Integration

As an open infrastructure, OpenStack provides API-driven access to compute, storage, and networking systems. The platform’s flexibility makes it possible to deploy on a single system everything enterprise environments may need – bare metal, VMs and container resources.

Kubernetes enables developers to focus on their primary goals – creating software and maintaining and improving it. The workload-driven Kubernetes technology offers timely tools and interfaces that align with the features of the underlying cloud infrastructure.

Both sets of technologies can boast widespread integration into enterprise-level infrastructures and compatibility with many other IT solutions. On one side is OpenStack with its traditional VM-based technology, proven over years; on the other is Kubernetes, a highly agile and dynamic orchestration system. Combined, they create a perfect duo for the enterprise cloud environment and bring each other exactly what they lack.

Kubernetes clusters consume compute, storage and networking resources from OpenStack through its APIs. By building an abstraction layer for such resources, OpenStack helps make cloud systems reliable, predictable, and steady.

For example, there is a common request from Kubernetes users to add a bunch of standard services like object and block storage, smoothly integrated with their system of containers. OpenStack offers practical tools for such requirements by supporting the most significant storage and networking solutions for organizations. Running Kubernetes with OpenStack brings seamless functional integration of containers into the cloud environment.

There are several solutions for running Kubernetes and other application frameworks on top of OpenStack. Magnum, an OpenStack project, is the easiest way to deliver multi-tenanted and self-serviced container frameworks. It provides a simple API to deploy fully managed clusters backed by a choice of several application platforms, including Kubernetes.
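To make the Magnum workflow above concrete, here is a minimal, hypothetical sketch of assembling a cluster-create request. The helper function, the template ID placeholder, and the keypair name are all illustrative, and the commented-out openstacksdk call assumes a configured cloud named "mycloud" – this is a sketch of the idea, not a definitive recipe.

```python
# Hypothetical sketch: assembling the parameters Magnum expects when
# creating a managed Kubernetes cluster. TEMPLATE_ID and "mykey" are
# placeholders, not real identifiers.

def cluster_request(name, template_id, masters=1, workers=3, keypair="mykey"):
    """Build the keyword arguments for a Magnum cluster-create call."""
    return {
        "name": name,
        "cluster_template_id": template_id,  # references a cluster template
        "master_count": masters,             # control-plane node count
        "node_count": workers,               # worker node count
        "keypair": keypair,                  # SSH keypair for node access
    }

req = cluster_request("demo-k8s", "TEMPLATE_ID", masters=1, workers=3)

# Against a real cloud this would be submitted via openstacksdk
# (not run here, and the proxy name is an assumption):
# import openstack
# conn = openstack.connect(cloud="mycloud")
# conn.container_infrastructure_management.create_cluster(**req)
print(req["node_count"])
```

The same operation is typically exposed through the `openstack coe cluster create` CLI; the point is that one small API request yields a fully managed cluster.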

Cornerstone of Security

Being a powerful orchestrator, Kubernetes does not by itself make application management safer than running applications on traditional servers or VMs. Cutting-edge technologies bring freedom to development processes, automate iterations, and save time and effort in coding, and this relatively new technology uses modern security practices. However, modern and cutting-edge do not a priori mean the best decision for business needs. Security practices proven by years of successful usage can be undone by a single new vulnerability released along with a component upgrade. A threat to the cluster’s security and the data stored on it can come both from the external network and from the cluster itself. A misconfigured application or an overlooked vulnerability could allow an attacker to access the container and the host’s file system.

The basic rules like regular system audit and cluster policies renewal secure Kubernetes cluster protection and diminish possible problems. Among the essential cybersecurity measures for Kubernetes are the following:

  •  strict firewall rules and access restrictions from the external network; 
  •  quota and authentication management for resources and users; 
  •  limitation of containers privileges and privileges in containers;
  •  reliable sources of images to prevent risks of running an unsafe application;
  •  logs monitoring and regular cluster security audit to detect known vulnerabilities in automatic mode and eliminate them on time.

Installing and setting up security once and forever is a desirable but impossible dream. In a constantly changing cloud environment, new weaknesses and threats emerge. Organizations should consider that running Kubernetes on bare metal carries a higher risk from container escape threats: if an intruder lands directly on the host, you as an operator have a massive problem, since all the users sharing the same kernel are compromised. OpenStack, on the other hand, can provide a unified cloud platform for orchestrating VMs, containers, and hardware compute resources while keeping the security of clusters and the whole system at a high level. To implement and manage such a platform, a company needs either an experienced team of quite expensive professionals or the right software solution with proven effectiveness and security.

Faster App Development

Kubernetes gives developers a magic wand for the speedy and painless management of many containers with applications, and OpenStack creates the environment for such magic to happen without delays or mistakes. The growing demand for on-demand, access-anytime services makes Kubernetes and OpenStack perfect allies to meet users’ needs. The benefits range from increased application portability to reduced development time and enhanced application stability. Running Kubernetes and OpenStack together, the needed piece of code can be instantly found, identified, and used, saving hours of coding and searching. And intelligent laziness, as you know, is the best engine of progress.

Developers widely use Kubernetes because of its highly demanded functionalities that make the technology ideal for delivering applications:

  • As isolated workspaces, containers make it possible to deploy multiple completely different applications to a single bare metal or VM without any conflict between the applications.
  • A pod in a Kubernetes cluster can run a single container, or multiple containers if they need to work together. To create and keep together a group of application components, a pod encapsulates an application composed of multiple co-located containers that are tightly coupled and need to share resources.
  • Kubernetes open-source technology runs containerized workloads in production with the ease of maintenance, supported by constant practices from the community.

Today nobody doubts the benefits of containerized applications, but there are still debates about the host platforms to serve them. According to many recent surveys, most developers choose Kubernetes as the number one platform for container management. However, if you want to get more from Kubernetes and OpenStack – increased business productivity, saved development time and better application portability – you should run this prominent tandem together.

Auto-Scalability

While traditional applications require larger hardware to scale (vertical scaling), cloud-based applications are able to operate with discrete hardware (horizontal scaling). To meet the requirements, OpenStack is designed to be horizontally scalable. You procure more servers and install identically configured services rather than switching to larger servers. OpenStack’s horizontal scaling is a great scenario where Kubernetes can add more flexibility.

One of Kubernetes’ most powerful features is autoscaling, an automated process that would otherwise require intensive human effort. At the moment, Kubernetes has three auto-scaling methods: scaling pods horizontally, scaling pods vertically, and scaling clusters.

Auto-scaling is essential in a private cloud environment; without it, whenever conditions change you have to provision resources manually and later scale them down again. Kubernetes autoscaling helps optimize resources by automatically increasing the number of cluster nodes and pods when more resources are demanded, and adjusting back to fewer nodes and pods to save them.
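To illustrate the horizontal pod autoscaling method mentioned above, the core of the decision is a simple ratio between the observed and target metric values. The sketch below reproduces that formula; the real Kubernetes controller adds tolerances, stabilization windows and per-metric details on top, so treat this as the idea rather than the implementation.

```python
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    """Core Horizontal Pod Autoscaler formula:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to the configured replica bounds."""
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_replicas, min(desired, max_replicas))

# 4 pods averaging 90% CPU against a 50% target → scale out to 8
print(desired_replicas(4, current_metric=90, target_metric=50))  # → 8
# Load drops to 10% average → scale back in to 2
print(desired_replicas(8, current_metric=10, target_metric=50))  # → 2
```

The same proportional logic drives scale-out and scale-in alike, which is why autoscaling both grows the pod count under load and shrinks it to save resources afterwards.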

Scaling your OpenStack private cloud with Kubernetes helps you enjoy additional features. The resources can be scaled both horizontally and vertically in an efficient way. 

Double the benefits with FishOS

Years of experience have enabled Sardina’s team to become experts in creating private cloud environments based on OpenStack. We also offer smooth Kubernetes integration with FishOS to provide customers with container orchestration tools. We fully integrate Kubernetes within the OpenStack environment, ensuring features such as automated rollouts and rollbacks, high availability, heterogeneous clusters, storage orchestration, and self-healing.

FishOS can run Kubernetes, complying with the official certification tests, on both VMs and bare metal servers. But it’s one thing to get Kubernetes running; keeping the system scalable, reliable, efficient, and cost-effective is another entirely. The key FishOS value lies in these critical aspects of operating private clouds. FishOS enables operators in enterprises to easily provide multi-tenant Kubernetes environments with proven security assurances, and helps developers deliver applications faster and more easily. Sardina’s customers also gain mature, tested, and proven persistent block storage, software-defined storage, and software-defined networking.

In larger organizations with separate operator and consumer divisions, it makes sense to run Kubernetes clusters within VMs. This allows the organization to benefit from the strong security segregation of VMs, along with the reliability and resilience they afford. OpenStack, for its part, is also able to serve Kubernetes on bare-metal nodes.

The benefits of combining OpenStack and Kubernetes are a far more compelling argument than using either technology alone. No wonder the OpenStack Foundation is committed to ensuring that emerging technologies can be incorporated and utilized within OpenStack, and containers are a concrete example of that commitment. OpenStack has leveraged its design and its large community to integrate container technologies at different levels. By using Kubernetes in an OpenStack cloud environment, an organization can double the benefits: two open-source giants working together deliver better security, resilience, and scalability, allowing faster application development and delivery of infrastructure innovations.

Share your OpenStack Feedback in the Annual User Survey

The OpenStack User Survey is open!

This is your annual opportunity to provide anonymous feedback to the upstream community, so the developers can better understand OpenStack environments and software requirements. Your anonymized feedback is shared with the project teams so we can improve how the community and project provide value to you.

The survey will take less than 20 minutes. As a token of our appreciation, all participants will receive early access to User Survey findings. 

Complete your User Survey by Friday, August 20 at 11:59pm UTC to be included in this year’s round of analysis.

The OpenStack community IRC network moved to OFTC

The OpenStack community uses IRC as one of its communication channels for development activities. IRC can run on many different networks, and we had used Freenode as our IRC network since the beginning.

Recently, there was a change in ownership, organization structure, and policy at Freenode. Many discussions took place on social media, on this etherpad, and on the openstack-discuss mailing list about the changes Freenode was going through, questioning whether we should continue using it. Given the feedback from the community and the situation with Freenode, the OpenStack Technical Committee, with input from other community leaders, decided to change the IRC network from Freenode to OFTC. You can see the OpenStack Technical Committee resolution about this decision.

This migration was completed on May 31, 2021. A huge thanks to the OpenDev team, especially Jeremy Stanley (fungi), for the smooth migration. All the IRC channels used by the OpenStack community are registered on the OFTC network under the same names.

The “How to join OFTC OpenStack channels” page can help you join us on OFTC.

All OpenStack discussions and meetings are happening on OFTC channels now. To track the remaining “communicate with the community” tasks, we are using this etherpad. If any of your co-workers, friends, or other people around you are not aware of this change, please communicate the migration to them. For any further questions, feel free to contact us on the #openstack-dev IRC channel on OFTC or via the openstack-discuss mailing list.

Introducing On-Demand OpenStack Private Clouds and Initial Use Cases

InMotion Hosting has recently brought to market an automated deployment of OpenStack and Ceph that we sell as an on-demand Private Cloud and as part of our Infrastructure as a Service. We believe that making OpenStack more accessible is critical to the health of the OpenStack community, as it gives smaller teams a low-risk and cost-effective place to learn and run OpenStack.

As this is a new kind of offering, it is necessary to provide some context and its likely market position. Based on the speed of delivery of the cloud, we selected the term “on demand” internally and have continued to use it publicly.

On-Demand Private Cloud Defined

Closed source on-demand private clouds emerged in 2018 from traditional industry names building on their history in on-premise private clouds. VMware and Nutanix, for example, accomplished this through partnerships with large public clouds. Open source on-demand private clouds emerged in late 2020 and are currently playing catch-up to the closed source solutions.

The features and functionality of a full OpenStack deployment are typically considered ahead of the closed source solutions. Critically, though, the closed source solutions were significantly easier to set up and had a much more predictable likelihood of success. This gave the closed source offerings a significant advantage and made them a better fit for smaller companies.

Time to Utilization/Time to Production

As we considered overcoming this disadvantage, we applied the Agile philosophy around delivery time. Consider “time to production” as measured from a recognized need for resources to the launch of a VM or container to meet that need. For our private clouds, time to utilization was commonly measured in quarters before 2016, then months in 2016-2017, then weeks in 2018-2019; in 2020, it fell quickly from weeks to minutes.

Provision time is now between 30 and 45 minutes for a 3-server cluster, depending on the type of hardware and install complexity: NVMe is faster than SATA SSD, and including more OpenStack components is slower. The next development cycles are focused not on faster delivery but on including more OpenStack components within the same sub-1-hour window. Moving from 45 minutes to under 20 minutes is possible, but it does not appear to be very important to customers at this time.

Usage Based Billing

In addition to a fast “time to utilization”, usage-based billing is a key part of being on-demand. With spin-up taking under an hour, billing by the hour becomes practical. Though usage-based billing is of somewhat limited value for long-term production environments, it is a required attribute for variable workloads, PoCs, testing, and training.

Small and Low Cost Building Blocks

Finally, small building blocks are a critical part of being on-demand. Currently, the smallest full private cloud is composed of 3 hyper-converged servers, each with a single CPU and a single SATA SSD storage OSD. This creates an HA Ceph storage-backed OpenStack cloud billed per hour with only a 1-hour minimum.
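The billing model described above, hourly usage rounded up with a 1-hour minimum, can be sketched in a few lines of Python. The rate constant and function name here are hypothetical, chosen only to illustrate the model, not InMotion's actual pricing:

```python
import math

HOURLY_RATE = 1.50  # hypothetical price per cluster-hour, for illustration only

def billable_amount(seconds_running: float, rate: float = HOURLY_RATE) -> float:
    """Round usage up to whole hours and enforce the 1-hour minimum."""
    hours = max(1, math.ceil(seconds_running / 3600))
    return hours * rate

# A 20-minute PoC still bills the 1-hour minimum.
print(billable_amount(20 * 60))       # 1.5
# 5 hours and 1 second rounds up to 6 billable hours.
print(billable_amount(5 * 3600 + 1))  # 9.0
```

This is what makes short-lived workloads such as PoCs, testing, and training economical: the cost floor is a single cluster-hour rather than a monthly commitment.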

To summarize the definition again: fast time to utilization, usage-based billing, and small building blocks. For the future, it also sets up a way to compare against the public mega-clouds for ease of spinning up VMs or containers, running the workload, turning them off, and paying only for what you have used.

First Use Cases for On-Demand Private Cloud

The initial use cases we have either placed or are actively selling fall into the following categories. Please note these are initial and extrapolated findings based on a limited data set. Combined with our experience in the market, these are the use cases we expect to gain significant traction in the next several years, but it is still early in the data.

Production Private Clouds with Managed Support Level

Many companies are feeling the pressure to “move to the cloud” away from both on-site resources and traditional hosting providers. We found that many companies would prefer to work within a closed system that gives them access to cloud functionality without having to learn how to control costs at the service, VM, or other micro-levels.

This group has so far wanted to focus on using the cloud rather than both running and using the cloud. We are now formally adding managed services to accommodate these customers.

The hyper-converged hardware has fit this use case well. Of note, ML and AI customers are also very interested in hardware components like GPUs, and we will be adding those to our catalog in the coming months.

On-Demand OpenStack Cloud for Training and R&D Purposes

This was a use case we expected. In days past, creating a high-quality OpenStack deployment required a group of skilled systems engineers, including specialists in hardware, networking, security, and Linux. It is a stretch for a medium-sized business to have all of these skills on staff, and it is unlikely a small business will have more than one of them.

Even with a skilled group, most will not have experience with OpenStack. To learn to run a private cloud, the IT team has to convince their company to finance a pilot program for the potential cloud. Before on-demand OpenStack, those pilot clouds could cost hundreds of thousands of dollars in server and network gear, plus 3-12 months’ worth of time. Even then, many, maybe even the majority, of the pilots never turned into a production cloud.

Many enterprise-focused companies, like Red Hat, Canonical, and Accenture, successfully help enterprises bridge that gap economically. Smaller IT teams, however, simply couldn’t access the benefits and cost savings of private cloud.

With the advent of on-demand private cloud providers, the two biggest issues have been overcome. Now these users can learn with on-demand OpenStack, and regardless of where the deployment goes, they have cut significant time and risk out of using OpenStack.

Proof of Concept of workloads from the Public Cloud

With the cost of the mega-clouds being so high, it is natural that this use case is and will remain significant. At this time, we are not sure whether these users will prefer a managed private cloud or will take the steps to assign staff within their company to become cloud operators.

Currently, we are actively pursuing the latter, as this gives the company the lowest-cost option. We also see improvements in the ease of being a private cloud operator thanks to key advancements like containerized control planes. A company with a reasonably skilled systems team can take on cloud operator duties, increase its own prowess, and save money.

Data Center Providers adding On-Demand Private Cloud as an offering

Data center providers used to make significant portions of their revenue from smaller customers that would often just purchase ¼- and ½-racks. Much of that type of business has moved away from direct purchases in data centers to either the mega-clouds or bare metal hosting providers.

To offset this pattern, very large data center providers with significant resources to adapt their business model have been moving into the cloud provider space for some time. For example, Equinix acquired Packet a few years ago and now offers “On Demand Metal”.

Smaller DC providers have not had the budget or engineering prowess to buy or create a cloud offering of their own. As on-demand open source private cloud technology matures, we expect, and have already seen, significant interest from that sector. With the incredibly rich functionality supplied by OpenStack, a small DC provider could offer cloud products that compete with the mega-clouds.

Public Cloud Providers using OpenStack Already

We are actively working on this use case, as we feel that enabling OpenStack public cloud providers to compete more effectively with the mega-clouds is critical to the health of open source.

Hyper-converged, on-demand hardware plus fully private OpenStack has opened a few doors for current OpenStack public cloud providers. In the past, offering an additional location meant a provider running that location in the red for quite some time until its customer base reached a certain scale. Without significant resources, only a few locations were possible.

As OpenStack has great native functionality for regions, and for letting one OpenStack deployment control all locations, it is straightforward to add numerous small clouds in different geographic locations. We specifically built small footprints that can scale up as demand requires, so an OpenStack public cloud provider can offer many locations with a much smaller investment than before.

We have our own roadmap challenge in adding many locations, as we must reach critical mass, but our mission is to provide enough locations that any small OpenStack public cloud provider can match up with larger competitors and even the mega-clouds.

Next Steps

In 2021, we are concentrating on a few areas:

  • Adding additional OpenStack-based functionality to the current hardware footprints.
  • Partnering with other open-source-friendly companies to offer best-of-breed tools for monitoring, disaster recovery, ML/AI automated operations, infrastructure automation, and more.
  • Raising market awareness that small, on-demand OpenStack has arrived and that smaller teams now have access to an open source alternative to the mega-clouds.

If you are interested in what we are doing, please reach out to us or come explore On-Demand Private Cloud.
