Introducing On-Demand OpenStack Private Clouds and Initial Use Cases

InMotion Hosting has recently brought to market an automated deployment of OpenStack and Ceph that we sell as an on-demand Private Cloud and as part of our Infrastructure as a Service.  We believe that making OpenStack more accessible is critical to the health of the OpenStack community, as it gives smaller teams a low-risk and cost-effective place to learn and run OpenStack.

As this is a new type of offering, some context and market positioning is in order.  Based on the speed of delivery of the cloud, we selected the term “on demand” internally and have continued to use it publicly.

On-Demand Private Cloud Defined

Closed source On-Demand Private Clouds emerged in 2018 from traditional industry names building on their history in on-premises private clouds. VMware and Nutanix, for example, accomplished this through partnerships with large public clouds. Open source on-demand private clouds emerged in late 2020 and are currently playing catch-up to the closed source solutions.

The features and functionality of a full OpenStack deployment are typically considered ahead of the closed source solutions.  Critically, though, the closed source solutions were significantly easier to set up and had a much more predictable likelihood of success.  This gave the closed source offerings a significant advantage and made them a better fit for smaller companies.

Time to Utilization/Time to Production

As we considered overcoming this disadvantage, we applied the Agile philosophy around delivery time. Consider “time to production” as measured from a recognized need for resources to the launch of a VM or container to meet that need.  For our private clouds, time to utilization was commonly measured in quarters before 2016, then months in 2016-2017, then weeks in 2018-2019; in 2020, it fell quickly from weeks to minutes.

Provision time is now between 30 and 45 minutes for a 3-server cluster, depending on the type of hardware and install complexity: NVMe is faster than SATA SSD, and including more OpenStack components takes longer.  The next development cycles are focused not on faster delivery but on including more OpenStack components within the same sub-1-hour window. Moving from 45 minutes to something under 20 minutes is possible, but it does not appear to be very important to customers at this time.

Usage Based Billing

In addition to fast “time to utilization”, usage-based billing is a key part of being on-demand.  With spin-up taking under an hour, billing by the hour becomes practical. Though usage-based billing is of limited value for long-term production environments, it is a required attribute for variable workloads, PoCs, testing, and training.

Small and Low Cost Building Blocks

Finally, small building blocks are a critical part of being on-demand. Currently, the smallest full private cloud is composed of 3 hyper-converged servers, each with a single CPU and a single SATA SSD storage OSD. This creates an HA Ceph storage-backed OpenStack cloud billed per hour with only a 1-hour minimum.
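To make the billing model concrete, here is a minimal sketch of per-hour usage billing with a 1-hour minimum.  The hourly rate and the round-up-to-the-hour rule are assumptions for illustration, not published pricing.

```python
import math

def usage_cost(hours_used: float, hourly_rate: float) -> float:
    """Cost of an on-demand cluster billed per hour with a 1-hour minimum.

    Assumes partial hours round up to the next whole hour; the rounding
    rule and the rate are hypothetical, for illustration only.
    """
    billable_hours = max(1, math.ceil(hours_used))
    return billable_hours * hourly_rate

# A 30-minute PoC bills the 1-hour minimum; a 2.2-hour test bills 3 hours.
print(usage_cost(0.5, hourly_rate=1.50))   # 1.5
print(usage_cost(2.2, hourly_rate=1.50))   # 4.5
```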

To summarize the definition: fast time to utilization, usage-based billing, and small building blocks.  For the future, it also sets up a way to compare against the public mega-clouds on ease of spinning up VMs or containers, running the workload, turning them off, and paying for what you have used.

First Use Cases for On-Demand Private Cloud

The initial use cases we have either placed or are actively selling fall into the following categories.  Please note these are initial and extrapolated findings based on a limited set of data.  Combined with our experience in the market, these are the use cases we expect to gain significant traction in the next several years, but it is early in the data.

Production Private Clouds with Managed Support Level

Many companies are feeling the pressure to “move to the cloud” away from both on-site resources and traditional hosting providers.  We found that many companies would prefer to work within a closed system that gives them access to cloud functionality without having to learn how to control costs at the service, VM, or other micro-levels.

This group has so far wanted to focus on using the cloud rather than both running and using it.  We are now formally adding managed services to accommodate these customers.

The hyper-converged hardware has fit this use case well.  Of note, ML and AI customers are also very interested in hardware components like GPUs and we will be adding those to our catalog in the coming months.

On-Demand OpenStack Cloud for Training and R&D Purposes

This was a use case we expected.  In days past, creating a high-quality OpenStack deployment required a group of skilled systems engineers, including specialists in hardware, networks, security, and Linux. It is a stretch for a medium business to have these skills on staff, and a small business is unlikely to have more than one of them.

Even with such a skilled group, most will not have experience with OpenStack. To learn to run a private cloud, the IT team has to convince their company to finance a “Pilot Program” of the potential cloud. Before on-demand OpenStack, those pilot clouds could cost hundreds of thousands of dollars in server and network gear, plus 3-12 months’ worth of time.  And even then, many pilots, maybe even the majority, never turn into a production cloud.

Many enterprise-focused companies, like Red Hat, Canonical, and Accenture, successfully help enterprises bridge that gap economically.  Smaller IT teams, however, simply couldn’t access the benefits and cost savings of private cloud.

With the advent of on-demand private cloud providers that make deployment trivial, the two most considerable issues have been overcome.  Now, these users can learn with on-demand OpenStack, and regardless of where the deployment goes, they have cut significant time and risk out of using OpenStack.

Proof of Concept of Workloads from the Public Cloud

With the cost of the mega-clouds being so high, it is natural that this use case is, and will remain, significant.  At this time, we are not sure if these users will prefer a managed private cloud or will take the steps to assign staff in their company to become Cloud Operators.

Currently, we are actively pursuing the latter, as this gives the company the lowest-cost option.  We also see improvements in the ease of being a private cloud Operator, with key advancements like containerized control planes.  A company with a reasonably skilled systems team can take on the Cloud Operator duties, increase its own prowess, and save money.

Data Center Providers adding On-Demand Private Cloud as an offering

Data Center providers used to make significant portions of their revenue from smaller customers that often would just purchase ¼ and ½ racks.  Much of that type of business has moved away from direct purchases in data centers to either Mega-Clouds or to bare metal hosting providers.  

To offset this pattern, very large Data Center Providers that have significant resources to adapt their business model have been moving into the Cloud Provider space for some time.  For example, Equinix acquired Packet a few years ago and now Equinix offers “On Demand Metal”.

Smaller DC providers have not had the budget or engineering prowess to buy or create a cloud offering of their own.  As the on-demand open source private cloud technology matures, we expect, and have already seen, significant interest from that sector. With the incredibly rich functionality supplied by OpenStack, a small DC provider could offer cloud products that compete with the mega-clouds.

Public Cloud Providers using OpenStack Already

We are actively working on this use case as we feel enabling OpenStack Public Cloud Providers to more effectively compete with the mega-clouds is critical to the health of Open Source.

Hyper-converged, on-demand hardware plus a fully private OpenStack has opened a few doors for current OpenStack public cloud providers.  In the past, offering an additional location meant running that location in the red for quite some time until the customer base there reached a certain scale.  Without significant resources, only a few locations were possible.

As OpenStack has great native functionality for Regions and for letting one OpenStack deployment control all others, it is straightforward to add numerous small clouds in different geographic locations; a minimal sketch of the multi-region pattern follows.  We specifically built small footprints that can scale up as demand requires. The OpenStack public cloud provider can then offer many locations with a much smaller investment than before.
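As a rough illustration of that multi-region pattern, the sketch below uses the openstacksdk Python library to reach the same cloud in two regions.  The cloud name and region names are placeholders; in practice they would come from your own clouds.yaml.

```python
import openstack

# "mycloud" is a hypothetical clouds.yaml entry; the region names are
# placeholders for whatever regions the deployment actually defines.
for region in ("RegionOne", "RegionTwo"):
    conn = openstack.connect(cloud="mycloud", region_name=region)
    servers = list(conn.compute.servers())
    print(f"{region}: {len(servers)} servers running")
```

In a shared-Keystone deployment, the service catalog covers every region, so the same credentials and tooling work against each new location as it comes online.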

Adding many locations presents a roadmap challenge of our own, as we must reach critical mass, but our mission is to provide enough locations that any small OpenStack public cloud provider can match up with other large competitors and even the mega-clouds.

Next Steps

In 2021, we are concentrating on a few areas:

  • Adding additional OpenStack-based functionality into the current hardware footprints.
  • Partnering with other Open Source friendly companies to offer best-of-breed tools for monitoring, disaster recovery, ML/AI automated operations, infrastructure automation, etc.
  • Market awareness that small on-demand OpenStack has arrived and smaller teams now have access to an Open Source alternative to the mega-clouds.

If you are interested in what we are doing, please reach out to us or come explore On-Demand Private Cloud.


As Edge Applications Multiply, OpenInfra Community Delivers StarlingX 5.0, Offering Cloud Infrastructure Stack for 5G, IoT

Everything carriers and enterprises need to deploy edge clouds for 5G, IoT is in open source StarlingX 5.0, available today; new features enhance security, operability and automation.

AUSTIN, Texas—June 2, 2021—StarlingX—the open source edge computing and IoT cloud platform optimized for low-latency and high-performance applications—is available in its 5.0 release today. StarlingX combines Ceph, OpenStack, Kubernetes and more to create a full-featured cloud software stack that provides everything carriers and enterprises need to deploy an edge cloud on a few servers or hundreds of them.

New features in StarlingX 5.0 include:

  • edgeworker nodes, which enable customized operating systems near and at the edge
  • security enhancements
  • improvements to orchestration and automation
  • upgrades to and containerization of integrated open source components

***Download StarlingX 5.0 at https://opendev.org/starlingx***

StarlingX is in production, driving top 5G deployments

T-Systems and Verizon rely on StarlingX for their edge and production vRAN use cases, and a growing set of organizations are also evaluating the project for production deployment.

“It’s been exciting to see the tremendous increase in StarlingX community activity, not only in terms of commercial adoptions and evaluations but also with respect to investments in the project by many different organizations and individual contributors,” said Paul Miller, chief technology officer, Wind River. 

“We’re also seeing now, as a result of early market adoption of this open source stack, the use of StarlingX in edge computing and industrial IoT solutions,” Miller continued. “With market research indicating that about 70% of compute will be moving towards the edge over the next five years or so, we foresee continued strong adoption and investment in StarlingX community activity. As an original contributor to the code base and a strong supporter of the project, we are encouraged to see business being driven by StarlingX. The kind of ecosystem development we’re seeing around StarlingX is exactly what you want to see in a thriving open source community.”

Key Features of StarlingX 5.0

To further support the low-latency and distributed cloud requirements of edge computing and industrial IoT use cases, the community prioritized these features in StarlingX 5.0:

  • Support for ‘edgeworker’ nodes, a new personality distinguished from ‘worker’ nodes. Edgeworker nodes are usually deployed close to an edge device, such as an I/O device, a camera, a servo motor or a sensor, to manage host-based enrollment. The ‘edgeworker’ personality is particularly suitable when users want a lightweight approach, deploying only a few agents on the nodes. With edgeworker nodes, users can enroll new operating systems and new server classes, which expands the possibility for new StarlingX use cases.
  • Support for Nvidia GPUs, enabling operators to offload additional work for those particular workloads that require GPU interaction, such as machine learning or other image-based processing.
  • Support for FPGA image update orchestration. FPGA and acceleration are important features of edge systems. The new FPGA image update orchestration feature in StarlingX 5.0 improves operations, supporting automation across the distributed cluster. This gives users the option to deploy FPGA with orchestrations that are automated from end to end.
  • A PTP notification feature to further extend StarlingX’s support of Precision Time Protocol. Operators can now receive notifications about the PTP state and take action in case the system time is no longer in sync with the PTP clock source, which is critical for time-sensitive applications.
  • Vault integration for secret management, a security-focused feature. Users want the ability to store and access secrets securely. These secrets can include credentials, encryption keys, API tokens and other data that should not be stored in plain text on a system. Vault, an open source project, provides the ability to encrypt and store secrets with access control via a range of authorization and access policy configurations. Vault’s features include dynamic secret generation, data encryption, leasing and renewal, revocation and audit/logging. The integration of Vault improves StarlingX’s security posture and encryption capabilities while maintaining manageability.
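StarlingX’s Vault integration is internal to the platform, but as a generic illustration of the workflow Vault enables, here is a minimal sketch using hvac, Vault’s Python client.  The address, token, and secret path are placeholders for the example.

```python
import hvac

# Address and token are placeholders; a real deployment would use a
# proper auth method rather than a hard-coded token.
client = hvac.Client(url="https://vault.example.com:8200", token="s.example")

# Store a secret in the KV v2 engine instead of plain text on disk...
client.secrets.kv.v2.create_or_update_secret(
    path="edge-app/db",
    secret={"username": "app", "password": "example-only"},
)

# ...and read it back when the application needs it.
response = client.secrets.kv.v2.read_secret_version(path="edge-app/db")
print(response["data"]["data"]["username"])
```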

Other additions to the new version include:

  • Improvements to certificate management to enhance automation
  • Containerized Ceph storage using Rook
  • Support for Net-SNMP v3 for the fault management service
  • CephFS for cluster storage 
  • Container Image Signature Validation

Learn more about these and other features of StarlingX 5.0 in the community’s release notes.

OpenInfra Community Drives StarlingX Progress

The StarlingX project launched in 2018. Since then, there have been more than 10,000 commits from over 260 authors. Today’s 5.0 release adds 1,100 commits from 105 developers to those totals. The StarlingX community is actively collaborating with several other groups such as the OpenInfra Edge Computing Group, ONAP, Akraino and more.

Since the initial code for the project was contributed by Wind River and Intel to the Open Infrastructure Foundation, the active community of support for StarlingX has expanded to include 99Cloud, FiberHome, and China UnionPay, among others. China UnionPay, China Unicom, T-Systems and Verizon have become early adopters of the software.

Community Accolades for StarlingX 5.0   

“The StarlingX community is continuously making significant progress,” said Shuquan Huang, technical director, 99Cloud Inc. “We’re excited to see StarlingX 5.0 is available with a lot of enhancements and new features. StarlingX will be the key to meet the requirement of edge computing, and it’s time to use the latest StarlingX platform to build your edge cloud.”

“The StarlingX 5.0 release is a new stage for edge computing,” said Wang Hao, senior software engineer, FiberHome Enterprise. “There are some great features that have been introduced to the platform to enhance edge computing security and bring more flexibility to different types of nodes, which will extend StarlingX’s application to more edge scenarios.  FiberHome will continue to focus on edge computing techniques and work with the StarlingX community to bring more value to our users.”


About StarlingX

StarlingX is the open source edge computing and IoT cloud platform optimized for low latency and high performance applications. It provides a scalable and highly reliable edge infrastructure, tested and available as a complete stack. Applications include industrial IoT, telecom, video delivery and other ultra-low latency use cases. StarlingX ensures compatibility among diverse open source components and provides unique project components for fault management and service management, among others, to ensure high availability of user applications. StarlingX is the ready-for-deployment code base for edge implementations in scalable solutions. StarlingX is an Open Infrastructure Foundation project. www.starlingx.io 

###


Xena vPTG Summaries

The OpenStack community had its third virtual Project Teams Gathering (PTG) in late April. Over 500 individuals and 49 teams across the globe met and collaborated at the vPTG. Since the event concluded, several of those teams have posted summaries of the discussions they have had and the decisions that were made during the PTG.

PTG Summaries

Cinder, Brian Rosmaita (PTL)

CloudKitty, Rafael Weingärtner

Cyborg, Xin-ran Wang (PTL)

Glance, Abhishek Kekane (PTL)

Interop, Arkady Kanevsky

Kolla, Mark Goddard (PTL)

Kuryr, Michał Dulko

Manila, Goutham Pacha Ravi

Multi-arch SIG, Rico Lin

Neutron, Slawek Kaplonski (PTL)

Nova, Balazs Gibizer (PTL)

OpenStack Ansible collection/modules, Sagi Shnaidman

OpenStack-Ansible, Dmitriy Rabotyagov (PTL)

OpenStack Technical Committee, Ghanshyam Mann (Technical Committee Member)

Quality Assurance, Martin Kopec (PTL)

Role-Based Access Control, Lance Bragstad

Scientific SIG

Telemetry, Matthias Runge (PTL)

TripleO, Marios Andreou (PTL)

Venus, Liye Pang (逄立业)

OpenInfra Live PTG Recap

Project leaders from OpenStack, Kata Containers, StarlingX, OpenStack Ironic, the Edge Computing Group, Scientific SIG, and Multi-Arch SIG provide recaps from discussions held at the PTG.

If you would like to get your post added to this list, please contact Helena Spease at [email protected]

Feedback from the PTG

Like the last vPTG, we provided an etherpad throughout the event to collect feedback on how things went, from registration to the last meeting. Please add any feedback you feel might be missing!


Sardina Systems and Ambedded Technology announce global partnership to deliver scale-out enterprise Software-Defined Storage solution on Kubernetes

Sardina Systems, a leading OpenStack and Kubernetes platform software vendor addressing the full lifecycle of clouds with pre-integrated operations tools, and Ambedded Technology, a software-defined storage company with expertise in Ceph storage and embedded Arm platforms, today announced they have partnered to deliver a highly efficient and modern Software-Defined Storage solution.

Companies looking for an agile, automated, Kubernetes-based SDS solution to replace static and inefficient hardware can now embrace the innovative joint solution offered by the Sardina Systems and Ambedded Technology partnership.

The strategic partnership combines Sardina’s award-winning FishOS solution, an OpenStack and Kubernetes cloud platform with zero-downtime operations, with Ambedded’s Unified Virtual Storage (UVS) Manager, a web-based graphical user interface (GUI) that enables administrators to simplify the management, configuration, and monitoring of Ceph storage (SDS).

The Software-Defined Storage solution is highly scalable and easy to operate; it reduces infrastructure capital investment and operational cost while achieving high availability, and it helps improve performance in both on-premises and hosted private cloud systems.

“We have teamed with Ambedded Technology to fulfill enterprise customers’ needs for faster scalability, higher availability, greater flexibility, and efficiency, allowing them to focus on application development and operations with a reduced operational cost.

“What seemed to be impossible a few years ago can now be just one click away for enterprises to benefit from Kubernetes and OpenStack clouds in a single platform, offering customers a highly scalable, automated technology with operational tools and optimised infrastructure,” said Mihaela Constantinescu from Sardina Systems.

Thanks to both partners’ global coverage, the solution is available worldwide, delivering a scalable, flexible, automated, and fully managed storage offering for today’s business and application demands. Applications are dynamically provisioned with the precise mix of capacity, performance, and security needed.

“We are delighted to announce our partnership with Sardina Systems. Together we can offer a wider range of turnkey solutions to our customers, from the infrastructure to the software-defined storage, which enables our enterprise customers to move to an efficient software-defined IT solution without facing complicated integrations,” said Dominique SUN from Ambedded Technology.

The joint solution comes with broad benefits for customers, including:

  • Easy to use, scale and manage storage
  • Lower initial investment and operational expenses, even at a massive scale
  • Automated operational tools for the entire lifecycle of OpenStack and Kubernetes cloud operations
  • Highly available and scalable platform
  • Exceptional performance
  • Affordable infrastructure cost

About Sardina Systems

Founded in 2014, Sardina Systems makes infrastructure invisible, elevating IT to focus on enterprise applications and services. FishOS natively converges server, storage, virtualization, and networking into a resilient, software-defined, AI-based solution, delivering optimized performance, cloud flexibility, and robust security for all enterprise applications at any scale.

Sardina Systems has operations in Germany, Romania, Russia, Ukraine, and the UK.

About Ambedded Technology

Ambedded is a software-defined storage company with expertise in Linux OS, kernel, software-defined storage, embedded system, and Arm server.

With its purpose-built Ceph appliance, the Ambedded team has extensive experience helping customers adopt Ceph solutions across diverse industries, such as telecom, medical, military, edge data centers, and enterprise storage requiring high availability.

# # #


OpenStack Project Update 2020

Despite 2020 being a very different year than most, the Open Infrastructure community, which has over 110,000 members, made it a productive and successful one. One of the biggest milestones in the global community was that OpenStack, one of the top three most active open source projects with 15 million cores in production, marked its 10th anniversary in 2020.

Ussuri

On May 13th, with the help of over 1,000 contributors spanning 50 countries, the OpenStack community released its 21st version, Ussuri. The focus areas of the release were reliability of the core infrastructure layer, enhanced security and encryption, and versatility for emerging use cases like edge and container environments. As a community, our goals were to make OpenStack’s codebase Python 3-only (dropping Python 2.7 support) and to standardize our approaches to project-specific contributor documentation.

Victoria

Later in 2020, the OpenStack community released Victoria, OpenStack’s 22nd version. Native Kubernetes integration was one of the primary focus points; Kuryr, for example, implemented support for custom resource definitions so that it won’t have to use annotations to store data about OpenStack objects in the Kubernetes API. More generally, features were also added to support more diverse architectures and standards, such as direct programming of FPGAs and additional TLS support. Other community goals for the release were migrating upstream CI/CD testing to the new Ubuntu LTS release, Focal, and switching the last of the legacy Zuul jobs, which had been automatically derived from their Jenkins predecessors, to native Zuul v3 jobs. 160 organizations, 45 countries, and almost 800 community members worked together to make the Victoria release a success! We thank every one of them for their hard work and dedication to OpenStack.

New SIGs

TaCT

The Testing and Collaboration Tools (TaCT) SIG was created to serve the role previously occupied by the OpenStack Infrastructure team and to support OpenStack’s project-specific testing and collaboration tooling and services. The OpenStack Infrastructure team formerly existed to care for the continuous integration and collaboration infrastructure on which the OpenStack community relies. With the rise of the OpenDev Collaboratory, the majority of the Infrastructure team’s systems administration activities and configuration management repositories no longer fell under the authority of OpenStack. The TaCT SIG maintains, in cooperation with the OpenDev project, the tooling and infrastructure needed to support the development process and testing of the OpenStack project.

Large Scale

This group was formed to facilitate running OpenStack at large scale, answer questions that OpenStack operators have as they need to scale up and scale out, and help address some of the limitations operators encounter in large OpenStack clusters. The work of the group is organized along the various stages in the scaling journey of someone growing an OpenStack deployment.

It starts from the initial configuration stages and goes through monitoring, scaling up, scaling out, upgrading, and maintaining. That path has been successfully traveled by many before. The job of the SIG is to extract that knowledge and provide answers for those who come next.

Hardware Vendor

The goal of this SIG is to provide a place where vendors, and those interested in vendor-specific things like drivers and supporting libraries, can work together and collaborate openly to enable OpenStack services to integrate and work well on all hardware platforms. The Hardware Vendor SIG is still forming and growing; it currently owns and manages the Python client for Dell’s DRAC, and it welcomes other vendors to host their projects too.

Technical Committee Changes

Merging of TC/UC

For a long time, the OpenStack community has had two committees helping to direct its efforts. While it was great to have two perspectives giving guidance, this approach unfortunately led to some siloing within our community. In 2020, the Technical Committee and User Committee merged into one group, which also resulted in some changes to the election process for the TC. Starting August 1st, 2020, when Technical Committee elections are held, Active User Contributors are also included in the electorate so that they have an equal say in their representation. Having the Technical Committee as a united group further removes the distinction between developers and operators and makes them all equal contributors. This means that operators can run for seats on the committee; to be eligible, they only need to make contributions, which can be reporting bugs, making code or documentation changes, and so on. At the same time, the developers in the community and on the Technical Committee are encouraged to take on active user contributor tasks and ensure equal representation of operators.

TC Size Change

Throughout 2020, at each of the technical elections, we reduced the size of the Technical Committee by two people, down to our current size of 9 members. The size of the TC is a trade-off between getting enough community representation and keeping enough members engaged and active. The previous size (13 members) was defined in 2013, as we moved from 5 directly-elected seats plus all PTLs (which would have been 14 people) to a model that could better cope with our explosive growth. Since then, 13 had worked well to ensure that new members could come in at every cycle, until recently. To avoid burning people out, and to keep infusions of new contributors cycling into the committee, we decided to reduce the size. As a result, the committee has been joined by long-term developers and operators like Dan Smith and Belmiro Moreira.

Project Changes

As a continuously evolving project, OpenStack went through a few governance- and process-related changes over 2020 to ensure maintainable and efficient operation of the community and the project teams.

The concept of distributed project leadership was announced during the second half of the year to help teams share responsibilities among themselves better. If a project team opts in to this model, it will not have a dedicated Project Team Lead (PTL). The tasks necessary to guide the project are instead taken on by liaisons; the three required roles are the release, TaCT SIG, and security liaisons. There is no guideline on whether one or more people fill these roles. There are also some additional recommended roles, such as preparing the team for events or ensuring a smooth process for onboarding new contributors to the team. The distributed leadership model doesn’t affect existing models, such as PTL with liaisons.

To make the Technical Committee more efficient, the process of making updates to a project became faster. Changes like adding a new repository to a project or adding a tag to its repositories used to require a one-week waiting period, even if enough votes from the TC were added to the review sooner. In the new process, two votes from TC members in favor of the change are enough for the chair to approve the request without a waiting period. In case there is disagreement once the change is merged, it can be reverted, which then goes through the ‘formal vote’ process to ensure that enough discussion happens before making a decision.

Tags

Supports Standalone

While OpenStack services work well together, some users might not want to run all of them and might prefer other technologies over some of the core components of the project. As a result, some services have been modified so that they can be operated independently of the rest of OpenStack (e.g. Cinder storage with a Kubernetes cluster) without losing their core functionality. To make it easy to identify which services can run standalone without other OpenStack services, they are marked with the ‘Supports Standalone’ tag; a minimal sketch of the pattern follows.
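As an illustration of the standalone pattern, the sketch below uses cinderlib, a Python library that drives Cinder storage drivers without the rest of OpenStack.  It assumes an LVM volume group named cinder-volumes already exists on the host; the driver and backend names are deployment-specific.

```python
# Minimal sketch of standalone Cinder via cinderlib, assuming the host
# has an LVM volume group named "cinder-volumes" for the LVM driver.
import cinderlib as cl

lvm = cl.Backend(
    volume_driver="cinder.volume.drivers.lvm.LVMVolumeDriver",
    volume_group="cinder-volumes",
    volume_backend_name="lvm",
)

vol = lvm.create_volume(size=1)  # size in GiB
print(vol.id, vol.status)
vol.delete()
```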

Kubernetes Starterkit

Kubernetes has become the go-to container orchestration system for running containerized applications, most commonly on top of a cloud platform. As one of these platforms, OpenStack can supply multitenant isolation between different Kubernetes clusters. Because OpenStack has a number of services to build infrastructure with, it can be challenging to decide on the minimum set to use as a base for Kubernetes. The Kubernetes starter kit tag recommends a minimal set of OpenStack services that provide the resources necessary for Kubernetes and its workloads to operate.

The Open Infrastructure Foundation would like to extend a huge thanks to the global community for all of the work that went into 2020 and is continuing in 2021 to help people build and operate open infrastructure. Check out the OpenStack community’s achievements in 2020 from the OpenInfra Foundation Annual Report and join us to build the next decade of open infrastructure!

https://www.openstack.org/annual-reports/2020-openstack-foundation-annual-report

Let’s cheer on the new year and the successful growth of the seeds planted over these past years.

10 Years of OpenStack – Genghang at China Railway

Happy 10 years of OpenStack! Millions of cores, 100,000 community members, 10 years of you.

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful. 

Here, we’re talking to Genghang from China Railway. He tells the community about how he got started with OpenStack and his favorite memory from the last 10 years of OpenStack.


10 Years of OpenStack – Thomas Goirand at Infomaniak

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful. 

Here, we’re talking to Thomas Goirand from Infomaniak. He tells the community about how he got started with OpenStack and his favorite memory from the last 10 years of OpenStack.


Wallaby vPTG Summaries

The OpenStack community had its second virtual Project Teams Gathering (PTG) following the Open Infrastructure Summit in October. Over 500 individuals and 46 teams (30+ OpenStack teams) across the globe met and collaborated at the vPTG. Since the event concluded, several of those teams have posted summaries of the discussions they have had and the decisions that were made during the PTG.

Project Specific PTG Summaries

Cinder, Brian Rosmaita (PTL) 

Cyborg, Yumeng Bao (PTL) 

Glance, Abhishek Kekane (PTL)

Kolla, Mark Goddard (PTL)

Manila, Goutham Pacha Ravi (PTL)

Masakari, Radosław Piliszek (PTL)

Neutron, Slawek Kaplonski (PTL) 

OpenStack-Ansible, Dmitriy Rabotyagov (PTL)

Quality Assurance, Ghanshyam Mann (PTL)

OpenStack Technical Committee, Ghanshyam Mann (Technical Committee Member)

If you would like to get your post added to this list, please contact [email protected]

Feedback from the PTG

Like the last vPTG, we provided an etherpad throughout the event to collect feedback on how things went, from registration to the last meeting. Please add any feedback you feel might be missing!

10 Years of OpenStack – Ghanshyam Mann at NEC

Happy 10 years of OpenStack! Millions of cores, 100,000 community members, 10 years of you.

Storytelling is one of the most powerful means to influence, teach, and inspire the people around us. To celebrate OpenStack’s 10th anniversary, we are spotlighting stories from the individuals in various roles from the community who have helped to make OpenStack and the global Open Infrastructure community successful. 

Here, we’re talking to Ghanshyam Mann from NEC. He tells the community about how he got started with OpenStack and his favorite memory from the last 10 years of OpenStack.
