OpenStack’s 24th Release, Xena, Wields Powerful Hardware Support, Project Integration to Strengthen Open Infrastructure for Cloud-Native Applications

The latest release arrives as the 2021 User Survey reveals significant growth in OpenStack deployments ranging from hundreds of cores to six million cores; over 100 new OpenStack clouds have been built, growing the total number of cores under OpenStack management to more than 25,000,000

The OpenStack community today released Xena, the 24th version of the most widely deployed open source cloud infrastructure software. Highlights of the Xena release include support for new hardware features, improved integration among components, and reduction of technical debt to maintain OpenStack’s stable and reliable core.

OpenStack is one of the most active open source projects in the world, supported by a vibrant and engaged community of developers globally. Over the span of just 25 weeks, almost 15,000 changes authored by over 680 contributors from over 125 different organizations were included in the Xena release.

This release comes at a time when the OpenStack project is deployed in production more widely than ever. Over 100 new OpenStack clouds have been built in the past 18 months, growing the total number of cores under OpenStack management to more than 25,000,000 cores. Organizations with deployments ranging from hundreds of cores to six million cores have logged significant growth according to the 2021 OpenStack User Survey. The User Survey report will be made available ahead of the OpenInfra Live: Keynotes, November 17-18, where several of these production users will be sharing the details of their growing OpenStack use cases.

Learn more about the 24th release of OpenStack during tomorrow’s OpenInfra Live episode, OpenStack Xena: Open Source Integration and Hardware Diversity.

Superuser Awards Winners: How Their OpenStack Deployments Continue to Grow, Evolve

Since the Paris Summit in 2014, the OpenInfra Foundation has hosted the annual Superuser Awards to recognize organizations that have used open infrastructure to meaningfully improve their business while contributing back to the community. On a recent episode of OpenInfra Live, previous Superuser Awards winners shared how their deployments have grown and evolved.

CERN

CERN was the first winner of the Superuser Awards in 2014, an event that Belmiro Moreira, cloud architect at CERN, remembers fondly. Fast forwarding to today, Moreira shared a dashboard screenshot, taken the day before the episode, of CERN’s live infrastructure monitoring.


Today, they have 15 OpenStack projects distributed across different releases to accommodate the different use cases among their end users. While this configuration can be challenging, Moreira says it allows flexibility for their overall infrastructure. In addition to OpenStack, the CERN infrastructure is supported by several open source projects including CentOS, Kubernetes, Ceph, Prometheus and a dozen more.

China Mobile 

Xiaoguang Zhang, cloud architect at China Mobile, and Zhiqiang Yu, chief open source liaison officer, provided an update on the massive growth of China Mobile’s infrastructure since their team won the Superuser Awards in 2016. Today, China Mobile has a network cloud, private cloud and public cloud based on OpenStack, spanning eight regions.

“In China Mobile’s OpenStack-based infrastructure, we support 4G, 5G, edge computing and other IT services for internal use, and also external vendors with our public cloud,” Zhang said. “We have scaled to around 300,000 physical servers and 6 million compute cores in total.”

Zhang provided an overview of the Virtualized Network Function (VNF) business, the OpenStack-based infrastructure and the hardware layer.

“For each OpenStack instance, it has to manage 500 to 1,500 servers,” he said. “The VNFs (4G and 5G) are running on top of the virtualization.” 

With this scale, Yu emphasized that China Mobile, a Gold Member of the OpenInfra Foundation, is really a Superuser now. 

“We are really looking forward to the next 10 years of OpenStack and the next 10 years of Open Infrastructure,” Yu said. 

Ontario Institute of Cancer Research

Jared Baker, cloud architect at the Ontario Institute of Cancer Research (OICR), shared how OpenStack has continued to support the institute’s research initiatives since their Superuser Awards win in 2018.

Baker detailed some of the current projects that are going on at OICR: 

  • Overture.bio, a collection of open source, extendable solutions for big-data genomic science
  • VirusSeq, which is sequencing 150,000 viral samples from Canadians who test positive for COVID-19
  • The International Cancer Genome Consortium (ICGC), which is uniformly analyzing specimens from 25,000 cancer patients
  • ARGO, which expands on ICGC and will analyze specimens from 100,000 cancer patients

VEXXHOST

Although VEXXHOST won the Superuser Awards as recently as the Open Infrastructure Summit Denver in 2019, Mohammed Naser, CEO of VEXXHOST, said their infrastructure has seen a lot of changes since. These include announcing an OpenStack upgrade solution, launching a new public cloud region in Amsterdam, and running Wallaby, the most recent OpenStack release at the time, for their private and public clouds.

Naser also emphasized that they are open source across the entire stack, showing a breakdown of their open source adoption from container runtimes and orchestration to monitoring and CI/CD. 

If you would like to participate in a future episode of OpenInfra Live, share your ideas!

Containers in OpenStack Clouds: How to Keep the Applications Safe and Healthy

Let’s take a closer look at two open source giants, OpenStack and Kubernetes: how they work together and complement each other, bringing more benefits to service consumers.

Opinions about Kubernetes and OpenStack tend to fall into two opposing camps: those who believe that OpenStack and Kubernetes are complementary technologies that work well in tandem, and those who consider Kubernetes a substitute for OpenStack, or vice versa. While it’s true that the uses of these tools overlap in many cases, it doesn’t necessarily mean that one can easily replace the other. They help solve similar issues, but on different layers of the stack, so their combination can deliver more powerful automation and scalability than either alone. An even better way to look at these open source giants is to consider Kubernetes an extension of OpenStack: an excellent tool for container orchestration in an OpenStack cloud environment.

OpenStack is an open source cloud platform; it helps businesses run and manage their cloud infrastructures. Kubernetes, also open source, is the most widely used container orchestrator for running and managing containers. A range of OpenStack components and software solutions help combine both technologies efficiently for the best results.

But what exactly do we get from extending OpenStack’s capabilities with containers orchestrated by Kubernetes?

Smooth Integration

As an open infrastructure, OpenStack provides API-driven access to compute, storage, and networking systems. The platform’s flexibility makes it possible to deploy everything an enterprise environment may need on a single system: bare metal, VMs and container resources.
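
As a rough illustration of that API-driven model, here is a minimal Python sketch using the openstacksdk library to provision a network, a volume and a VM from one script. The cloud, image and flavor names are hypothetical placeholders for your own environment, not part of the original article.

```python
# Minimal sketch of OpenStack's API-driven model via openstacksdk
# (pip install openstacksdk). Cloud/image/flavor names are placeholders.
import openstack

# Credentials come from clouds.yaml or OS_* environment variables.
conn = openstack.connect(cloud="my-cloud")

# Networking: a tenant network plus a subnet for the workload.
network = conn.network.create_network(name="demo-net")
conn.network.create_subnet(network_id=network.id, ip_version=4,
                           cidr="192.168.10.0/24", name="demo-subnet")

# Block storage: a 10 GiB Cinder volume.
volume = conn.block_storage.create_volume(name="demo-vol", size=10)

# Compute: boot a Nova VM attached to the new network.
image = conn.compute.find_image("ubuntu-20.04")
flavor = conn.compute.find_flavor("m1.small")
server = conn.compute.create_server(name="demo-vm", image_id=image.id,
                                    flavor_id=flavor.id,
                                    networks=[{"uuid": network.id}])
conn.compute.wait_for_server(server)
print(f"{server.name} is {server.status}")
```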

Kubernetes enables developers to focus on their primary goals: creating, maintaining and improving software. The workload-driven technology offers timely tools and interfaces that fit the features of the underlying cloud infrastructure.

Both sets of technologies can boast widespread integration into enterprise-level infrastructures and compatibility with many other IT solutions. On one side is OpenStack with its traditional VM-based technology, proven over the years; on the other is Kubernetes, a highly agile and dynamic orchestration system. Combined, they make a perfect duo for the enterprise cloud environment and bring each other exactly what they lack.

Kubernetes clusters consume compute, storage and networking resources from OpenStack through its APIs. By building an abstraction layer for these resources, OpenStack helps make cloud systems reliable, predictable, and steady.

For example, Kubernetes users commonly ask for standard services such as object and block storage to be smoothly integrated with their containers. OpenStack offers practical tools for such requirements by supporting the most significant storage and networking solutions for organizations. Running Kubernetes with OpenStack brings seamless functional integration of containers into the cloud environment.

There are several solutions for running Kubernetes and other application frameworks on top of OpenStack. Magnum, an OpenStack project, is the easiest way to deliver multi-tenanted and self-serviced container frameworks. It provides a simple API to deploy fully managed clusters backed by a choice of several application platforms, including Kubernetes.
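
To make that concrete, the sketch below requests a managed Kubernetes cluster through Magnum’s API with openstacksdk. It is a hypothetical example: the template and keypair names are assumptions, and the exact proxy attributes can vary between SDK releases.

```python
# Hypothetical sketch: requesting a Kubernetes cluster from Magnum via
# openstacksdk's container-infra proxy. Names are placeholders.
import openstack

conn = openstack.connect(cloud="my-cloud")

# A cluster template (prepared by the operator) fixes the COE type,
# image, flavors and network driver for clusters built from it.
template = conn.container_infrastructure_management.find_cluster_template(
    "k8s-template")

cluster = conn.container_infrastructure_management.create_cluster(
    name="demo-k8s",
    cluster_template_id=template.id,
    keypair="my-keypair",   # SSH keypair for the cluster nodes
    master_count=1,         # control-plane nodes
    node_count=3,           # worker nodes
)
print(f"Cluster {cluster.name} requested, status: {cluster.status}")
```

Once the cluster is ready, Magnum can hand back a kubeconfig (for example via the `openstack coe cluster config` CLI), so standard Kubernetes tooling works against it unchanged.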

Cornerstone of Security

As a powerful orchestrator, Kubernetes greatly simplifies application management compared to running applications on traditional servers or VMs. The cutting-edge technology brings freedom to development processes, automates iterations, and saves time and coding effort. Being relatively new, it uses the latest security practices. However, modern and cutting-edge do not a priori mean the best choice for business needs. Security practices proven by years of successful use can be undone by a single new vulnerability shipped in a component upgrade. A threat to the cluster’s security and the data stored on it can come both from the external network and from the cluster itself. A misconfigured application or an overlooked vulnerability could allow an attacker to access the container and the host’s file system.

Basic rules like regular system audits and cluster policy renewal keep a Kubernetes cluster protected and diminish possible problems. Essential cybersecurity measures for Kubernetes include the following:

  • strict firewall rules and access restrictions from the external network;
  • quota and authentication management for resources and users;
  • limiting container privileges and privileges within containers (see the sketch after this list);
  • reliable sources of images, to prevent the risk of running an unsafe application;
  • log monitoring and regular cluster security audits, to automatically detect known vulnerabilities and eliminate them in time.
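
As a minimal illustration of the privilege-limiting measure above, this hedged sketch uses the official kubernetes Python client to launch a pod whose container is denied root and privilege escalation. The pod name and image are hypothetical placeholders.

```python
# Sketch: restricting container privileges with the kubernetes Python
# client (pip install kubernetes). Names and image are placeholders.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config

locked_down = client.V1SecurityContext(
    privileged=False,                  # never run a privileged container
    allow_privilege_escalation=False,  # block setuid-style escalation
    run_as_non_root=True,              # refuse to start as UID 0
    read_only_root_filesystem=True,    # keep the root filesystem immutable
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="restricted-app"),
    spec=client.V1PodSpec(containers=[
        client.V1Container(
            name="app",
            image="registry.example.com/app:1.0",  # trusted registry only
            security_context=locked_down,
        )
    ]),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```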

Setting up security once and for all is a desirable but impossible dream. In a constantly changing cloud environment, new weaknesses and threats appear. Organizations should consider that running Kubernetes on bare metal carries a higher risk from container escape threats: if an intruder lands directly on the host, the operator has a massive problem, since all users sharing the same kernel are compromised. OpenStack, on the other hand, can provide a unified platform for orchestrating VMs, containers, and hardware compute resources while keeping the security of the clusters and the whole system at a high level. To implement and manage such a platform, a company needs either an experienced team of quite expensive professionals or the right software solution with proven effectiveness and security.

Faster App Development

Kubernetes gives developers a magic wand for speedy and painless management of many containers and applications. OpenStack creates the environment for such magic to happen without delays or mistakes. The growing demand for on-demand, access-anytime services makes Kubernetes and OpenStack perfect allies for meeting users’ needs. The benefits range from increased application portability to reduced development time and enhanced application stability. When Kubernetes and OpenStack run together, the needed piece of code can be instantly found, identified, and used, saving hours of coding and searching. And intelligent laziness, as you know, is the best engine of progress.

Developers widely use Kubernetes because its in-demand functionality makes the technology ideal for delivering applications:

  • As isolated workspaces, containers make it possible to deploy multiple completely different applications to a single bare metal server or VM without any conflict between the applications.
  • A pod in a Kubernetes cluster can run a single container, or multiple containers that need to work together. To keep an application’s components grouped, a pod encapsulates multiple co-located containers that are tightly coupled and need to share resources (see the sketch after this list).
  • Kubernetes runs containerized workloads in production with ease of maintenance, supported by proven practices from the community.
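
To illustrate the pod point from the list above, here is a hedged sketch of a two-container pod whose containers cooperate through a shared emptyDir volume, again with the kubernetes Python client; all names and images are placeholders.

```python
# Sketch: a multi-container pod in which a web server and a log-shipping
# sidecar share one emptyDir volume. Names/images are placeholders.
from kubernetes import client, config

config.load_kube_config()

shared = client.V1Volume(name="shared-logs",
                         empty_dir=client.V1EmptyDirVolumeSource())
mount = client.V1VolumeMount(name="shared-logs", mount_path="/var/log/app")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web-with-sidecar"),
    spec=client.V1PodSpec(
        volumes=[shared],
        containers=[
            # The main application writes its logs into the shared volume.
            client.V1Container(name="web", image="nginx:1.21",
                               volume_mounts=[mount]),
            # The co-located sidecar tails the same files and ships them.
            client.V1Container(name="log-shipper", image="busybox:1.34",
                               command=["sh", "-c",
                                        "tail -F /var/log/app/access.log"],
                               volume_mounts=[mount]),
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```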

Today nobody doubts the benefits of containerized applications, but there are still debates about the host platforms that serve them. According to many recent surveys, most developers choose Kubernetes as the number one platform for container management. However, if you want to get more from Kubernetes and OpenStack, increasing business productivity, saving development time and improving application portability, you should run this prominent tandem together.

Auto-Scalability

While traditional applications require larger hardware to scale (vertical scaling), cloud-based applications can operate across discrete hardware (horizontal scaling). To meet this requirement, OpenStack is designed to be horizontally scalable: rather than switching to larger servers, you procure more servers and install identically configured services. OpenStack’s horizontal scaling is a great scenario where Kubernetes can add more flexibility.

One of Kubernetes’ most powerful features is autoscaling, an automated process that would otherwise require intensive human effort. At the moment, Kubernetes has three autoscaling methods: scaling pods horizontally, scaling pods vertically, and scaling clusters.

Autoscaling is essential in a private cloud environment; without it, whenever conditions change, you have to provision resources and then scale down manually. Kubernetes autoscaling helps optimize resources by automatically increasing the number of cluster nodes and pods when more resources are demanded, and adjusting back to fewer nodes and pods to save them.
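
As a concrete, hedged example of the horizontal method, the sketch below attaches an autoscaler to a hypothetical Deployment named "web" using the kubernetes Python client; the thresholds are illustrative, not recommendations.

```python
# Sketch: a horizontal pod autoscaler (autoscaling/v1) for a
# hypothetical Deployment called "web". Thresholds are illustrative.
from kubernetes import client, config

config.load_kube_config()

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=2,                        # never fewer than 2 pods
        max_replicas=10,                       # hard ceiling on pods
        target_cpu_utilization_percentage=70,  # add pods above 70% CPU
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa)
```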

Scaling your OpenStack private cloud with Kubernetes brings additional benefits: resources can be scaled both horizontally and vertically in an efficient way.

Double the benefits with FishOS

Years of experience have made Sardina’s team experts in creating private cloud environments based on OpenStack. We also offer smooth Kubernetes integration with FishOS to provide customers with container orchestration tools. We fully integrate Kubernetes within the OpenStack environment, ensuring features such as automated rollouts and rollbacks, high availability, heterogeneous clusters, storage orchestration and self-healing.

FishOS can run Kubernetes, in compliance with the official certification tests, on both VMs and bare metal servers. But it’s one thing to get Kubernetes running; keeping the system scalable, reliable, efficient, and cost-effective is quite another. The key value of FishOS lies in these critical aspects of operating private clouds. FishOS enables enterprise operators to easily provide multi-tenanted Kubernetes environments with proven security assurances, and helps developers deliver applications faster and more easily. Sardina’s customers also gain mature, tested and proven persistent block storage, software-defined storage, and software-defined networking.

In larger organizations with separate operator and consumer divisions, it makes sense to run Kubernetes clusters within VMs. This allows the organization to benefit from the strong security segregation of VMs and the reliability and resilience they afford. OpenStack, for its part, can also cater to Kubernetes on bare-metal nodes.

The benefits of combining OpenStack and Kubernetes are a far more compelling argument than using either solo. No wonder the OpenStack Foundation is committed to guaranteeing that emerging technologies can be incorporated and utilized within OpenStack, and containers are a prime example of that commitment. OpenStack has leveraged its design and a large community to integrate container technologies at different levels. By using Kubernetes in an OpenStack cloud environment, an organization can double the benefits: two open source giants working together can show better results in security, resilience and scalability, allowing faster application development and delivery of infrastructure innovations.

Share your OpenStack Feedback in the Annual User Survey

The OpenStack User Survey is open!

This is your annual opportunity to provide anonymous feedback to the upstream community, so the developers can better understand OpenStack environments and software requirements. Your anonymized feedback is shared with the project teams so we can improve how the community and project provide value to you.

The survey will take less than 20 minutes. As a token of our appreciation, all participants will receive early access to User Survey findings. 

Complete your User Survey by Friday, August 20 at 11:59pm UTC to be included in this year’s round of analysis.


The OpenStack community IRC network moved to OFTC

The OpenStack community uses IRC as one of its communication channels for development activities. IRC runs on many different networks, and we had used Freenode as our IRC network since the beginning.

Recently, there was a change in ownership, organizational structure, and policy on Freenode. Many discussions occurred on social media, in this etherpad and on the openstack-discuss mailing list about the changes Freenode was going through, questioning whether we should continue using it. Given the feedback from the community and the situation with Freenode, the OpenStack Technical Committee, with input from other community leaders, decided to move the IRC network from Freenode to OFTC. You can read the OpenStack Technical Committee resolution about this decision.

This migration was done on May 31, 2021. A huge thanks to the OpenDev team, especially Jeremy Stanley (fungi), for this smooth migration. All the current IRC channels used by the OpenStack community are registered in the OFTC network with the same name.

The “How to join OFTC OpenStack channels” page can help you join us on OFTC.

All OpenStack discussions and meetings are happening on OFTC channels now. To track our migration progress, we are using this etherpad for the “Communicate with community” tasks. If any of your co-workers, friends, or other people around you are not aware of this change, please let them know about the migration. For any further questions, feel free to contact us in the #openstack-dev IRC channel on OFTC or on the openstack-discuss mailing list.

Introducing On-Demand OpenStack Private Clouds and Initial Use Cases

InMotion Hosting has recently brought to market an automated deployment of OpenStack and Ceph that we sell as on-demand Private Cloud and as part of our Infrastructure as a Service.  We believe that making OpenStack more accessible is critical to the health of the OpenStack community as it will allow smaller teams a low-risk and cost-effective place to learn and run OpenStack.

As this is a new kind of offering, it is worth providing some context and a possible market position. Based on the speed with which the cloud is delivered, we adopted the term “on demand” internally and have continued to use it publicly.

On-Demand Private Cloud Defined

Closed source on-demand private clouds emerged in 2018 from traditional industry names building on their history in on-premises private clouds. VMware and Nutanix, for example, accomplished this through partnerships with large public clouds. Open source on-demand private clouds emerged in late 2020 and are currently playing catch-up to the closed source solutions.

The features and functionality of a full OpenStack deployment are typically considered to be ahead of the closed source solutions. Critically, though, the closed source solutions were significantly easier to set up and had a much more predictable likelihood of success. This gave the closed source offerings a significant advantage and made them a better fit for smaller companies.

Time to Utilization/Time to Production

As we considered overcoming this disadvantage, we applied the Agile philosophy around delivery time. Consider “time to production” as measured from a recognized need for resources to the launch of a VM or container to meet that need. For our private clouds, time to utilization was commonly measured in quarters before 2016, then months in 2016-2017, then weeks in 2018-2019; in 2020, it fell quickly from weeks to minutes.

Provision time is now between 30 and 45 minutes for a 3-server cluster, depending on the type of hardware and install complexity: NVMe is faster than SATA SSD, and including more OpenStack components is slower. The next development cycles are focused not on faster delivery but on including more OpenStack components in the same sub-1-hour window. Moving from 45 minutes to under 20 minutes is possible, but it does not appear to be very important to customers at this time.

Usage Based Billing

In addition to fast time to utilization, usage-based billing is a key part of being on-demand. With spin-up taking under an hour, usage can be billed by the hour. Though usage-based billing is of somewhat limited value for long-term production environments, it is a required attribute for variable workloads, PoCs, testing, and training.

Small and Low Cost Building Blocks

Finally, small building blocks are a critical part of being on-demand. Currently, the smallest full private cloud is composed of 3 hyper-converged servers, each with a single CPU and a single SATA SSD storage OSD. This creates an HA, Ceph-backed OpenStack cloud billed per hour with only a 1-hour minimum.

To summarize the definition again: fast time to utilization, usage-based billing, and small building blocks. For the future, it also sets up a comparison against the public mega-clouds on the ease of spinning up VMs or containers, running the workload, turning them off, and paying only for what you have used.

First Use Cases for On-Demand Private Cloud

The initial use cases we have either placed or are actively selling fall into the following categories. Please note these are initial, extrapolated findings based on a limited set of data. Combined with our experience in the market, these are the use cases we expect to gain significant traction in the next several years, though it is still early in the data.

Production Private Clouds with Managed Support Level

Many companies are feeling pressure to “move to the cloud” away from both on-site resources and traditional hosting providers. We found that many companies would prefer to work within a closed system that gives them access to cloud functionality without having to learn how to control costs at the service, VM, or other micro-levels.

This group has so far wanted to focus on using the cloud rather than both running and using the cloud. We are now formally adding managed services to accommodate these customers.

The hyper-converged hardware has fit this use case well. Of note, ML and AI customers are also very interested in hardware components like GPUs, and we will be adding those to our catalog in the coming months.

On-Demand OpenStack Cloud for Training and R&D Purposes

This was a use case we expected. In days past, creating a high-quality OpenStack cloud required a group of skilled systems engineers, including specialists in hardware, networking, security, and Linux. It is a stretch for a medium-sized business to have these skills on staff, and a small business is unlikely to have more than one of them.

Even with a skilled group, most will not have experience with OpenStack. To learn to run a private cloud, the IT team has to convince their company to finance a “pilot program” for the potential cloud. Before on-demand OpenStack, those pilot clouds could cost hundreds of thousands of dollars in server and network gear, plus 3-12 months’ worth of time. Even then, many, maybe even the majority, of the pilots never turn into a production cloud.

Many enterprise-focused companies, like Red Hat, Canonical, and Accenture, successfully help enterprises bridge that gap economically. Smaller IT teams, however, simply couldn’t access the benefits and cost savings of private cloud.

With the advent of on-demand private cloud providers, the two most considerable issues, cost and time, have been overcome. Now these users can learn with on-demand OpenStack and, regardless of where the deployment goes, they have cut significant time and risk out of using OpenStack.

Proof of Concept of workloads from the Public Cloud

With the cost of the mega-clouds being so high, it is natural that this use case is and will remain significant. At this time, we are not sure whether these users will prefer a managed private cloud or will take steps to assign staff in their company to become cloud operators.

Currently, we are actively pursuing the latter, as it gives the company the lowest-cost option. We also see improvements in the ease of being a private cloud operator, with key advancements like containerized control planes. A company with a reasonably skilled systems team can take on cloud operator duties, increase its own prowess, and save money.

Data Center Providers adding On-Demand Private Cloud as an offering

Data center providers used to make significant portions of their revenue from smaller customers that would often purchase just ¼ and ½ racks. Much of that type of business has moved away from direct purchases in data centers to either mega-clouds or bare metal hosting providers.

To offset this pattern, very large Data Center Providers that have significant resources to adapt their business model have been moving into the Cloud Provider space for some time.  For example, Equinix acquired Packet a few years ago and now Equinix offers “On Demand Metal”.

Smaller DC providers have not had the budget or engineering prowess to buy or create a cloud offering of their own. As on-demand open source private cloud technology matures, we expect, and have already seen, significant interest from that sector. With the incredibly rich functionality supplied by OpenStack, a small DC provider can offer cloud products that compete with the mega-clouds.

Public Cloud Providers using OpenStack Already

We are actively working on this use case as we feel enabling OpenStack Public Cloud Providers to more effectively compete with the mega-clouds is critical to the health of Open Source.

Hyper-converged and on-demand hardware plus fully private OpenStack opened a few doors for current OpenStack public cloud providers.  In the past, for a provider, offering an additional location meant running that location in the red for quite some time until the customer base at that location reached a certain scale.  Without significant resources, only a few locations were possible.

As OpenStack has great native functionality for regions and for letting one OpenStack deployment control all locations, it is straightforward to add numerous small clouds in different geographic locations. We specifically built small footprints that can scale up as demand requires, so an OpenStack public cloud provider can offer many locations with a much smaller investment than before.

We have our own roadmap challenge in adding many locations, as we must reach critical mass, but our mission is to provide enough locations that any small OpenStack public cloud provider can match up with larger competitors and even the mega-clouds.

Next Steps

In 2021 we are concentrating on a few areas:

  • Adding additional OpenStack-based functionality into the current hardware footprints.
  • Partnering with other Open Source friendly companies to offer best-of-breed tools for monitoring, disaster recovery, ML/AI automated operations, infrastructure automation, etc.
  • Market awareness that small on-demand OpenStack has arrived and that smaller teams now have access to an Open Source alternative to the mega-clouds.

If you are interested in what we are doing, please reach out to us or come explore On-Demand Private Cloud.


As Edge Applications Multiply, OpenInfra Community Delivers StarlingX 5.0, Offering Cloud Infrastructure Stack for 5G, IoT

Everything carriers and enterprises need to deploy edge clouds for 5G, IoT is in open source StarlingX 5.0, available today; new features enhance security, operability and automation.

AUSTIN, Texas—June 2, 2021—StarlingX—the open source edge computing and IoT cloud platform optimized for low-latency and high-performance applications—is available in its 5.0 release today. StarlingX combines Ceph, OpenStack, Kubernetes and more to create a full-featured cloud software stack that provides everything carriers and enterprises need to deploy an edge cloud on a few servers or hundreds of them.

New features in StarlingX 5.0 include:

  • edgeworker nodes, which enable customized operating systems near and at the edge
  • security enhancements
  • improvements to orchestration and automation
  • upgrades to and containerization of integrated open source components

***Download StarlingX 5.0 at https://opendev.org/starlingx***

StarlingX is in production, driving top 5G deployments

T-Systems and Verizon rely on StarlingX for their edge and production vRAN use cases, and a growing set of organizations are also evaluating the project for production deployment.

“It’s been exciting to see the tremendous increase in StarlingX community activity, not only in terms of commercial adoptions and evaluations but also with respect to investments in the project by many different organizations and individual contributors,” said Paul Miller, chief technology officer, Wind River. 

“We’re also seeing now, as a result of early market adoption of this open source stack, the use of StarlingX in edge computing and industrial IoT solutions,” Miller continued. “With market research indicating that about 70% of compute will be moving towards the edge over the next five years or so, we foresee continued strong adoption and investment in StarlingX community activity. As an original contributor to the code base and a strong supporter of the project, we are encouraged to see business being driven by StarlingX. The kind of ecosystem development we’re seeing around StarlingX is exactly what you want to see in a thriving open source community.”

Key Features of StarlingX 5.0

To further support the low-latency and distributed cloud requirements of edge computing and industrial IoT use cases, the community prioritized these features in StarlingX 5.0:

  • Support for ‘edgeworker’ nodes, a new personality distinguished from ‘worker’ nodes. Edgeworker nodes are usually deployed close to an edge device, such as an I/O device, a camera, a servo motor or a sensor, to manage host-based enrollment. The ‘edgeworker’ personality is particularly suitable when users want a lightweight approach, deploying only a few agents on the nodes. With edgeworker nodes, users can enroll new operating systems and new server classes, which expands the possibility for new StarlingX use cases.
  • Support for Nvidia GPUs, enabling operators to do additional offload for those particular workloads that require GPU interacting, such as machine learning or other image-based processing. 
  • Support for FPGA image update orchestration. FPGA and acceleration are important features of edge systems. The new FPGA image update orchestration feature in StarlingX 5.0 improves operations, supporting automation across the distributed cluster. This gives users the option to deploy FPGA with orchestrations that are automated from end to end.
  • A PTP notification feature to further extend StarlingX’s support of Precision Time Protocol. Operators can now receive notifications about the PTP state and take action in case the system time is no longer in sync with the PTP clock source, which is critical for time-sensitive applications.
  • Vault integration for secret management, a security focused feature. Users want the ability to store and access secrets securely. These secrets can include credentials, encryption keys, API tokens and other data that should not be stored in plain text on a system. Vault, an open source project, provides the ability to encrypt and store secrets with access control via a range of authorization and access policy configurations. Vault’s features include dynamic secret generation, data encryption, leasing and renewal, revocation and audit/logging. The integration of Vault improves StarlingX’s security posture and encryption capabilities while maintaining manageability.
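
As a hypothetical illustration of the Vault pattern described in the last item, the following Python sketch uses the hvac client to store and read a secret from Vault’s KV v2 engine. The address, token and paths are placeholders, and this is generic Vault usage rather than StarlingX’s own tooling.

```python
# Hypothetical sketch of storing/reading a secret in Vault with the
# hvac client (pip install hvac). URL, token and paths are placeholders.
import hvac

vault = hvac.Client(url="https://vault.example.com:8200",
                    token="s.example-token")  # placeholder auth

# Write a credential to the KV v2 engine instead of keeping it in
# plain text on the host.
vault.secrets.kv.v2.create_or_update_secret(
    path="apps/db", secret={"username": "app", "password": "s3cr3t"})

# Read it back at deploy time; access is governed by Vault policies.
secret = vault.secrets.kv.v2.read_secret_version(path="apps/db")
print(secret["data"]["data"]["username"])
```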

Other additions to the new version include:

  • Improvements to certificate management to enhance automation
  • Containerized Ceph storage by using Rook
  • Support for Net-SNMP v3 for the fault management service
  • CephFS for cluster storage 
  • Container Image Signature Validation

Learn more about these and other features of StarlingX 5.0 in the community’s release notes.

OpenInfra Community Drives StarlingX Progress

The StarlingX project launched in 2018. Since then, there have been more than 10,000 commits from over 260 authors. Today’s 5.0 release added 1100 commits from 105 developers to those total numbers. The StarlingX community is actively collaborating with several other groups such as the OpenInfra Edge Computing Group, ONAP, Akraino and more.

After initial code for the project was contributed by Wind River and Intel to the Open Infrastructure Foundation, the active community of support for StarlingX has expanded to include 99Cloud, FiberHome, and China UnionPay, among others. China UnionPay, China Unicom, T-Systems and Verizon have become early adopters of the software.

Community Accolades for StarlingX 5.0   

“The StarlingX community is continuously making significant progress,” said Shuquan Huang, technical director, 99Cloud Inc. “We’re excited to see StarlingX 5.0 is available with a lot of enhancements and new features. StarlingX will be the key to meet the requirement of edge computing, and it’s time to use the latest StarlingX platform to build your edge cloud.”

“The StarlingX 5.0 release is a new stage for edge computing,” said Wang Hao, senior software engineer, FiberHome Enterprise. “There are some great features that have been introduced to the platform to enhance edge computing security and bring more flexibility to different types of nodes that will extend StarlingX’s application to more edge scenarios.  Fiberhome will continue to focus on edge computing techniques and work with the StarlingX community to bring more values to our users.”


About StarlingX

StarlingX is the open source edge computing and IoT cloud platform optimized for low latency and high performance applications. It provides a scalable and highly reliable edge infrastructure, tested and available as a complete stack. Applications include industrial IoT, telecom, video delivery and other ultra-low latency use cases. StarlingX ensures compatibility among diverse open source components and provides unique project components for fault management and service management, among others, to ensure high availability of user applications. StarlingX is the ready-for-deployment code base for edge implementations in scalable solutions. StarlingX is an Open Infrastructure Foundation project. www.starlingx.io 

###


Xena vPTG Summaries

The OpenStack community had its third virtual Project Teams Gathering (PTG) in late April. Over 500 individuals and 49 teams across the globe met and collaborated at the vPTG. Since the event concluded, several of those teams have posted summaries of the discussions they have had and the decisions that were made during the PTG.

PTG Summaries

Cinder, Brian Rosmaita (PTL)

CloudKitty, Rafael Weingärtner

Cyborg, Xin-ran Wang (PTL)

Glance, Abhishek Kekane (PTL)

Interop, Arkady Kanevsky

Kolla, Mark Goddard (PTL)

Kuryr, Michał Dulko

Manila, Goutham Pacha Ravi

Multi-arch SIG, Rico Lin

Neutron, Slawek Kaplonski (PTL)

Nova, Balazs Gibizer (PTL)

Openstack Ansible collection/modules, Sagi Shnaidman

OpenStack-Ansible, Dmitriy Rabotyagov (PTL)

OpenStack Technical Committee, Ghanshyam Mann (Technical Committee Member)

Quality Assurance, Martin Kopec (PTL)

Role-Based Access Control, Lance Bragstad

Scientific SIG

Telemetry, Matthias Runge (PTL)

TripleO, Marios Andreou (PTL)

Venus, Liye Pang (逄立业)

OpenInfra Live PTG Recap

Project leaders from OpenStack, Kata Containers, StarlingX, OpenStack Ironic, the Edge Computing Group, Scientific SIG, and Multi-Arch SIG provide recaps from discussions held at the PTG.

If you would like to get your post added to this list, please contact Helena Spease at [email protected]

Feedback from the PTG

Like at the last vPTG, we provided an etherpad throughout the event to collect feedback on how things went, from registration to the last meeting. Please add any feedback you feel might be missing!


Sardina Systems and Ambedded Technology announce global partnership to deliver scale-out enterprise Software-Defined Storage solution on Kubernetes

Sardina Systems, a leading OpenStack and Kubernetes platform software vendor addressing the full lifecycle of clouds with pre-integrated operations tools, and Ambedded Technology, a software-defined storage company with expertise in Ceph storage and embedded Arm platforms, today announced they have partnered to deliver a highly efficient and modern software-defined storage solution.

Companies looking for an agile, automated, Kubernetes-based SDS solution to replace static and inefficient hardware can now embrace the innovative joint solution offered by the Sardina Systems and Ambedded Technology partnership.

The strategic partnership combines Sardina’s award-winning FishOS solution, an OpenStack and Kubernetes cloud platform with zero-downtime operations, with Ambedded’s Unified Virtual Storage (UVS) Manager, a web-based graphical user interface (GUI) that lets administrators simply manage, configure and monitor Ceph storage.

The software-defined storage solution is highly scalable and easy to operate; it reduces infrastructure capital investment and operational cost while achieving high availability, and it helps improve performance in both on-premises and hosted private cloud systems.

“We have teamed with Ambedded Technology to fulfill enterprise customers’ needs for faster scalability, higher availability, greater flexibility and efficiency, allowing them to focus on application development and operations with a reduced operational cost.

“What seemed to be impossible a few years ago can now be just one click away: enterprises benefit from Kubernetes and OpenStack clouds in a single platform, offering customers a highly scalable, automated technology with operational tools and optimised infrastructure,” said Mihaela Constantinescu from Sardina Systems.

Thanks to both partners’ global coverage, the solution is available to the worldwide market, enabling a scalable, flexible, automated, fully managed storage solution for today’s business and application demands. Applications are dynamically provisioned with the precise mix of capacity, performance, and security needed.

“We are delighted to announce our partnership with Sardina Systems. Together we can offer a wider range of turnkey solutions to our customers, from the infrastructure to the software-defined storage, which enables our enterprise customers to move to an efficient software-defined IT solution without facing complicated integrations,” said Dominique SUN from Ambedded Technology.

The joint solution comes with broad benefits for customers, including:

  • Easy to use, scale and manage storage
  • Lower initial investment and operational expenses, even at a massive scale
  • Automated operational tools for the entire lifecycle of OpenStack and Kubernetes cloud operations
  • Highly available and scalable platform
  • Exceptional performance
  • Affordable infrastructure cost

About Sardina Systems

Founded in 2014, Sardina Systems makes infrastructure invisible, elevating IT to focus on enterprise applications and services. FishOS natively converges server, storage, virtualization, and networking into a resilient, software-defined, AI-based solution, delivering optimized performance, cloud flexibility and robust security for all enterprise applications at any scale.

Sardina Systems has operations in Germany, Romania, Russia, Ukraine, and the UK.

About Ambedded Technology

Ambedded is a software-defined storage company with expertise in Linux OS, the kernel, software-defined storage, embedded systems, and Arm servers.

With its purpose-built Ceph appliances, the Ambedded team has extensive experience helping customers adopt Ceph solutions in versatile industries such as telecom, medical, military, edge datacenters, and enterprise storage requiring high availability.

# # #
