OpenStack Developer Mailing List Digest July 2-22

SuccessBot Says

  • notmyname: the 1.5-year-long effort to get at-rest encryption into OpenStack Swift has been finished. At-rest crypto has landed in master.
  • stevemar: API reference documentation now shows keystone’s in-tree APIs!
  • samueldemq: Keystone now supports Python 3.5
  • All

Troubleshooting and Ask OpenStack

  • The Keystone team wants to create troubleshooting documents.
  • Ask OpenStack might be the right forum for this, but help is needed:
    • Keystone core should be able to moderate.
    • A top-level interface rather than just tags. The page should have a series of questions and links to the discussions for each question.
  • There could also be a keystone-docs repo that would have:
    • FAQ troubleshooting
    • Install guides
    • Unofficial blog posts
    • How-to guides
  • We don’t want a static troubleshooting guide. We want people to be able to ask questions and link them to answers.
  • Full thread

Leadership Training Recap and Steps Forward

  • Colette Alexander has successfully organized leadership training in Ann Arbor, Michigan.
  • 17 people from the community attended. 8 of them from the TC.
  • Subjects:
    • Servant leadership
    • Visioning
    • Stages of learning
    • Good practices for leading organizational change.
  • Reviews and reflections from the training have been overwhelmingly positive, and some blog posts have started to pop up [1].
  • After the training, a smaller group of the 17 attendees met to discuss how some of the ideas presented might help the OpenStack community.
    • To more clearly define and accomplish that work, a stewardship working group has been proposed [2].
  • Because of the success, and because 5 TC members weren’t able to attend, Colette is working to arrange a repeat offering.
  • Thanks to all who attended and the OpenStack Foundation who sponsored the training for everyone.
  • Full thread

Release Countdown For Week R-12, July 11-15

  • Focus:
    • Major feature work should be well under way as we approach the second milestone.
  • General notes:
    • We freeze releases of libraries between the third milestone and the final release.
      • Only emergency bug fix updates are allowed during that period.
      • Prioritize any feature work that includes work in libraries.
  • Release actions:
    • Official projects following any of the cycle-based release models should propose beta 2 tags for their deliverables by July 14.
    • Review stable/liberty and stable/mitaka branches for needed releases.
  • Important dates:
    • Newton 2 milestone: July 14
    • Library release freeze: starting R-6, Aug 25
    • Newton 3 milestone: September 1
  • Full thread

The Future of OpenStack Documentation

  • Current central documentation
    • Consistent structure
    • For operators and users
    • Some less technical audiences use this to evaluate OpenStack against various other cloud infrastructure offerings.
  • Project documentation trends today:
    • Few are contributing to central documentation
    • More are becoming independent with their own repository documentation.
    • An alarming number just don’t do any.
  • A potential solution: Move operator and user documentation into individual project repositories:
    • Project developers can contribute code and documentation in the same patch.
    • Project developers can work directly with documentation team members, or through a liaison, to improve documentation during development.
    • The documentation team primarily focuses on organization/presentation of documentation and assisting projects.
  • Full thread


[1] –

[2] –


Get a high COA score. Win a Chromebook.

Are you ready to show off your OpenStack skills? The COA high score contest has begun! From July 18th through September 11th, the COA exam takers with the highest score each week will win a Chromebook from the OpenStack Foundation.


Entry is easy:

  1. Go to the Exam Website and read the exam materials
  2. Register and schedule your exam!


That’s it!
Winners will be notified via email. See the full contest rules below. Good luck!


Certified OpenStack Administrator Exam Contest

The Certified OpenStack Administrator Exam Contest (the “Contest”) is designed to encourage eligible individuals (“Entrant(s)” or “You”) to take the Certified OpenStack Administrator Exam (the “Exam”). OpenStack will choose the winner, and the prize will be awarded in accordance with these Official Rules (these “Rules”).

  1. BINDING AGREEMENT: In order to enter the Contest, you must agree to the Rules. Therefore, please read these Rules prior to entry to ensure you understand and agree. You agree that submission of an entry in the Contest constitutes agreement to these Rules. You may not submit an entry to the Contest and are not eligible to receive the prize described in these Rules unless you agree to these Rules. These Rules form a binding legal agreement between you and OpenStack with respect to the Contest.
  2. ELIGIBILITY: To be eligible to enter the Contest, an Entrant must be 18 years of age or older and eligible to take the Exam as set forth in the Exam Handbook (available for download on the Exam Website).  Contest is void where prohibited by law. Employees, directors, and officers of OpenStack and their immediate families (parents, siblings, children, spouses, and life partners of each, regardless of where they live) and members of the households (whether related or not) of such individuals are ineligible to participate in this Contest.
  3. SPONSOR: The Contest is sponsored by OpenStack Foundation (“OpenStack” or “Sponsor”), a Delaware non-stock, non-profit corporation with principal place of business at 1214 West 6th Street, Suite 205, Austin, TX 78703, USA.
  4. CONTEST PERIOD: The Contest begins on July 18, 2016, when posted to The OpenStack Blog, and ends on September 11, 2016 at 11:59 p.m. Central Time (CT) (“Contest Period”). All dates are subject to change.
  5. HOW TO ENTER: To enter the Contest, visit the Exam website (“Exam Website”) during the Contest Period, click “Get Started,” and follow the prompts to purchase and take the Exam.  You will be automatically entered into the Contest by completing the Exam during the Contest Period.  If you start but do not complete the Exam during the Contest Period, you will not be entered into the Contest. If you do not pass the exam, you will not be entered to win.  Additional entry information is available on The OpenStack Blog.  If you wish to opt out of the Contest, please email us at [email protected]. LIMIT ONE (1) ENTRY PER ENTRANT. Subsequent entries will be disqualified.
  6. TAKING THE EXAM: To enter the Contest, you must complete the Exam.  You understand that in addition to these Rules, you must comply with all terms, conditions, rules, and requirements set by OpenStack and the Exam administrators for the Exam.  Information regarding the Exam is available on the Exam Website and in the Exam Handbook.
  7. SCORING: Exams will be scored in accordance with the Exam Handbook, available for download on the Exam Website.  The winner will be the individual that achieves the highest score on the Exam. In the event of a tie, the individual who completed the Exam in the shortest amount of time will be the winner.  For the purposes of the tie-breaker, Exam time will be measured from initial commencement of the Exam to final submission.  Time from purchase of Exam to commencement of Exam will not be considered.
  8. PRIZE: One winner will be selected.  The winner will receive a Toshiba Chromebook 2 – 2015 Edition (CB35-C3350) with an approximate retail value of $350 US Dollars.
  9. TAXES: AWARD OF PRIZE TO POTENTIAL WINNER IS SUBJECT TO THE EXPRESS REQUIREMENT THAT THEY SUBMIT TO OPENSTACK ALL DOCUMENTATION REQUESTED BY OPENSTACK TO PERMIT IT TO COMPLY WITH ALL APPLICABLE STATE, FEDERAL AND LOCAL TAX REPORTING. TO THE EXTENT PERMITTED BY LAW, ALL TAXES IMPOSED ON PRIZES ARE THE SOLE RESPONSIBILITY OF THE WINNER. In order to receive a prize, potential winner must submit tax documentation requested by OpenStack or otherwise required by applicable law, to OpenStack or a representative for OpenStack or the relevant tax authority, all as determined by applicable law. The potential winner is responsible for ensuring that they comply with all the applicable tax laws and filing requirements. If the potential winner fails to provide such documentation or comply with such laws, the prize may be forfeited and OpenStack may, in its sole discretion, select an alternate potential winner.
  10. GENERAL CONDITIONS: All federal, state and local laws and regulations apply. OpenStack reserves the right to disqualify any Entrant from the Contest if, in OpenStack’s sole discretion, it reasonably believes that the Entrant has attempted to undermine the legitimate operation of the Contest or Exam by cheating, deception, or other unfair playing practices or annoys, abuses, threatens or harasses any other entrants, or OpenStack. OpenStack retains all rights in the OpenStack products and services and entry into this Contest will in no case serve to transfer any OpenStack intellectual property rights to the Entrant.
  11. PRIVACY: Entrants agree and acknowledge that personal data submitted with an entry, including name and email address may be collected, processed, stored and otherwise used by OpenStack for the purposes of conducting and administering the Contest. All personal information that is collected from Entrants is subject to OpenStack’s Privacy Policy. Individuals submitting personal information in connection with the Contest have the right to request access, review, rectification or deletion of any personal data held by OpenStack in connection with the Contest by sending an email to OpenStack at [email protected] or writing to: Compliance Officer, OpenStack Foundation, P.O. Box 1903, Austin, TX 78767.
  12. PUBLICITY: By entering the Contest, Entrants agree to participate in any media or promotional activity resulting from the Contest as reasonably requested by OpenStack at OpenStack’s expense and agree and consent to use of their name and/or likeness by OpenStack. OpenStack will contact Entrants in advance of any OpenStack-sponsored media request.
  13. WARRANTY AND INDEMNITY: Entrants warrant that they will take the Exam in compliance with all terms, conditions, rules, and requirements set forth on the Exam Website and in the Exam Handbook. To the maximum extent permitted by law, Entrant indemnifies and agrees to keep indemnified Sponsor at all times from and against any liability, claims, demands, losses, damages, costs and expenses resulting from any act, default or omission of the Entrant and/or a breach of any warranty set forth herein. To the maximum extent permitted by law, Entrant agrees to defend, indemnify and hold harmless Sponsor from and against any and all claims, actions, suits or proceedings, as well as any and all losses, liabilities, damages, costs and expenses (including reasonable attorney’s fees) arising out of or accruing from: (i) any misrepresentation made by Entrant in connection with the Contest or Exam; (ii) any non-compliance by Entrant with these Rules; (iii) claims brought by persons or entities other than the parties to these Rules arising from or related to Entrant’s involvement with the Contest; (iv) acceptance, possession, misuse or use of any prize or participation in any Contest-related activity or participation in the Contest; (v) any malfunction or other problem with The OpenStack Blog or Exam Website in relation to the entry and participation in the Contest or completion of the Exam by Entrant; (vii) any error in the collection, processing, or retention of entry or voting information in relation to the entry and participation in the Contest or completion of the Exam by Entrant; or (viii) any typographical or other error in the printing, offering or announcement of any prize or winners in relation to the entry and participation in the Contest or completion of the Exam by Entrant.
  14. ELIMINATION: Any false information provided within the context of the Contest by Entrant concerning identity, email address, or non-compliance with these Rules or the like may result in the immediate elimination of the entrant from the Contest.
  15. INTERNET AND DISCLAIMER: OpenStack is not responsible for any malfunction of The OpenStack Blog or Exam Website or any late, incomplete, or mis-graded Exams due to system errors, hardware or software failures of any kind, lost or unavailable network connections, typographical or system/human errors and failures, technical malfunction(s) of any network, cable connections, satellite transmissions, servers or providers, or computer equipment, traffic congestion on the Internet or at The OpenStack Blog or Exam Website, or any combination thereof. OpenStack is not responsible for the policies, actions, or inactions of others, which might prevent Entrant from entering, participating, and/or claiming a prize in this Contest. Sponsor’s failure to enforce any term of these Rules will not constitute a waiver of that or any other provision. Sponsor reserves the right to disqualify Entrants who violate the Rules or interfere with this Contest in any manner. If an Entrant is disqualified, Sponsor reserves the right to terminate that Entrant’s eligibility to participate in the Contest.
  16. RIGHT TO CANCEL, MODIFY OR DISQUALIFY: If for any reason the Contest is not capable of running as planned, OpenStack reserves the right at its sole discretion to cancel, terminate, modify or suspend the Contest. OpenStack further reserves the right to disqualify any Entrant who tampers with the Exam process or any other part of the Contest, Exam, The OpenStack Blog, or Exam Website. Any attempt by an Entrant to deliberately damage any web site, including the The OpenStack Blog or Exam Website, or undermine the legitimate operation of the Contest or Exam is a violation of criminal and civil laws and should such an attempt be made, OpenStack reserves the right to seek damages from any such Entrant to the fullest extent of the applicable law.
  17. FORUM AND RECOURSE TO JUDICIAL PROCEDURES: These Rules shall be governed by, subject to, and construed in accordance with the laws of the State of Texas, United States of America, excluding all conflict of law rules. If any provision(s) of these Rules are held to be invalid or unenforceable, all remaining provisions hereof will remain in full force and effect. Exclusive venue for all disputes arising out of the Rules shall be brought in the state or federal courts of Travis County, Texas, USA and you agree not to bring an action in any other venue. To the extent permitted by law, the rights to litigate, seek injunctive relief or make any other recourse to judicial or any other procedure in case of disputes or claims resulting from or in connection with this Contest are hereby excluded, and Entrants expressly waive any and all such rights.
  18. WINNER’S LIST: The winner will be announced on The OpenStack Blog.  You may also request a list of winners after December 1, 2016 by sending a self-addressed stamped envelope to:

OpenStack Inc.

Attn: Contest Administrator

P.O. Box 1903

Austin, Texas 78767

(Residents of Vermont need not supply postage).


OpenStack turns 6: Community focuses on collaboration, growth

July 19 marks the 6th birthday of OpenStack, and we’re celebrating with the entire OpenStack community during July! OpenStack is an integration engine for diverse cloud technologies, fostering collaboration among emerging communities, and none of it would be possible without the quickly growing, global OpenStack community. There are now more than 54,000 community members, over 80 global user groups across 179 countries and more than 600 supporting companies. We think that deserves a worldwide celebration!


We’ve invited all our user groups to celebrate with us. This month, more than 45 OpenStack birthday parties will be thrown all over the world – celebrating the OpenStack community!  We encourage everyone to find a birthday party in your area and join your fellow community members to toast each other on another great year! Don’t forget to share your pictures and memories using #OpenStack6Bday.

Coming soon! Check out SuperuserTV’s live interviews with user group coordinators around the world about how they are planning to celebrate OpenStack’s 6th birthday!

Find a local celebration in your area:
Argentina – July 28
Atlanta, Georgia – July 21
Baden-Württemberg, Germany – TBD
Belgium – July 5
Brazil – July 16
Central Florida – July 19
Colorado – July 28
Durban – July 8
Frankfurt, Germany – TBD
Greece – July 13
Guadalajara, Mexico – July 15
Guatemala – July 21
Hong Kong – July 12
Italy – July 18
Japan – July 6-7
Kazan, Russia – July 28
Kentucky – July 28
Los Angeles – July 28
Malaysia – August 10
Manchester, UK – July 27
Minnesota – July 25
Morocco – July 29
Munich – July 19
Nairobi – July 16
New York City – July 13
Nigeria – July 8
North Carolina – July 21
Northern Virginia (NOVA) – July 28
Pakistan – July 27
Paris, France – July 5
Philadelphia, Pennsylvania – July 28
Romania – July 14
Philippines – July 22
San Francisco, California – July 14
South Korea – July 21
St. Louis, Missouri – July 21
Sweden – July 6
Switzerland – July 7
Thailand – July 27-28 (tutorials); July 29 (launch party)
Toulouse, France – July 4
Tunisia – July 28
Turkey – July 13
Vietnam – July 30
Virginia – July 18
Washington DC – July 18

OpenStack Developer Mailing List Digest June 18-24

Status of the OpenStack Port to Python 3

  • The only projects not ported to Python 3 yet:
    • Nova (76%)
    • Trove (42%)
    • Swift (0%)
  • Number of projects already ported:
    • 19 Oslo Libraries
    • 4 development tools
    • 22 OpenStack Clients
    • 6 OpenStack Libraries (os-brick, taskflow, etc)
    • 12 OpenStack services approved by the TC
    • 17 OpenStack services (not approved by the TC)
  • Raw total: 80 projects
  • Technical Committee member Doug Hellmann would like the community to set a goal for Ocata to have Python 3 functional tests running for all projects.
  • Dropping support for Python 2 would be nice, but is a big step and shouldn’t distract from the goals of getting the remaining things to support Python 3.
    • Keep in mind that OpenStack on PyPy still uses Python 2.7.
  • Full thread
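
As an illustration of the kind of change such a port involves (a hypothetical helper sketch, not code from any of the projects above), much of the work is making the bytes-versus-text boundary explicit so the same code behaves correctly on Python 2 and 3:

```python
def to_text(value, encoding='utf-8'):
    """Return a text string, decoding bytes if needed (safe on py2 and py3)."""
    if isinstance(value, bytes):
        return value.decode(encoding)
    return value


def to_bytes(value, encoding='utf-8'):
    """Return a byte string, encoding text if needed (safe on py2 and py3)."""
    if isinstance(value, bytes):
        return value
    return value.encode(encoding)


print(to_text(b'keystone'))   # prints: keystone
print(to_bytes('keystone'))   # on Python 3, prints: b'keystone'
```

Helpers like these (or the equivalents in the six library) let a project run its test suite under both interpreters while the port is in progress.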

Proposal: Architecture Working Group

  • OpenStack is a big system, and we have long debated what it actually is [1].
  • We want to be able to point to something and proudly tell people “this is what we designed and implemented.”
    • For individual projects this is possible. Neutron can talk about their agents and drivers. Nova can talk about conductors that handle communication with compute nodes.
    • When we talk about how they interact with each other, it’s a coincidental mash of de-facto standards and specs. They don’t help someone make decisions when refactoring or adding on to the system.
  • Oslo and cross-project initiatives have brought some peace and order to implementation, but not the design process.
    • New ideas start largely in the project where they are needed most, and often conflict with similar decisions and ideas in other projects.
    • When things do come to a head these things get done in a piecemeal fashion, where it’s half done here, 1/3 over there, ¼ there, ¾ over there.
    • Maybe nova-compute should be isolated from Nova with an API Nova, Cinder and Neutron can talk to.
    • Maybe we should make the scheduler cross-project aware and capable of scheduling more than just Nova.
    • Maybe experimental groups should look at how some of this functionality could perhaps be delegated to non-OpenStack projects.
  • Clint Byrum would like to propose the creation of an Architecture Working Group.
    • A place for architects to share their designs and gain support across projects to move forward and ratify architectural decisions.
    • The group would largely comprise senior engineers at the companies involved, and if done correctly it can help prioritize this work by advocating for people/fellow engineers to actually make it ‘real’.
  • How to get involved:
    • Bi-weekly IRC meeting at a time convenient for the most interested individuals.
    • #openstack-architecture channel
    • Collaborate on the openstack-specs repo.
    • Clint is working on a first draft to submit for review next week.
  • Full thread

Release Countdown for Week R-15, Jun 20-24

  • Focus:
    • Teams should be working on new feature development and bug fixes.
  • General Notes:
    • Members of the release team will be traveling next week. This will result in delays in releases. Plan accordingly.
  • Release Actions:
    • Official independent projects should file information about historical releases using the openstack/releases repository so the team pages are up to date.
    • Review stable/liberty and stable/mitaka branches for needed releases.
  • Important Dates:
    • Newton 2 milestone, July 14
    • Newton release schedule [2]

  • Full thread

Placement API WSGI Code – Let’s Just Use Flask

  • Maybe it’s better to use one of the WSGI frameworks used by the other OpenStack projects, instead of going in a completely new direction.
    • It will be easier for other OpenStack contributors to become familiar with the new placement API endpoint code if it uses Flask.
    • Flask has a very strong community and does stuff well that the OpenStack community could stop worrying about.
  • The amount of WSGI glue above Routes/Paste is pretty minimal in comparison to using a full web framework.
    • Template and session handling are things we don’t need. We’re a REST service, not a web application.
  • Which frameworks are in use in Mitaka:
    • Falcon: 4 projects
    • Custom + routes: 12 projects
    • Pecan: 12 projects
    • Flask: 2 projects
    • 1 project
  • Full thread
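
As a rough sketch of the counterargument (all names here are invented; this is not the actual placement code), a bare-WSGI REST endpoint really does need very little glue once templates and sessions are out of scope:

```python
import json


def placement_app(environ, start_response):
    """A tiny plain-WSGI REST endpoint: dispatch on method/path, emit JSON."""
    method = environ.get('REQUEST_METHOD', 'GET')
    path = environ.get('PATH_INFO', '/')
    if method == 'GET' and path == '/resource_providers':
        status, payload = '200 OK', {'resource_providers': []}
    else:
        status, payload = '404 Not Found', {'error': 'not found'}
    body = json.dumps(payload).encode('utf-8')
    start_response(status, [('Content-Type', 'application/json'),
                            ('Content-Length', str(len(body)))])
    return [body]
```

It can be served with the standard library’s `wsgiref.simple_server.make_server('', 8080, placement_app)`; a Flask version would replace the manual dispatch with an `@app.route` decorator, which is essentially the trade-off the thread debates.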

[1] –

[2] –

OpenStack Developer Mailing List Digest May 14 to June 17

SuccessBot Says

  • Qiming: Senlin has completed API migration from WADL.
  • Mugsie: Kiall Fixed the gate – development can now continue!!!
  • notmyname: exactly 6 years ago today, Swift was put into production
  • kiall: DNS API reference is live [1].
  • sdague: Nova legacy v2 api code removed [2].
  • HenryG: Last remnant of oslo incubator removed from Neutron [3].
  • dstanek: I was able to perform a roundtrip between keystone and using my new SAML2 middleware!
  • sdague: Nova now defaults to Glance v2 for image operations [4].
  • Ajaeger: First Project Specific Install Guide is published – congrats to the heat team!
  • Jeblair: There is no Jenkins, only Zuul.
  • All

Require A Level Playing Field for OpenStack Projects

  • Thierry Carrez proposes a new requirement [5] for OpenStack “official” projects.
  • An important characteristic of open collaboration grounds is that they need to be a level playing field: no specific organization can be given an unfair advantage.
    • Projects that are blessed as “official” project teams need to operate in a fair manner; otherwise they could essentially be a trojan horse for a given organization.
    • If in a given project, developers from one specific organization benefit from access specific knowledge or hardware, then the project should be rejected under the “open community” rule.
    • Projects like Cinder provide an interesting grey area, but as long as all drivers are in and there is a fully functional (and popular) open source implementation, no specific organization is likely to be seen as unfairly benefiting.
  • A Neutron plugin targeting a specific piece of networking hardware would likely give an unfair advantage to developers from the hardware’s manufacturer (having access to hardware for testing and being able to see and make changes to its proprietary source code).
  • Open source projects that don’t meet the open community requirement can still exist in the ecosystem (developed using Gerrit, openstack/* git repositories, and gate testing), but as unofficial projects.
  • Full thread

Add Option to Disable Some Strict Response Checking for Interoperability Testing

  • Nova introduced their API microversion change [6].
  • The QA team added strict API schema checking to Tempest to ensure no additional properties appear in Nova API responses [7][8].
    • In the last year, three vendors participating in the OpenStack powered trademark program were impacted by this [9].
  • DefCore working group determines guidelines for the OpenStack powered program.
    • Includes capabilities with associated functional tests from Tempest that must pass.
    • There is a balance of future direction of development with lagging indicators of deployments and user adoption.
  • A member of the working group, Chris Hoge, would like to implement a temporary waiver for the strict API checking requirements.
    • While this was discussed publicly in the developer community and took some time to implement, it still landed quickly and broke several existing deployments overnight.
    • It’s not viable for downstream deployers to use older versions of Tempest that don’t have this strict response checking, due to the TC resolution passed [10] advising DefCore to use Tempest as the single source of capability testing.
  • Proposal:
    • Short term:
      • There will be a blueprint and patch to Tempest that allows configuration of a grey-list of Nova APIs for which strict response checking on additional properties will be disabled.
      • Using this code will emit a deprecation warning.
      • This will be removed in 2017.01.
      • Vendors are required to submit the grey-list of APIs with additional response data, which would be published to their marketplace entry.
    • Long term:
      • Vendors will be expected to work with upstream to update the API returning additional data.
      • The waiver would no longer be allowed after the release of the 2017.01 guideline.
  • Former QA PTL Matthew Treinish feels this is a big step backwards.
    • Vendors who have implemented out-of-band extensions or injected additional things in responses believe that by doing so they’re interoperable. The API is not a place for vendor differentiation.
    • Being a user of several clouds, random data in the response makes it more difficult to write code against. Which are the vendor specific responses?
  • The alternatives to giving vendors more time in the market:
    • Having some vendors leave the OpenStack Powered program, unnecessarily weakening it.
    • Force DefCore to adopt non-upstream testing, either as a fork or an independent test suite.
  • If the new enforcement policies had been applied by adding new tests to Tempest, then DefCore could have added them using its processes over a period of time, and downstream deployers might not have had problems.
    • Instead, the behavior of a bunch of existing tests changed.
  • Tempest master today supports all currently supported stable branches.
    • Tags are made in the git repository when support for a release is added or dropped.
      • Branchless Tempest originally started back in the Icehouse release and was implemented to enforce that the API stays the same across release boundaries.
  • If DefCore wants the lowest common denominator for Kilo, Liberty, and Mitaka there’s a tag for that [11]. For Juno, Kilo, Liberty the tag would be [12].
  • Full thread
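
To make the dispute concrete, here is a toy model of what “strict response checking” means (not Tempest’s actual implementation; the schema and field names are invented): the expected schema closes the response over a fixed set of properties, and the proposed waiver amounts to switching that strictness off for grey-listed APIs:

```python
EXPECTED_KEYS = {'id', 'name', 'status'}  # hypothetical API response schema


def check_response(body, strict=True):
    """Return any undeclared keys; raise if strict checking is enabled."""
    extra = set(body) - EXPECTED_KEYS
    if strict and extra:
        raise ValueError('unexpected properties: %s' % sorted(extra))
    return extra


# A cloud injecting a vendor-specific field fails the strict check:
try:
    check_response({'id': '1', 'name': 'vm', 'vendor:extra': 'x'})
except ValueError as exc:
    print(exc)  # prints: unexpected properties: ['vendor:extra']

# Under the proposed grey-list waiver the same response passes:
print(check_response({'id': '1', 'name': 'vm', 'vendor:extra': 'x'},
                     strict=False))
```

Treinish’s objection maps directly onto this sketch: once `strict=False` is permitted, a client can no longer tell which fields are part of the interoperable API and which are vendor-specific.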

There Is No Jenkins, Only Zuul

  • Since the inception of OpenStack, we have used Jenkins to perform testing and artifact building.
    • When we only had two git repositories, we had one Jenkins master and a few slaves. This was easy to maintain.
    • Things have grown significantly with 1,200 git repositories, 8,000 jobs spread across 8 Jenkins masters and 800 dynamic slave nodes.
  • Jenkins Job Builder [13] was created to generate these 8,000 jobs from templated YAML.
  • Zuul [14] was created to drive project automation by directing our testing, running tens of thousands of jobs each day, responding to:
    • Code reviews
    • Stacking potential changes to be tested together.
  • Zuul version 3 has major changes:
    • Easier to run jobs in multi-node environments
    • Easy to manage large number of jobs
    • Job variations
    • Support in-tree job configuration
    • Ability to define jobs using Ansible
  • While version 3 is still in development, it is already capable of running our jobs entirely.
  • As of June 16th, we have turned off our last Jenkins master and all of our automation is being run by Zuul.
    • Jenkins Job Builder has contributors beyond OpenStack and will continue to be maintained by them.
  • Full thread

Languages vs. Scope of “OpenStack”

  • Where does OpenStack stop, and where does the wider open source community start? Two options:
    • If OpenStack is purely an “integration engine” to lower-level technologies (e.g. hypervisors, databases, block storage) the scope is limited and Python should be plenty and we don’t need to fragment our community.
    • If OpenStack is “whatever it takes to reach our mission”, then yes we need to add one language to cover lower-level/native optimization.
  • Swift PTL John Dickinson mentions defining the scope of OpenStack projects does not define the languages needed to implement them. The considerations are orthogonal.
    • OpenStack is defined as whatever it takes to fulfill the mission statement.
    • Defining “lower level” is very hard. Since the Nova API listens on public network interfaces and coordinates with various services in a cluster, is it “lower level” enough to consider optimizations?
  • Another approach is product-centric: “Lower-level pieces are OpenStack dependencies, rather than OpenStack itself.”
    • Not governed by the TC, and it can use any language and tool deemed necessary.
    • There are a large number of open source projects and libraries that OpenStack needs to fulfill its mission that are not “OpenStack”: Python, MySQL, KVM, Ceph, OpenvSwitch.
  • Do we want to be in the business of building data plane services that will run into Python limitations?
    • Control plane services are very unlikely to ever hit a scaling concern where rewriting in another language is needed.
  • Swift hit limitations in Python first because of the maturity of the project and they are now focused on this kind of optimization.
    • Glance (partially a data plane service) did hit this limit, and it was mitigated by folks using Ceph and exposing that directly to Nova. So now Glance only cares about location and metadata, while dependencies like Ceph handle the data plane.
  • The resolution for the Go programming language was discussed in previous Technical Committee meetings and was not passed [14]. John Dickinson and others do plan to carry another effort forward for Swift to have an exception for usage of the language.
  • Full thread


Technical Committee Highlights June 13, 2016

It has been a while since our last highlight post so this one is full of updates.

New release tag: cycle-trailing

This is a new addition to the set of tags describing the release models. This tag allows specific projects to do their releases after the main OpenStack release has been cut, which is useful for projects that need to wait for the “final” OpenStack release to be out. Some examples of these projects are Kolla, TripleO, Ansible, etc.

Reorganizing cross-project work coordination

The cross project team is the reference team when it comes to reviewing cross project specs. This resolution grants the cross project team approval rights on cross-project specs and therefore the ability to merge such specs without the Technical Committee’s intervention. This is a great step forward on the TC’s mission of enabling the community to be as autonomous as possible. This resolution recognizes reviewers of openstack-specs as a team.

Project additions and removals

– Addition of OpenStack Salt: This project brings in SaltStack formulas for installing and operating OpenStack cloud deployments. The main focus of the project is to set up development, testing, and production OpenStack deployments in an easy, scalable and predictable way.

– Addition of OpenStack Vitrage: This project aims to organize, analyze and visualize OpenStack alarms & events, yield insights regarding the root cause of problems and deduce their existence before they are directly detected.

– Addition of OpenStack Watcher: Watcher’s goal is to provide a flexible and scalable resource optimization service for multi-tenant OpenStack-based clouds.

– Removal of OpenStack Cue: Cue’s project team activity has dropped below what is expected of an official OpenStack project team. It was therefore removed from the list of official projects.

Recommendation on location of tests for DefCore verification

A new resolution has been merged in which it’s recommended for the DefCore team to use Tempest’s repository as the central repository for verification tests. During the summit, 2 different options were discussed as possible recommendations:

  • Use the tests within the Tempest git repository by themselves.
  • Add to those Tempest tests by allowing projects to host tests in their tree using Tempest’s plugin feature.

By recommending use of the Tempest repository, the community favors centralization of these tests, which will improve collaboration on DefCore matters as well as the consistency of the tests used for API verification.
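Tempest’s plugin feature, referenced in the second option, registers external tests through a setuptools entry point. A minimal sketch of what a project’s setup.cfg might declare (the project, module, and class names here are hypothetical):

```ini
[entry_points]
tempest.test_plugins =
    my_service_tests = my_service.tests.tempest.plugin:MyServiceTempestPlugin
```

With this in place, Tempest discovers and runs the project’s in-tree tests alongside its own.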

Mission Statements Updates

On one hand, Magnum has narrowed its mission statement after discussing it at the Austin summit. The team has decided Magnum should focus on managing container orchestration engines (COEs) rather than also managing the container lifecycle. On the other hand, Kuryr has expanded its mission statement to also include management of storage abstractions for containers.

Expanding technology choices in OpenStack projects

On the face of it, the request sounds simple. “Can we use golang in OpenStack?” asked of the TC in this governance patch review.

It’s a yes or no question. It sets us up for black and white definitions, even though the cascading ramifications are many for either answer.

Yes means less expertise sharing between projects as well as some isolation. Our hope is that certain technology decisions are made in the best interest of our community and The OpenStack way. We would trust projects to have a plan for all the operators and users who are affected by a technology choice. A Yes means trusting all our projects (over fifty-five currently) not to lose time by chasing the latest or doing useless rewrites, and believing that freedom of technology choice is more important than sharing common knowledge and expertise. For some, it means we are evolving and innovating as technologists.

A No vote here means that if you want to develop with another language, you should form your new language community outside of the OpenStack one. Even with a No vote, projects can still use our development infrastructure such as Mailing Lists, Gerrit, Zuul, and so on. A No vote on a language choice means that team’s deliverable is simply outside of the Technical Committee governance oversight, and not handled by our cross-project teams such as release, doc, quality. For the good of your user base, you should still think through all the technology ramifications that a Yes vote would require, but your team doesn’t need to work under TC oversight.

What about getting from No to Yes? It could mean that we would like you to remain in the OpenStack community, but to keep the parts that are not built with the entire community in mind as plugins.

We’ve discussed additional grey area answers. Here is the spectrum:

  • Yes, without limits.
  • Yes, but within limits outlined in our governance.
  • No, remember that it’s perfectly fine to have external dependencies written in other languages.
  • No, projects that don’t work within our technical standards don’t leverage the shared resources OpenStack offers so they can work outside of OpenStack.

We have dismissed the outer edge descriptions for Yes and No. We continued to discuss the inner Yes and inner No descriptions this week, with none of the options being really satisfactory. After lots of discussion, we came around to a No answer, abandoning the patch, while seeking input for getting to yes within limits.

Basically, our answer is about focusing on what we have in common, what defines us. It is in-line with the big-tent approach of defining an “OpenStack project” as being developed by a coherent community using the OpenStack Way. It’s about sharing more things. We tolerate and even embrace difference where it is needed, but that doesn’t mean that the project has to live within the tent. It can be a friendly neighbour rather than being inside and breaking the tent into smaller sub-tents.

FAQ: Evolving the OpenStack Design Summit

As a result of community discussion, the OpenStack Foundation is evolving the format of the events it produces for the community starting in 2017. The proposal is to split the current Design Summit, which is held every six months as part of the main OpenStack Summit, into two parts: a “Forum” at the main Summit for cross-community discussions and user input (we call this the “what” discussions), and a separate “Project Teams Gathering” event for project team members to meet and get things done (the “how” discussions and sprinting). The intention is to alleviate the need for a separate mid-cycle, so development teams would continue to meet four times per year, twice with the community at large and twice in a smaller, more focused environment. The release cycle would also shift to create more space between the release and Summit. The change triggered a lot of fears and questions — the intent of this FAQ is to try to address them.

Note that we held a community town hall to explain the evolving format of the OpenStack Design Summit. You can watch this recording to learn more about it.

Q: How is the change helping upstream developers?


A: During the Summit week, upstream developers have a lot of different goals. We leverage the Summit to communicate new things (give presentations), learn new things (attend presentations), get feedback from users and operators over our last release, gather pain points and priorities for upcoming development, propose changes and see what the community thinks of them, recruit and on-board new team members, have essential cross-project discussions, meet with our existing project team members, kickstart the work on the new cycle, and get things done. There is just not enough time in 4 or 5 days to do all of that, so we usually drop half of those goals. Most will skip attending presentations. Some will abandon the idea of presenting. Some will drop cross-project discussions, resulting in them not having the critical mass of representation to actually serve their purpose. Some will drop out of their project team meeting to run somewhere else. The time conflicts make us jump between sessions, resulting in us being generally unavailable for listening to feedback, pain points, or newcomers. By the end of the week we are so tired we can’t get anything done. We need to free up time during the week. There are goals that can only be reached in the Summit setting, where all of our community is represented — we should keep those goals in the Summit week. There are goals that are better reached in a distraction-free setting — we should organize a separate event for them.

Q: What is the “Forum” ?


A: “Forum” is the codename for the part of the Design Summit (Ops+Devs) that would still happen at the main Summit event. It will primarily be focused on strategic discussions and planning for the next release (the “what”), essentially the start of the next release cycle even though development will not begin for another 3 months. We should still take advantage of having all of our community (Devs, Ops, End users…) represented to hold cross-community discussions there. That means getting feedback from users and operators over specific projects in our last release, gathering pain points and priorities for upcoming development, proposing changes and seeing what the community thinks of them, and recruiting and on-boarding new team members. We’d like to do that in a neutral space (rather than have separate “Ops” and “Dev” days) so that the discussion is not influenced by who owns the session. This event would happen at least two months after the previous release, to give users time to test and bring valuable feedback.

Q: What is the “Project Teams Gathering” ?


A: “Project Teams Gathering” is the codename for the part of the Design Summit that will now happen as a separate event. It will primarily provide space for project teams to make implementation decisions and start development work (the “how”). This is where we’d have essential cross-project discussions, meet with our existing project team members, generate shared understanding, kickstart the development work on the new cycle, and generally get things done. OpenStack project teams would be given separate rooms to meet for one or more days, in a loose format (no 40-min slots). If you self-identify as a member of a specific OpenStack project team, you should definitely join. If you are not part of a specific project team (or can’t pick one team), you could still come but your experience of the event would likely not be optimal, since the goal of the attendees at this event is to get things done, not listen to feedback or engage with newcomers. This event would happen around the previous release time, when developers are ready to fully switch development work to the new cycle.

Q: How is the change helping OpenStack as a whole?


A: Putting the larger Summit event further away from the last release should dramatically improve the feedback loop. Currently, calling for feedback at the Summit is not working: users haven’t had time to use the last release at all, so most of the feedback we collect is based on the 7-month old previous release. It is also the wrong timing to push for new features: we are already well into the new cycle and it’s too late to add new priorities to the mix. The new position of the “Forum” event with respect to the development cycle should make it late enough to get feedback from the previous release and early enough to influence what gets done on the next cycle. By freeing up developers’ time during the Summit week, we also expect to improve the Summit experience for all attendees: developers will be more available to engage and listen. The technical content at the conference will also benefit from having more upstream developers available to give talks and participate in panels. Finally, placing the Summit further away from the release should help vendors prepare and announce products based on the latest release, making the Summit marketplace more attractive and relevant.


Q: When will the change happen ?


A: Summits are booked through 2017 already, so we can’t really move them anytime soon. Instead, we propose to stagger the release cycle. There are actually 7 months between Barcelona and Boston, so we have an opportunity there to stagger the cycle with limited impact. The idea would be to do a 5-month release cycle (between October and February), place our first project teams gathering end-of-February, then go back to 6-month cycles (March-August) and have the Boston Summit (and Forum) in the middle of it (May). So the change would kick in after Barcelona, in 2017. That gives us time to research venues and refine the new event format.

Q: What about mid-cycles ?


A: Mid-cycle sprints were organized separately by project teams as a way to gather team members and get things done. They grew in popularity as the distractions at the main Summit increased and it became hard for project teams to get together, build social bonds and generally be productive at the Design Summit. We hope that teams will get back that lost productivity and social bonding at the Project Teams Gathering, eliminating the need for separate team-specific sprints. 

Q: This Project Teams Gathering thing is likely to be a huge event too. How am I expected to be productive there? Or to be able to build social bonds with my small team?


A: Project Teams Gatherings are much smaller events compared to Summits (think 400-500 people rather than 7500). Project teams are placed in separate rooms, much like a co-located midcycle sprint. The only moment where everyone would meet would be around lunch. There would be no evening parties: project teams would be encouraged to organize separate team dinners and build strong social bonds.

Q: Does that new format actually help with cross-project work?

A: Cross-project work was unfortunately one of the things a lot of attendees dropped as they struggled with all the things they had to do during the Summit week. Cross-project workshops ended up being less and less productive, especially in getting to decisions or work produced. Mid-cycle sprints ended up being where the work could be done, but since they were organized separately it was very costly for a member of a cross-project team (infrastructure, docs, QA, release management…) to attend them all. We basically set up our events in a way that made cross-project work prohibitively expensive, and then wondered why we had so much trouble recruiting people to do it. The new format ensures that we have a place to actually do cross-project work, without anything running against it, at the Project Teams Gathering. It dramatically reduces the number of places a Documentation person (for example) needs to travel to get some work done in-person with project team members. It gives project team members in vertical teams an option to break out of their silo and join such a cross-project team. It allows us to dedicate separate rooms to specific cross-project initiatives, beyond existing horizontal teams, to get specific cross-project work done.

Q: Are devs still needed at the main Summit?


A: Upstream developers are still very much needed at the main Summit. The Summit is (and always was) where the feedback loop happens. All project teams need to be represented there, to engage in planning, collect the feedback on their project, participate in cross-community discussions, reach out to new people and on-board new developers. We also very much want to have developers give presentations at the conference portion of the Summit (we actually expect that more of them will have free time to present at the conference, and that the technical content at the Summit will therefore improve). So yes, developers are still very much needed at the main Summit.

Q: My project team falls apart if the whole team doesn’t meet in person every 3 months. We used to do that at the Design Summit and at our separate mid-cycle project team meeting. I fear we’ll lose our ability to all get together every 3 months.


A: As mentioned earlier, we hope the Project Teams Gathering to be a lot more productive than the current Design Summit, reducing the need for mid-cycle sprints. That said, if you really still need to organize a separate mid-cycle sprint, you should definitely feel free to do so. We plan to provide space at the main Summit event so that you can hold mid-cycle sprints there and take advantage of the critical mass of people already around. If you decide to host a mid-cycle sprint, you should communicate that your team mid-cycle will be co-located with the Summit and that team member attendance is strongly encouraged.

Q: We are a small team. We don’t do mid-cycles currently. It feels like that with your change, we’ll have to travel to two events per cycle instead of one.


A: You need to decide if you feel the need to get the team all together to get some work done. If you do, you should participate (as a team) in the Project Teams Gathering. If you don’t, your team should skip it. The PTL and whoever is interested in cross-project work in your team should still definitely come to the Project Teams Gathering, but you don’t need to get every single team member there, as you would not have a team room there. In all cases, your project wants to have some developers present at the Summit to engage with the rest of the community.

Q: The project I’m involved with is mostly driven by a single vendor, most of us work from the same office. I’m not sure it makes sense for all of us to travel to a remote location to get some work done !


A: You are right, it doesn’t. We’ll likely not provide specific space at the Project Teams Gathering for single-vendor project teams. The PTL (and whoever else is interested) should probably still come to the Project Teams Gathering to participate in cross-project work. And you should also definitely come to the Summit to engage with other organizations and contributors and increase your affiliation diversity to the point where you can take advantage of the Project Teams Gathering.

Q: I’m a translator, should I come to the Project Teams Gathering?


A: The I18n team is of course free to meet at the Project Teams Gathering. However, given the nature of the team (large number of members, geographically-dispersed, coming from all over our community, ops, devs, users), it probably makes sense to leverage the Summit to get translators together instead. The Summit constantly reaches out to new communities and countries, while the Project Teams Gathering is likely to focus on major developer areas. We’ll likely get better outreach results by holding I18n sessions or workshops at the “Forum” instead.

Q: A lot of people attend the current Design Summit to get a peek at how the sausage is made, which potentially results in getting involved. Doesn’t the new format jeopardize that on-boarding?


A: It is true that the Design Summit was an essential piece in showing how open design worked to the rest of the world. However that was always done at the expense of existing project team members’ productivity. Half the time in a 40-min session would be spent summarizing the history of the topic to newcomers. Lively discussions would be interrupted by people in the back asking that participants use the mic. We tried to separate fishbowls and workrooms at the Design Summit, to separate discussion/feedback sessions from team-members work sessions. That worked for a time, but people started working around it, making some work rooms look like overcrowded fishbowl rooms. In the end that made for a miserable experience for everyone involved and created a lot of community tension. In the new format, the “Forum” sessions will still allow people to witness open design at work, and since those are specifically set up as listening sessions (rather than “get things done” sessions), we’ll take time to engage and listen. We’ll free up time for specific on-boarding and education activities. Fewer scheduling conflicts during the week means we won’t be always running to our next sessions and will likely be more available to reach out to others in the hallway track.

Q: What about the Ops midcycle meetup?


A: The Ops meetups are still happening, and for the next year or two probably won’t change much at all. In May, the “Ops Meetups Team” was started to answer the questions about the future of the meetups, and also actively organize the upcoming ones. Part of that team’s definition: “Keeping the spirit of the ops meetup alive” – the meetups are run by ops, for ops and will continue to be. If you have interest, join the team and talk about the number and regional location of the meetups, as well as their content.

Q: What about ATC passes for the Summit?


A: The OpenStack Foundation gave discounted passes to a subset of upstream contributors (not all ATCs) who contributed in the last six months, so that they could more easily attend the Summit. We’ll likely change the model since we would be funding a second event, but will focus on minimizing costs for people who have to travel to both the Summit and the Project Teams Gathering. The initial proposal is to charge a minimal fee for the Project Teams Gathering (to better gauge attendance and help keep sponsorship presence to a minimum), and then anyone who was physically present at the Project Teams Gathering would receive a discount code to attend the next Summit. Something similar is also being looked into for contributors represented by the User Committee (eg. ops). At the same time, we’ll likely beef up the Travel Support Program so that we can get all the needed people at the right events.


OpenStack Developer Mailing List Digest May 7-13

SuccessBot Says

  • Pabelanger: bare-precise has been replaced by ubuntu-precise. Long live DIB
  • bknudson: The Keystone CLI is finally gone. Long live openstack CLI.
  • Jrichli: swift just merged a large effort that started over a year ago that will facilitate new capabilities – like encryption
  • All

Release Count Down for Week R-20, May 16-20

  • Focus
    • Teams should have published summaries from summit sessions to the openstack-dev mailing list.
    • Spec writing
    • Review priority features
  • General notes
    • Release announcement emails will be tagged with ‘new’ instead of ‘release’.
    • Release cycle model tags now say explicitly that the release team manages releases.
  • Release actions
    • Release liaisons should add their name and contact information to this list [1].
    • New liaisons should understand release instructions [2].
    • Project teams that want to change their release model should do so before the first milestone in R-18.
  • Important dates
    • Newton 1 milestone: R-18 June 2
    • Newton release schedule [3]

Collecting Our Wiki Use Cases

  • From the beginning, the community has used a wiki [4] as the default community information publication platform.
  • There’s a struggle with:
    • Keeping things up-to-date.
    • Preventing vandalism.
    • Old processes.
    • Projects that no longer exist.
  • This outdated information can make the wiki confusing to use, especially for newcomers, who are often directed to it by search engine results.
  • Various efforts have happened to push information out of the wiki to proper documentation guides like:
    • Infrastructure guide [5]
    • Project team guide [6]
  • Peer reviewed reference websites:
  • There are a lot of use cases for which a wiki is a good solution, and we’ll likely need a lightweight publication platform like the wiki to cover them.
  • If you use the wiki as part of your OpenStack work, make sure it’s captured in this etherpad [9].
  • Full thread

Supporting Go (continued)

  • Continuing from previous Dev Digest [10].
  • Before Go 1.5 (which introduced -buildmode=shared), Go did not support shared libraries. As a consequence, when a library is upgraded, the release team has to trigger a rebuild of each and every reverse dependency.
  • In Swift’s case for looking at Go, it’s hard to write a network service in Python that shuffles data between the network and a block device and effectively use all the hardware available.
    • Fork()’ing child processes using cooperative concurrency via eventlet has worked well, but managing all async operations across many cores and many drives is really hard. There’s no efficient interface for this in Python. We’re talking about efficient tools for the job at hand.
    • Eventlet, asyncio or anything else single-threaded will have the same problem: filesystem syscalls can take a long time and block the calling thread. For example:
      • Call select()/epoll() to wait for something to happen with many file descriptors.
      • For each ready file descriptor, read it if the socket is readable; otherwise the kernel returns EWOULDBLOCK, and move on to the next file descriptor.
  • Designate team explains their reasons for Go:
    • MiniDNS is a component that, due to the way it works, is difficult to make major improvements to.
    • The component takes data and sends a zone transfer every time a record set gets updated. That is a full (AXFR) zone transfer where every record in a zone gets sent to each DNS server that end users can hit.
      • There is a DNS standard for incremental change, but it’s complex to implement, and can often end up reverting to a full zone transfer.
    • Ns[1-6] may be tens or hundreds of servers behind anycast IPs and load balancers.
    • Internal or external zones can be quite large. Think 200-300 MB.
    • A zone can have high traffic where a record is added/removed for each boot/destroy.
    • The Designate team is small, and after looking at the options and judging the amount of developer hours available, they decided on a different language.
  • Looking at Designate’s implementation, there are some low-hanging fruit improvements that can be made:
    • Stop spawning a thread per request.
    • Stop instantiating Oslo config object per request.
    • Avoid 3 round trips to the database on every request. The majority of the request time here is not spent in Python. This data should be trivial to cache, since Designate knows when to invalidate the cached data.
      • In a real-world use case, there could still be a cache miss due to the shuffle order of multiple miniDNS servers.
  • The Designate team saw a 10x improvement for a 2000 record AXFR (without caching). Caching would probably speed up the Go implementation as well.
  • Go historically has poor performance with multiple cores [11].
    • The main advantage of the language could be its CSP concurrency model.
    • Twisted does this very well, but we as a community have consistently supported eventlet. Eventlet has a threaded programming model, which is poorly suited to Swift’s case.
    • PyPy got a 40% performance improvement over CPython in a benchmark of Twisted’s DNS component 6 years ago [12].
  • Right now our stack already has dependencies on C, Python, Erlang, Java, Shell, etc.
  • End users emphatically do not care about the language API servers were written in. They want stability, performance and features.
  • The infrastructure-related issues with Go for reliable builds, packaging, etc. are being figured out [13].
  • Swift has tested running under PyPy with some conclusions:
    • Assuming production-ready stability of PyPy and OpenStack, everyone should use PyPy over CPython.
      • It’s just simply faster.
      • There are some garbage collector related issues to still work out in Swift’s usage.
      • There are a few patches that do a better job of socket handling in Swift that runs better under PyPy.
    • PyPy only helps when you’ve got a CPU-constrained environment.
    • The GoLang targets in Swift are related to effective thread management, syscalls, and IO.
    • See a talk from the Austin Conference about this work [14].
  • Full thread
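The select()/epoll() loop described in the Swift discussion above can be sketched with Python’s standard selectors module. This is a minimal, illustrative event loop (names are mine, not Swift’s), and it shows why the pattern works for sockets but not for disks:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# Listening socket registered with the poller, as in the loop described above.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

def poll_once(timeout=1.0):
    """One iteration: wait for ready file descriptors, then service each one.

    Socket reads never block here; a read() on a regular file would block the
    whole thread, which is exactly the problem described in the thread above.
    """
    received = []
    for key, _ in sel.select(timeout):
        sock = key.fileobj
        if sock is server:
            conn, _ = server.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
        else:
            try:
                data = sock.recv(4096)   # ready per the poll, returns at once
            except BlockingIOError:      # kernel said EWOULDBLOCK: move on
                continue
            if data:
                received.append(data)
            else:
                sel.unregister(sock)
                sock.close()
    return received
```

Filesystem syscalls have no equivalent readiness notification on Linux, so a single-threaded loop like this stalls whenever it touches a slow disk, whether it is built on eventlet, asyncio, or raw epoll.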
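The caching suggestion in the Designate discussion above can be sketched as a small read-through cache with explicit invalidation. This is a hypothetical sketch, not Designate’s actual code; `fetch` stands in for the database round trips a miniDNS-style server would otherwise make on every request:

```python
class ZoneCache:
    """Read-through cache for zone data, with explicit invalidation."""

    def __init__(self, fetch):
        self._fetch = fetch   # zone_name -> records (the DB round trip)
        self._cache = {}

    def get(self, zone_name):
        # Only hit the database on a miss.
        if zone_name not in self._cache:
            self._cache[zone_name] = self._fetch(zone_name)
        return self._cache[zone_name]

    def invalidate(self, zone_name):
        # Called when a record set changes -- the moment the service
        # knows the cached data is stale.
        self._cache.pop(zone_name, None)
```

As the thread notes, with several miniDNS servers behind a load balancer a request can still land on a server with a cold cache, so the miss path has to stay cheap.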


OpenStack Developer Mailing List Digest April 23 – May 6

Success Bot Says

  • Sdague: nova-network is deprecated [1]
  • Ajaeger: OpenStack content on Transifex has been removed. Zanata has proven to be a stable platform for all translators, so Transifex is not needed anymore.
  • All

Backwards Compatibility Follow-up

  • Agreements from recent backwards compatibility for clients and libraries session:
    • Clients need to talk to all versions of OpenStack clouds.
    • Oslo libraries already do need to do backwards compatibility.
    • Some fraction of our deploys, between 1% and 50%, are trying to do in-place upgrades where, for example, Nova is upgraded first and Neutron later. Neutron then has to work with the upgraded libraries from the Nova upgrade.
  • Should we support in-place upgrades? If we do, we need at least one or more versions of compatibility, where Mitaka Nova can run with Newton Oslo and client libraries.
    • If we don’t support in-place upgrades, deployment methods must be architected to avoid ever encountering a situation where a client or one of N services is upgraded on a single Python environment. All clients and services on a single Python environment must be upgraded together, or not at all.
  • If we decide to support in-place upgrades, we need to figure out how to test that effectively; it’s a linear growth with the number of stable releases we choose to support.
  • If we decide not to, we have no further requirement to have any cross-over compatibility between OpenStack releases.
  • We still have to be backwards compatible on individual changes.
  • Full thread

Installation Guide Plans for Newton

  • Continuing from a previous Dev Digest [2], the big tent is growing and our documentation team would like projects to maintain their own installation documentation. This should be done while still providing the valid, working installation information and the consistency the team strives for.
  • The installation guide team held a session at the summit that was packed and walked away with some solid goals to achieve for Newton.
  • Two issues being discussed:
    • What to do with the existing install guide.
    • Create a way for projects to write installation documentation in their own repository.
  • All guides will be rendered from individual repositories and appear in
  • The Documentation team has recommendations for projects writing their install guides:
    • Build on existing install guide architecture, so there is no reinventing the wheel.
    • Follow documentation conventions [3].
    • Use the same theme called openstackdocstheme.
    • Use the same distributions as the install guide does. Installation from source is an alternative.
    • Guides should be versioned.
    • RST is the preferred documentation format. RST is also easy for translations.
    • Common naming scheme: “X Service Install Guide” – where X is your service name.
  • The chosen URL format is
  • Plenty of work items to follow [4] and volunteers are welcome!
  • Full thread

Proposed Revision To Magnum’s Mission

  • From a summit discussion, there was a proposed revision to Magnum’s mission statement [5].
  • The idea is to narrow the scope of Magnum to allow the team to focus on making popular container orchestration engine (COE) software work great with OpenStack, allowing users to set up fleets of cloud capacity managed by COEs such as Swarm, Kubernetes, Mesos, etc.
  • Deprecate /containers resource from Magnum’s API. Any new project may take on the goal of creating an API service that abstracts one or more COE’s.
  • Full thread

Supporting the Go Programming Language

  • The Swift community has a git branch feature/hummingbird that contains some parts of Swift reimplemented in Go. [6]
  • The goal is to have a reasonably ready-to-merge feature branch by the Barcelona summit. Shortly after the summit, the plan is to merge the Go code into master.
  • An amended Technical Committee resolution will follow to suggest Go as a supported language in OpenStack projects [7].
  • Some Technical Committee members have expressed wanting to see technical benefits that outweigh the community fragmentation and increase in infrastructure tasks that result from adding that language.
  • Some open questions:
    • How do we run unit tests?
    • How do we provide code coverage?
    • How do we manage dependencies?
    • How do we build source packages?
    • Should we build binary packages in some format?
    • How to manage in tree documentation?
    • How do we handle log and message string translations?
    • How will DevStack install the project as part of a gate job?
  • Designate is also looking into moving a single component into Go.
    • It would be good to have two cases to help avoid baking any project specific assumptions into testing and building interfaces.
  • Full thread
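For reference, several of the open questions above have conventional answers in Go’s standard toolchain. This is a sketch of the stock commands, not a statement of how OpenStack infrastructure would wire them into gate jobs:

```shell
go test ./...                       # run unit tests for every package in the tree
go test -coverprofile=cover.out .   # collect code coverage for a package
go tool cover -func=cover.out       # per-function coverage summary
go build ./...                      # compile everything; produces static binaries
gofmt -l .                          # list files not in standard format
```

The harder questions — dependency management, source and binary packaging, translations, and DevStack integration — have no single stock answer and are the ones the thread leaves open.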

Release Countdown for Week R-21, May 9-13

  • Focus
    • Teams should be focusing on wrapping up incomplete work left over from the end of the Mitaka cycle.
    • Announce plans from the summit.
    • Completing specs and blueprints.
  • General Notes
    • Project teams that want to change their release model tag should do so before the Newton-1 milestone. This can be done by submitting a patch to the governance repository’s projects.yaml file.
    • Release announcement emails are being proposed to have their tag switched from “release” to “newrel” [8].
  • Release Actions
    • Release liaisons should add their name and contact information to this list [9].
    • Release liaisons should have their IRC clients join #openstack-release.
  • Important Dates
    • Newton 1 Milestone: R-18 June 2nd
    • Newton release schedule [10]
  • Full thread
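A release model change, as described above, is just a tag change on the deliverable in the governance repository’s projects.yaml. The excerpt below is a sketch only: the project and deliverable names are made up, and the exact schema should be checked against the governance repository itself.

```yaml
# openstack/governance: reference/projects.yaml (illustrative excerpt)
example-project:
  deliverables:
    example-service:
      repos:
        - openstack/example-service
      tags:
        - release:cycle-with-intermediary   # was: release:cycle-with-milestones
```

The patch is then proposed to the governance repository through the normal Gerrit review workflow, before the Newton-1 milestone.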

Discussion of Image Building in Trove

  • A common question the Trove team receives from new users is how and where to get guest images to experiment with Trove.
    • Documentation exists in multiple places for this today [11][12], but things can still be improved.
  • Trove has a spec proposal [13] for using libguestfs approach to building images instead of using the current diskimage-builder (DIB).
    • All alternatives should be equivalent and interchangeable.
    • Trove already has DIB elements for all supported databases, but these elements are not packaged for customer use. Doing so would be a small effort: providing an element that installs the guest agent software from a fixed location.
    • We should understand the deficiencies, if any, in DIB before switching tool chains. This can be based on Trove’s and Sahara’s experiences.
  • The OpenStack Infrastructure team has been using DIB successfully for a while as it is a flexible tool.
    • By default Nova disables file injection [14]
    • DevStack doesn’t allow you to enable Nova file injection, and hard sets it off [15].
    • Allows bootstrapping with yum or debootstrap.
    • Lets you pick the filesystem for an existing image.
  • Let’s fix the problems with DIB that Trove is having and avoid reinventing the wheel.
  • What are the problems with DIB, and how do they prevent Trove/Sahara users from building images today?
    • Libguestfs manipulates images in a clean helper VM created by libguestfs in a predictable way.
      • Isolation is something DIB gives up in order to provide speed/lower resource usage.
    • In-place image manipulation can occur (package installs, configuration declarations) without uncompressing or recompressing an entire image.
      • It’s trivial to make a DIB element which modifies an existing image in place.
    • DIB scripts’ configuration settings, passed in freeform environment variables, can be difficult to understand and document for new users. Libguestfs demands more formal parameter passing.
    • Ease of “just give me an image. I don’t care about twiddling knobs”.
      • OpenStack Infra team already has a wrapper for this [16].
  • Sahara has support for several image generation-related cases:
    • Packing an image pre-cluster spawn in Nova.
    • Building clusters from a “clean” operating system image post-Nova spawn.
    • Validating images after Nova spawn.
  • In a Sahara summit session, a plan was discussed to use libguestfs rather than DIB, with the intent to define a linear, idempotent set of steps to package images for any plugin.
  • Having two sets of image building code to maintain would be a huge downside.
  • What’s stopping us, a few releases down the line, from deciding that libguestfs doesn’t perform well and settling on yet another tool? Since DIB is an OpenStack project, Trove should consider supporting a standard way of building images.
  • The Trove summit discussion resulted in agreement to advance the image builder by making it easier to build guest images leveraging DIB.
    • Project repository proposals have been made [17][18]
  • Full thread


OpenStack Developer Mailing List Digest April 9-22

Success Bot Says

  • Clarkb: infra team redeployed Gerrit on a new larger server. Should serve reviews with fewer 500 errors.
  • danpb: wooohoooo, finally booted a real VM using nova + os-vif + openvswitch + privsep
  • neiljerram: Neutron routed networks spec was merged today; great job Carl + everyone else who contributed!
  • Sigmavirus24: Hacking 0.11.0 is the first release of the project in over a year.
  • Stevemar: dtroyer just released openstackclient 2.4.0 – now with more network commands \o/
  • odyssey4me: OpenStack-Ansible Mitaka 13.0.1 has been released!
  • All

One Platform – Containers/Bare Metal?

  • From the unofficial board meeting [1], an interesting topic came up: how to truly support containers and bare metal under a common API with virtual machines.
  • We want to underscore how OpenStack has an advantage by being able to provide both virtual machines and bare metal as two different resources when the “but the cloud should …” sentiment arises.
  • The discussion around “supporting containers” was different and was not about Nova providing them.
    • Instead work with communities on making OpenStack the best place to run things like Kubernetes and Docker swarm.
  • We want to be supportive of bare metal and containers, but the way we want to be supportive is different for each.
  • In the past, a common compute API was contemplated for Magnum; however, it was understood that such an API would result in the lowest common denominator of all compute types and an exceedingly complex interface.
    • Projects like Trove that want to offer these compute choices without adding complexity within their own project can utilize solutions with Nova in deploying virtual machines, bare metal and containers (libvirt-lxc).
  • Magnum will be having a summit session [2] to discuss if it makes sense to build a common abstraction layer for Kubernetes, Docker swarm and Mesos.
  • Opinions have been expressed that both native APIs and LCD (lowest common denominator) APIs can co-exist.
    • Trove being an example of a service that doesn’t need everything a native API would give.
    • Migrate the workload from VM to container.
    • Support hybrid deployment (VMs & containers) of their application.
    • Bring containers (in Magnum bays) to a Heat template, and enable connections between containers and other OpenStack resources
    • Support containers in Horizon.
    • Send container metrics to Ceilometer
    • Portable experience across container solutions.
    • Some people just want a container and don’t want the complexities of others (COEs, bays, baymodels, etc.)
  • Full thread

Delimiter, the Quota Management Library Proposal

  • At this point, there are a fair amount of objections to developing a service to manage quotas for all services. Discussion has shifted to developing a library that services will use to manage their own quotas.
  • You don’t need a serializable isolation level; just use a compare-and-update-with-retries strategy. This prevents multiple writers from oversubscribing any resource, even at lower isolation levels.
    • The “generation” field in the inventories table is what allows multiple writers to ensure a consistent view of the data without relying on heavy lock-based semantics in relational database management systems.
  • Reservations don’t belong in a quota library.
    • A reservation is a claim on some resource for a period of time.
    • Quota checking returns whether a system can, right now, handle a request to claim a set of resources.
  • Key aspects of the Delimiter Library:
    • It’s a library, not a service.
    • Impose limits on resource consumption.
    • Will not be responsible for rate limiting.
    • Will not maintain data for resources. Projects will take care of keeping/maintaining data for the resources and resource consumption.
    • Will not have a concept of reservations.
    • Will fetch project quota from respective projects.
    • Will take into account whether a project is flat or nested.
  • Delimiter will rely on the concept of a generation-id to guarantee sequencing. The generation-id gives a point-in-time view of resource usage in a project. Projects consuming Delimiter will need to provide this information when checking or consuming quota. At present, Nova [3] has the concept of a generation-id.
  • Full thread
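The compare-and-update scheme described above can be sketched in a few lines. The table and column names follow the thread’s `inventories`/`generation` wording, but everything else here, including the sqlite backend and function names, is an illustrative assumption rather than Delimiter’s actual API.

```python
import sqlite3


class OverQuota(Exception):
    """Raised when a claim would exceed the resource limit."""


class RetriesExceeded(Exception):
    """Raised when we keep losing the compare-and-update race."""


def consume_quota(conn, project_id, resource, amount, limit, retries=5):
    """Claim `amount` of `resource` for a project.

    Uses compare-and-update on a generation column, so concurrent
    writers cannot oversubscribe the limit even without a serializable
    isolation level.
    """
    for _ in range(retries):
        used, generation = conn.execute(
            "SELECT used, generation FROM inventories"
            " WHERE project_id = ? AND resource = ?",
            (project_id, resource)).fetchone()
        if used + amount > limit:
            raise OverQuota(resource)
        # The UPDATE only matches if nobody bumped the generation since
        # our read; a lost race updates zero rows and we retry.
        cur = conn.execute(
            "UPDATE inventories SET used = ?, generation = generation + 1"
            " WHERE project_id = ? AND resource = ? AND generation = ?",
            (used + amount, project_id, resource, generation))
        conn.commit()
        if cur.rowcount == 1:
            return used + amount
    raise RetriesExceeded(resource)
```

The generation bump is what replaces locking: a writer whose view is stale simply matches no rows, re-reads, and retries.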

Newton Release Management Communication

  • Volunteers filling PTL and liaison positions are responsible for ensuring that communication between project teams happens smoothly.
  • Email, for announcements and asynchronous communication.
    • The release team will use the “[release]” topic tag in the openstack-dev mailing list.
    • Doug Hellmann will send countdown emails with weekly updates on:
      • focuses
      • tasks
      • important upcoming dates
    • Configure your mail clients accordingly so that these messages are visible.
  • IRC, for time-sensitive interactions.
    • You should have an IRC bouncer set up and be available in the #openstack-release channel on Freenode. You should definitely be in there during deadline periods (the week before and the week of each deadline).
  • Written documentation, for relatively stable information.
    • The release team has published the schedule for the Newton cycle [4].
    • If your project has something unique to add to the release schedule, send patches to the openstack/release repository.
  • Please ensure the release liaison for your project has the time and ability to handle the communication necessary to manage your release.
  • Our release milestones and deadlines are date-based, not feature-based. When the date passes, so does the milestone. If you miss it, you miss it. A few projects ran into problems during Mitaka because of missed communications.
  • Full thread

OpenStack Client Slowness

  • In profiling the nova help command, it was noticed that a fair bit of time is spent in the pkg_resources module and its use of pyparsing. Could we avoid the startup penalty by not starting a new Python interpreter for each command we run?
    • In tracing Devstack today with a particular configuration, it was noticed that the openstack and neutron commands run 140 times. If each one of those has a 1.5s overhead, we could potentially save 3½ minutes off Devstack execution time.
    • As a proof of concept, Daniel Berrange created an openstack-server command which listens on a Unix socket for requests and then invokes OpenStackComputeShell.main. The nova, neutron and openstack commands would then call out to this openstack-server command.
    • Devstack results without this tweak:
      • real 21m34.050s
      • user 7m8.649s
      • sys 1m57.865s
    • Devstack results with this tweak:
      • real 17m47.059s
      • user 3m51.087s
      • sys 1m42.428s
  • Some notes from Dean Troyer for those who are interested in investigating this further:
    • OpenStack Client does not load any project client until it’s actually needed to make a REST call.
    • Timing on a help command includes a complete scan of all entry points to generate the list of commands.
    • The --time option lists all REST calls that properly go through our TimingSession object. That should be all of them, unless a library doesn’t use the session it is given.
    • Interactive mode can be useful to get timing on just the setup/teardown process without actually running a command.
  • Full thread

Input Needed On Summit Discussion About Global Requirements

  • Co-installability of big tent projects is a huge cost in energy spent. Service isolation with containers, virtual environments or different hosts allows avoiding having to solve this problem.
  • All-in-one installations, for example, are supported today because development environments use Devstack.
  • Just like with the backwards compatibility library and client discussion, OpenStack services co-existing on the same host may share the same dependencies. Today we don’t guarantee things will work if you upgrade Nova to Newton and it upgrades shared clients/libraries while the Cinder service stays at Mitaka.
  • Devstack using virtual environments was pretty much already there, but due to operator feedback this was stopped.
  • Traditional distributions rely on the community being mindful of shared dependency versions across services, so that it’s possible to use apt/yum tools to install OpenStack easily.
    • According to the 2016 OpenStack user survey, 56% of deployments are using “unmodified packages from the operating systems”. [4]
  • Other distributions are starting to support container-based packages, where the constraint of one version of a library at a time goes away.
    • Regardless, global requirements [5] provide us a mechanism to encourage dependency convergence.
      • Limits knowledge required to operate OpenStack.
      • Facilitates contributors jumping from one code base to another.
      • Checkpoint for license checks.
      • Reduce overall security exposure by limiting code we rely on.
    • Some feel this is a regression to the days of not having reliable packaging management. Containers could be lagging/missing critical security patches for example.
  • Full thread