OpenStack Developer Mailing List Digest May 14 to June 17

SuccessBot Says

  • Qiming: Senlin has completed API migration from WADL.
  • Mugsie: Kiall Fixed the gate – development can now continue!!!
  • notmyname: exactly 6 years ago today, Swift was put into production
  • kiall: DNS API reference is live [1].
  • sdague: Nova legacy v2 api code removed [2].
  • HenryG: Last remnant of oslo incubator removed from Neutron [3].
  • dstanek: I was able to perform a roundtrip with keystone using my new SAML2 middleware!
  • Sdague: Nova now defaults to Glance v2 for image operations [4].
  • Ajaeger: First Project Specific Install Guide is published – congrats to the heat team!
  • Jeblair: There is no Jenkins, only Zuul.
  • All

Require A Level Playing Field for OpenStack Projects

  • Thierry Carrez proposes a new requirement [5] for OpenStack “official” projects.
  • An important characteristic of open collaboration grounds is that they need to be a level playing field. No specific organization can be given an unfair advantage.
    • Projects that are blessed as “official” project teams need to operate in a fair manner, otherwise they could essentially be a Trojan horse for a given organization.
    • If, in a given project, developers from one specific organization benefit from access to specific knowledge or hardware, then the project should be rejected under the “open community” rule.
    • Projects like Cinder provide an interesting grey area, but as long as all drivers are in and there is a fully functional (and popular) open source implementation there is likely no specific organization considered as unfairly benefiting.
  • A Neutron plugin targeting a specific piece of networking hardware would likely give an unfair advantage to developers from the hardware’s manufacturer (having access to hardware for testing and being able to see and make changes to its proprietary source code).
  • Open source projects that don’t meet the open community requirement can still exist in the ecosystem (developed using Gerrit, an openstack/* git repository, and gate testing), but as unofficial projects.
  • Full thread

Add Option to Disable Some Strict Response Checking for Interoperability Testing

  • Nova introduced their API microversion change [6].
  • The QA team added strict API schema checking to Tempest to ensure no additional properties appear in Nova API responses [7][8].
    • In the last year, three vendors participating in the OpenStack powered trademark program were impacted by this [9].
  • DefCore working group determines guidelines for the OpenStack powered program.
    • Includes capabilities with associated functional tests from Tempest that must pass.
    • There is a balance of future direction of development with lagging indicators of deployments and user adoption.
  • Chris Hoge, a member of the working group, would like to implement a temporary waiver for the strict API checking requirements.
    • While this was discussed publicly in the developer community and took some time to implement, it still landed quickly from deployers’ perspective and broke several existing deployments overnight.
    • It’s not viable for downstream deployers to use older versions of Tempest that don’t have this strict response checking, due to the TC resolution passed [10] advising DefCore to use Tempest as the single source of capability testing.
  • Proposal:
    • Short term:
      • There will be a blueprint and patch to Tempest that allows configuration of a grey-list of Nova APIs for which strict response checking on additional properties will be disabled.
      • Using this code will emit a deprecation warning.
      • This will be removed in 2017.01.
      • Vendors are required to submit the grey-list of APIs with additional response data, which would be published to their marketplace entry.
    • Long term:
      • Vendors will be expected to work with upstream to update the API returning additional data.
      • The waiver would no longer be allowed after the release of the 2017.01 guideline.
  • Former QA PTL Matthew Treinish feels this is a big step backwards.
    • Vendors who have implemented out-of-band extensions or injected additional things into responses believe that by doing so they’re interoperable. The API is not a place for vendor differentiation.
    • As a user of several clouds, random data in responses makes them harder to write code against: which responses are vendor specific?
  • The alternatives to giving vendors more time in the market:
    • Having some vendors leave the Powered program, unnecessarily weakening it.
    • Force DefCore to adopt non-upstream testing, either as a fork or an independent test suite.
  • If the new enforcement policies had been applied by adding new tests to Tempest, then DefCore could have added them using its processes over a period of time, and downstream deployers might not have had problems.
    • Instead, the behavior of a number of existing tests changed.
  • Tempest master today supports all currently supported stable branches.
    • Tags are made in the git repository when support for a release is added or dropped.
      • Branchless Tempest was originally started back in the Icehouse release and was implemented to enforce that the API is the same across release boundaries.
  • If DefCore wants the lowest common denominator for Kilo, Liberty, and Mitaka there’s a tag for that [11]. For Juno, Kilo, Liberty the tag would be [12].
  • Full thread
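The short-term grey-list proposal might look something like the tempest.conf fragment below. This is purely a hypothetical sketch: the section and option names are illustrative and do not reflect the actual Tempest patch.

```ini
# Hypothetical sketch only; section and option names are made up for
# illustration and are not the real Tempest implementation.
[compute-feature-enabled]
# Grey-list of Nova APIs for which strict checking of additional
# response properties would be disabled. Using it would emit a
# deprecation warning, with support removed in the 2017.01 guideline.
strict_response_check_greylist = os-hypervisors, os-services
```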

There Is No Jenkins, Only Zuul

  • Since the inception of OpenStack, we have used Jenkins to perform testing and artifact building.
    • When we only had two git repositories, we had one Jenkins master and a few slaves. This was easy to maintain.
    • Things have grown significantly with 1,200 git repositories, 8,000 jobs spread across 8 Jenkins masters and 800 dynamic slave nodes.
  • Jenkins Job Builder [13] was created to generate those 8,000 jobs from templated YAML.
  • Zuul [14] was created to drive project automation, directing our testing and running tens of thousands of jobs each day:
    • Responding to code review events.
    • Stacking potential changes to be tested together.
  • Zuul version 3 has major changes:
    • Easier to run jobs in multi-node environments
    • Easy to manage large number of jobs
    • Job variations
    • Support in-tree job configuration
    • Ability to define jobs using Ansible
  • While version 3 is still in development, it is already capable of running all of our jobs.
  • As of June 16th, we have turned off our last Jenkins master and all of our automation is being run by Zuul.
    • Jenkins Job Builder has contributors beyond OpenStack, and will continue to be maintained by them.
  • Full thread
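The templated YAML consumed by Jenkins Job Builder looks roughly like the sketch below (job and project names here are invented for illustration): a job-template plus a project definition expands into concrete Jenkins jobs.

```yaml
# Illustrative Jenkins Job Builder input; names are made up.
- job-template:
    name: 'gate-{name}-pep8'
    builders:
      - shell: 'tox -e pep8'

- project:
    name: example-project
    jobs:
      - 'gate-{name}-pep8'
```

JJB expands the template once per project (here producing a job named gate-example-project-pep8), which is how a few YAML files can generate thousands of jobs.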

Languages vs. Scope of “OpenStack”

  • Where does OpenStack stop, and where does the wider open source community start? Two options:
    • If OpenStack is purely an “integration engine” to lower-level technologies (e.g. hypervisors, databases, block storage) the scope is limited and Python should be plenty and we don’t need to fragment our community.
    • If OpenStack is “whatever it takes to reach our mission”, then yes we need to add one language to cover lower-level/native optimization.
  • Swift PTL John Dickinson mentions defining the scope of OpenStack projects does not define the languages needed to implement them. The considerations are orthogonal.
    • OpenStack is defined as whatever it takes to fulfill the mission statement.
    • Defining “lower level” is very hard. Since the Nova API listens on public network interfaces and coordinates with various services in a cluster, is it low-level enough to consider optimizations?
  • Another approach is product-centric: “Lower-level pieces are OpenStack dependencies, rather than OpenStack itself.”
    • Not governed by the TC, and it can use any language and tool deemed necessary.
    • There are a large number of open source projects and libraries that OpenStack needs to fulfill its mission that are not “OpenStack”: Python, MySQL, KVM, Ceph, OpenvSwitch.
  • Do we want to be in the business of building data plane services that will run into Python limitations?
    • Control plane services are very unlikely to ever hit a scaling concern where rewriting in another language is needed.
  • Swift hit limitations in Python first because of the maturity of the project and they are now focused on this kind of optimization.
    • Glance (partially data plane) did hit this limit, and it was mitigated by folks using Ceph and exposing that directly to Nova. So now Glance only cares about location and metadata; dependencies like Ceph handle the data plane.
  • The resolution for the Go programming language was discussed in previous Technical Committee meetings and was not passed [14]. John Dickinson and others do plan to carry another effort forward for Swift to have an exception for usage of the language.
  • Full thread


Technical Committee Highlights June 13, 2016

It has been a while since our last highlight post so this one is full of updates.

New release tag: cycle-trailing

This is a new addition to the set of tags describing the release models. It allows specific projects to do their releases after the main OpenStack release has been cut. The tag is useful for projects that need to wait for the “final” OpenStack release to be out. Some examples of these projects are Kolla, TripleO, Ansible, etc.

Reorganizing cross-project work coordination

The cross project team is the reference team when it comes to reviewing cross project specs. This resolution grants the cross project team approval rights on cross-project specs and therefore the ability to merge such specs without the Technical Committee’s intervention. This is a great step forward on the TC’s mission of enabling the community to be as autonomous as possible. This resolution recognizes reviewers of openstack-specs as a team.

Project additions and removals

– Addition of OpenStack Salt: This project brings in SaltStack formulas for installing and operating OpenStack cloud deployments. The main focus of the project is to set up development, testing, and production OpenStack deployments in an easy, scalable and predictable way.

– Addition of OpenStack Vitrage: This project aims to organize, analyze and visualize OpenStack alarms & events, yield insights regarding the root cause of problems and deduce their existence before they are directly detected.

– Addition of OpenStack Watcher: Watcher’s goal is to provide a flexible and scalable resource optimization service for multi-tenant OpenStack-based clouds.

– Removal of OpenStack Cue: Cue’s project team activity has dropped below what is expected of an official OpenStack project team. It was therefore removed from the list of official projects.

Recommendation on location of tests for DefCore verification

A new resolution has been merged recommending that the DefCore team use Tempest’s repository as the central repository for verification tests. During the summit, two different options were discussed as possible recommendations:

  • Use the tests within the Tempest git repository by themselves.
  • Add to those Tempest tests by allowing projects to host tests in their tree using Tempest’s plugin feature.

By recommending the Tempest repository, the community favors centralization of these tests, which will improve collaboration on DefCore matters and also improve the consistency of the tests used for API verification.

Mission Statements Updates

On one hand, Magnum has narrowed its mission statement after discussing it at the Austin summit. The team has decided Magnum should focus on managing container orchestration engines (COEs) rather than also managing container lifecycles. On the other hand, Kuryr has expanded its mission statement to also include management of storage abstractions for containers.

Expanding technology choices in OpenStack projects

On the face of it, the request sounds simple. “Can we use golang in OpenStack?” asked of the TC in this governance patch review.

It’s a yes or no question. It sets us up for black and white definitions, even though the cascading ramifications are many for either answer.

Yes means less expertise sharing between projects as well as some isolation. Our hope is that certain technology decisions are made in the best interest of our community and The OpenStack way. We would trust projects to have a plan for all the operators and users who are affected by a technology choice. A Yes means trusting all our projects (over fifty-five currently) not to lose time by chasing the latest or doing useless rewrites, and believing that freedom of technology choice is more important than sharing common knowledge and expertise. For some, it means we are evolving and innovating as technologists.

A No vote here means that if you want to develop with another language, you should form your new language community outside of the OpenStack one. Even with a No vote, projects can still use our development infrastructure such as Mailing Lists, Gerrit, Zuul, and so on. A No vote on a language choice means that team’s deliverable is simply outside of the Technical Committee governance oversight, and not handled by our cross-project teams such as release, doc, quality. For the good of your user base, you should ensure all the technology ramifications that a yes vote would, but your team doesn’t need to work under TC oversight.

What about getting from No to Yes? Could it mean that we would like you to remain in the OpenStack community, but to plug in the parts that were not built with the entire community in mind?

We’ve discussed additional grey area answers. Here is the spectrum:

  • Yes, without limits.
  • Yes, but within limits outlined in our governance.
  • No, remember that it’s perfectly fine to have external dependencies written in other languages.
  • No, projects that don’t work within our technical standards don’t leverage the shared resources OpenStack offers so they can work outside of OpenStack.

We have dismissed the outer edge descriptions for Yes and No. We continued to discuss the inner Yes and inner No descriptions this week, with none of the options being really satisfactory. After lots of discussion, we came around to a No answer, abandoning the patch, while seeking input for getting to yes within limits.

Basically, our answer is about focusing on what we have in common, what defines us. It is in line with the big-tent approach of defining an “OpenStack project” as being developed by a coherent community using the OpenStack Way. It’s about sharing more things. We tolerate and even embrace difference where it is needed, but that doesn’t mean the project has to live within the tent. It can be a friendly neighbour rather than living inside and breaking the tent into smaller sub-tents.

FAQ: Evolving the OpenStack Design Summit

As a result of community discussion, the OpenStack Foundation is evolving the format of the events it produces for the community starting in 2017. The proposal is to split the current Design Summit, which is held every six months as part of the main OpenStack Summit, into two parts: a “Forum” at the main Summit for cross-community discussions and user input (we call this the “what” discussions), and a separate “Project Teams Gathering” event for project team members to meet and get things done (the “how” discussions and sprinting). The intention is to alleviate the need for a separate mid-cycle, so development teams would continue to meet four times per year, twice with the community at large and twice in a smaller, more focused environment. The release cycle would also shift to create more space between the release and Summit. The change triggered a lot of fears and questions — the intent of this FAQ is to try to address them.

Note that we held a community town hall to explain the evolving format of the OpenStack Design Summit. You can watch this recording to learn more about it.

Q: How is the change helping upstream developers?


A: During the Summit week, upstream developers have a lot of different goals. We leverage the Summit to communicate new things (give presentations), learn new things (attend presentations), get feedback from users and operators over our last release, gather pain points and priorities for upcoming development, propose changes and see what the community thinks of them, recruit and on-board new team members, have essential cross-project discussions, meet with our existing project team members, kickstart the work on the new cycle, and get things done. There is just not enough time in 4 or 5 days to do all of that, so we usually drop half of those goals. Most will skip attending presentations. Some will abandon the idea of presenting. Some will drop cross-project discussions, resulting in them not having the critical mass of representation to actually serve their purpose. Some will drop out of their project team meeting to run somewhere else. The time conflicts make us jump between sessions, resulting in us being generally unavailable for listening to feedback, pain points, or newcomers. By the end of the week we are so tired we can’t get anything done. We need to free up time during the week. There are goals that can only be reached in the Summit setting, where all of our community is represented — we should keep those goals in the Summit week. There are goals that are better reached in a distraction-free setting — we should organize a separate event for them.

Q: What is the “Forum” ?


A: “Forum” is the codename for the part of the Design Summit (Ops+Devs) that would still happen at the main Summit event. It will primarily be focused on strategic discussions and planning for the next release (the “what”), essentially the start of the next release cycle even though development will not begin for another 3 months. We should still take advantage of having all of our community (Devs, Ops, End users…) represented to hold cross-community discussions there. That means getting feedback from users and operators on specific projects in our last release, gathering pain points and priorities for upcoming development, proposing changes and seeing what the community thinks of them, and recruiting and on-boarding new team members. We’d like to do that in a neutral space (rather than have separate “Ops” and “Dev” days) so that the discussion is not influenced by who owns the session. This event would happen at least two months after the previous release, to give users time to test and bring valuable feedback.

Q: What is the “Project Teams Gathering” ?


A: “Project Teams Gathering” is the codename for the part of the Design Summit that will now happen as a separate event. It will primarily provide space for project teams to make implementation decisions and start development work (the “how”). This is where we’d have essential cross-project discussions, meet with our existing project team members, generate shared understanding, kickstart the development work on the new cycle, and generally get things done. OpenStack project teams would be given separate rooms to meet for one or more days, in a loose format (no 40-min slots). If you self-identify as a member of a specific OpenStack project team, you should definitely join. If you are not part of a specific project team (or can’t pick one team), you could still come but your experience of the event would likely not be optimal, since the goal of the attendees at this event is to get things done, not listen to feedback or engage with newcomers. This event would happen around the previous release time, when developers are ready to fully switch development work to the new cycle.

Q: How is the change helping OpenStack as a whole?


A: Putting the larger Summit event further away from last release should dramatically improve the feedback loop. Currently, calling for feedback at the Summit is not working: users haven’t had time to use the last release at all, so most of the feedback we collect is based on the 7-month old previous release. It is also the wrong timing to push for new features: we are already well into the new cycle and it’s too late to add new priorities to the mix. The new position of the “Forum” event with respect to the development cycle should make it late enough to get feedback from the previous release and early enough to influence what gets done on the next cycle. By freeing up developers time during the Summit week, we also expect to improve the Summit experience for all attendees: developers will be more available to engage and listen. The technical content at the conference will also benefit from having more upstream developers available to give talks and participate in panels. Finally, placing the Summit further away from the release should help vendors prepare and announce products based on the latest release, making the Summit marketplace more attractive and relevant.


Q: When will the change happen ?


A: Summits are booked through 2017 already, so we can’t really move them anytime soon. Instead, we propose to stagger the release cycle. There are actually 7 months between Barcelona and Boston, so we have an opportunity there to stagger the cycle with limited impact. The idea would be to do a 5-month release cycle (between October and February), place our first project teams gathering end-of-February, then go back to 6-month cycles (March-August) and have the Boston Summit (and Forum) in the middle of it (May). So the change would kick in after Barcelona, in 2017. That gives us time to research venues and refine the new event format.

Q: What about mid-cycles ?


A: Mid-cycle sprints were organized separately by project teams as a way to gather team members and get things done. They grew in popularity as the distractions at the main Summit increased and it became hard for project teams to get together, build social bonds and generally be productive at the Design Summit. We hope that teams will get back that lost productivity and social bonding at the Project Teams Gathering, eliminating the need for separate team-specific sprints. 

Q: This Project Teams Gathering thing is likely to be a huge event too. How am I expected to be productive there? Or to be able to build social bonds with my small team?


A: Project Teams Gatherings are much smaller events compared to Summits (think 400-500 people rather than 7500). Project teams are placed in separate rooms, much like a co-located midcycle sprint. The only moment where everyone would meet would be around lunch. There would be no evening parties: project teams would be encouraged to organize separate team dinners and build strong social bonds.

Q: Does that new format actually help with cross-project work?

A: Cross-project work was unfortunately one of the things a lot of attendees dropped as they struggled with all the things they had to do during the Summit week. Cross-project workshops ended up being less and less productive, especially in getting to decisions or work produced. Mid-cycle sprints ended up being where the work got done, but organizing them separately meant it was very costly for a member of a cross-project team (infrastructure, docs, QA, release management…) to attend them all. We basically set up our events in a way that made cross-project work prohibitively expensive, and then wondered why we had so much trouble recruiting people to do it. The new format ensures that we have a place to actually do cross-project work, without anything running against it, at the Project Teams Gathering. It dramatically reduces the number of places a Documentation person (for example) needs to travel to get some work done in-person with project team members. It gives project team members in vertical teams an option to break out of their silo and join such a cross-project team. It allows us to dedicate separate rooms to specific cross-project initiatives, beyond existing horizontal teams, to get specific cross-project work done.

Q: Are devs still needed at the main Summit?


A: Upstream developers are still very much needed at the main Summit. The Summit is (and always was) where the feedback loop happens. All project teams need to be represented there, to engage in planning, collect the feedback on their project, participate in cross-community discussions, reach out to new people and on-board new developers. We also very much want to have developers give presentations at the conference portion of the Summit (we actually expect that more of them will have free time to present at the conference, and that the technical content at the Summit will therefore improve). So yes, developers are still very much needed at the main Summit.

Q: My project team falls apart if the whole team doesn’t meet in person every 3 months. We used to do that at the Design Summit and at our separate mid-cycle project team meeting. I fear we’ll lose our ability to all get together every 3 months.


A: As mentioned earlier, we hope the Project Teams Gathering to be a lot more productive than the current Design Summit, reducing the need for mid-cycle sprints. That said, if you really still need to organize a separate mid-cycle sprint, you should definitely feel free to do so. We plan to provide space at the main Summit event so that you can hold mid-cycle sprints there and take advantage of the critical mass of people already around. If you decide to host a mid-cycle sprint, you should communicate that your team mid-cycle will be co-located with the Summit and that team member attendance is strongly encouraged.

Q: We are a small team. We don’t do mid-cycles currently. It feels like that with your change, we’ll have to travel to two events per cycle instead of one.


A: You need to decide if you feel the need to get the team all together to get some work done. If you do, you should participate (as a team) in the Project Teams Gathering. If you don’t, your team should skip it. The PTL and whoever is interested in cross-project work in your team should still definitely come to the Project Teams Gathering, but you don’t need to get every single team member there, as you would not have a team room there. In all cases, your project wants to have some developers present at the Summit to engage with the rest of the community.

Q: The project I’m involved with is mostly driven by a single vendor, most of us work from the same office. I’m not sure it makes sense for all of us to travel to a remote location to get some work done !


A: You are right, it doesn’t. We’ll likely not provide specific space at the Project Teams Gathering for single-vendor project teams. The PTL (and whoever else is interested) should probably still come to the Project Teams Gathering to participate in cross-project work. And you should also definitely come to the Summit to engage with other organizations and contributors and increase your affiliation diversity to the point where you can take advantage of the Project Teams Gathering.

Q: I’m a translator, should I come to the Project Teams Gathering?


A: The I18n team is of course free to meet at the Project Teams Gathering. However, given the nature of the team (large number of members, geographically-dispersed, coming from all over our community, ops, devs, users), it probably makes sense to leverage the Summit to get translators together instead. The Summit constantly reaches out to new communities and countries, while the Project Teams Gathering is likely to focus on major developer areas. We’ll likely get better outreach results by holding I18n sessions or workshops at the “Forum” instead.

Q: A lot of people attend the current Design Summit to get a peek at how the sausage is made, which potentially results in getting involved. Doesn’t the new format jeopardize that on-boarding?


A: It is true that the Design Summit was an essential piece in showing how open design worked to the rest of the world. However, that was always done at the expense of existing project team members’ productivity. Half the time in a 40-min session would be spent summarizing the history of the topic to newcomers. Lively discussions would be interrupted by people in the back asking that participants use the mike. We tried to separate fishbowls and workrooms at the Design Summit, to separate discussion/feedback sessions from team-members work sessions. That worked for a time, but people started working around it, making some work rooms look like overcrowded fishbowl rooms. In the end that made for a miserable experience for everyone involved and created a lot of community tension. In the new format, the “Forum” sessions will still allow people to witness open design at work, and since those are specifically set up as listening sessions (rather than “get things done” sessions), we’ll take time to engage and listen. We’ll free up time for specific on-boarding and education activities. Fewer scheduling conflicts during the week means we won’t be always running to our next sessions and will likely be more available to reach out to others in the hallway track.

Q: What about the Ops midcycle meetup?


A: The Ops meetups are still happening, and for the next year or two probably won’t change much at all. In May, the “Ops Meetups Team” was started to answer the questions about the future of the meetups, and also actively organize the upcoming ones. Part of that team’s definition: “Keeping the spirit of the ops meetup alive” – the meetups are run by ops, for ops and will continue to be. If you have interest, join the team and talk about the number and regional location of the meetups, as well as their content.

Q: What about ATC passes for the Summit?


A: The OpenStack Foundation gave discounted passes to a subset of upstream contributors (not all ATCs) who contributed in the last six months, so that they could more easily attend the Summit. We’ll likely change the model since we would be funding a second event, but will focus on minimizing costs for people who have to travel to both the Summit and the Project Teams Gathering. The initial proposal is to charge a minimal fee for the Project Teams Gathering (to better gauge attendance and help keep sponsorship presence to a minimum), and then anyone who was physically present at the Project Teams Gathering would receive a discount code to attend the next Summit. Something similar is also being looked into for contributors represented by the User Committee (eg. ops). At the same time, we’ll likely beef up the Travel Support Program so that we can get all the needed people at the right events.


OpenStack Developer Mailing List Digest May 7-13

SuccessBot Says

  • Pabelanger: bare-precise has been replaced by ubuntu-precise. Long live DIB
  • bknudson: The Keystone CLI is finally gone. Long live openstack CLI.
  • Jrichli: swift just merged a large effort that started over a year ago that will facilitate new capabilities – like encryption
  • All

Release Count Down for Week R-20, May 16-20

  • Focus
    • Teams should have published summaries from summit sessions to the openstack-dev mailing list.
    • Spec writing
    • Review priority features
  • General notes
    • Release announcement emails will be tagged with ‘new’ instead of ‘release’.
    • Release cycle model tags now say explicitly that the release team manages releases.
  • Release actions
    • Release liaisons should add their name and contact information to this list [1].
    • New liaisons should understand release instructions [2].
    • Project teams that want to change their release model should do so before the first milestone in R-18.
  • Important dates
    • Newton 1 milestone: R-18 June 2
    • Newton release schedule [3]

Collecting Our Wiki Use Cases

  • At the beginning, the community has been using a wiki [4] as a default community information publication platform.
  • There’s a struggle with:
    • Keeping things up-to-date.
    • Preventing vandalism.
    • Documentation for old processes.
    • Pages for projects that no longer exist.
  • This outdated information can make the wiki confusing to use, especially for newcomers, who are often pointed to it by search engines.
  • Various efforts have happened to push information out of the wiki to proper documentation guides like:
    • Infrastructure guide [5]
    • Project team guide [6]
  • Peer reviewed reference websites:
  • There are a lot of use cases for which a wiki is a good solution, and we’ll likely need a lightweight publication platform like the wiki to cover those use cases.
  • If you use the wiki as part of your OpenStack work, make sure it’s captured in this etherpad [9].
  • Full thread

Supporting Go (continued)

  • Continuing from previous Dev Digest [10].
  • Before Go 1.5 (which introduced -buildmode=shared), Go did not support shared libraries. As a consequence, when a library is upgraded, the release team has to trigger a rebuild of each and every reverse dependency.
  • In Swift’s case for looking at Go, it’s hard to write a network service in Python that shuffles data between the network and a block device and effectively uses all the available hardware.
    • Fork()’ing child processes and using cooperative concurrency via eventlet has worked well, but managing all async operations across many cores and many drives is really hard. There isn’t an efficient interface for this in Python; it’s a question of efficient tools for the job at hand.
    • Eventlet, asyncio or anything else single-threaded will have the same problem: filesystem syscalls can take a long time and block the calling thread. The typical loop:
      • Call select()/epoll() to wait for something to happen on many file descriptors.
      • For each ready file descriptor, read it if it is readable; otherwise the kernel returns EWOULDBLOCK and the loop moves on to the next file descriptor.
  • Designate team explains their reasons for Go:
    • MiniDNS is a component that, due to the way it works, is difficult to make major improvements to.
    • The component takes data and sends a zone transfer every time a record set gets updated. That is a full (AXFR) zone transfer where every record in a zone gets sent to each DNS server that end users can hit.
      • There is a DNS standard for incremental change, but it’s complex to implement, and can often end up reverting to a full zone transfer.
    • Ns[1-6] may be tens or hundreds of servers behind anycast IPs and load balancers.
    • Internal or external zones can be quite large. Think 200-300 MB.
    • A zone can have high traffic where a record is added/removed for each boot/destroy.
    • The Designate team is small; after looking at the options and judging the amount of developer hours available, they decided on a different language.
  • Looking at Designate’s implementation, there are some low-hanging-fruit improvements that can be made:
    • Stop spawning a thread per request.
    • Stop instantiating Oslo config object per request.
    • Avoid 3 round trips to the database every request. The majority of the request time here is not spent in Python. This data should be trivial to cache, since Designate knows when to invalidate the cached data.
      • In a real world use case, there could be a cache miss due to the shuffle order of multiple miniDNS servers.
  • The Designate team saw 10x improvement for 2000 record AXFR (without caching). Caching would probably speed up the Go implementation as well.
  • Go historically has poor performance with multiple cores [11].
    • The main advantage of the language could be its CSP concurrency model.
    • Twisted does this very well, but we as a community consistently support eventlet. Eventlet has a threaded programming model, which is poorly suited to Swift’s case.
    • PyPy got a 40% performance improvement over CPython for a benchmark of Twisted’s DNS component 6 years ago [12].
  • Right now our stack already depends on C, Python, Erlang, Java, shell, etc.
  • End users emphatically do not care about the language API servers were written in. They want stability, performance and features.
  • The infrastructure-related issues with Go (reliable builds, packaging, etc.) are being figured out [13].
  • Swift has tested running under PyPy with some conclusions:
    • Assuming production-ready stability of PyPy and OpenStack, everyone should use PyPy over CPython.
      • It’s just simply faster.
      • There are some garbage collector related issues to still work out in Swift’s usage.
      • There are a few patches that do a better job of socket handling in Swift that runs better under PyPy.
    • PyPy only helps when you’ve got a CPU-constrained environment.
    • The Go targets in Swift are related to effective thread management, syscalls, and I/O.
    • See a talk from the Austin Conference about this work [14].
  • Full thread
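The single-threaded readiness loop described above can be sketched in a few lines using Python’s selectors module. This is an illustrative sketch only, not Swift’s actual code; it works for sockets precisely because a non-blocking read either returns data or fails fast with EWOULDBLOCK:

```python
import selectors
import socket

def drain_ready(sel):
    """Read whatever is ready on the registered sockets without blocking."""
    received = []
    for key, _ in sel.select(timeout=0):
        try:
            data = key.fileobj.recv(4096)   # non-blocking read
            if data:
                received.append(data)
        except BlockingIOError:             # EWOULDBLOCK: not ready, move on
            pass
    return received

# Demonstration with a local socket pair.
a, b = socket.socketpair()
a.setblocking(False)
sel = selectors.DefaultSelector()
sel.register(a, selectors.EVENT_READ)
b.send(b"hello")
out = drain_ready(sel)
print(out)  # [b'hello']
```

The limitation the thread points out is that a read() on a regular file never returns EWOULDBLOCK; when the disk is slow the whole loop stalls, which is why eventlet or asyncio alone don’t solve Swift’s problem.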


OpenStack Developer Mailing List Digest April 23 – May 6

Success Bot Says

  • Sdague: nova-network is deprecated [1]
  • Ajaeger: OpenStack content on Transifex has been removed. Zanata has proven to be a stable platform for all translators, and thus Transifex is not needed anymore.
  • All

Backwards Compatibility Follow-up

  • Agreements from recent backwards compatibility for clients and libraries session:
    • Clients need to talk to all versions of OpenStack clouds.
    • Oslo libraries already need to maintain backwards compatibility.
    • Some fraction of our deployments, between 1% and 50%, try to do in-place upgrades where, for example, Nova is upgraded first and Neutron later. But now Neutron has to work with the upgraded libraries from the Nova upgrade.
  • Should we support in-place upgrades? If we do, we need at least 1 or more versions of compatibility where Mitaka Nova can run Newton Oslo+client libraries.
    • If we don’t support in-place upgrades, deployment methods must be architected to avoid ever encountering a situation where a client or one of N services is upgraded separately in a single Python environment. All clients and services in a single Python environment must be upgraded together, or none at all.
  • If we decide to support in-place upgrades, we need to figure out how to test that effectively; it’s a linear growth with the number of stable releases we choose to support.
  • If we decide not to, we have no further requirement to have any cross-over compatibility between OpenStack releases.
  • We still have to be backwards compatible on individual changes.
  • Full thread

Installation Guide Plans for Newton

  • Continuing from a previous Dev Digest [2], the big tent is growing, and our documentation team would like projects to maintain their own installation documentation. This should be done while still providing the valid, working installation information and the consistency the team strives for.
  • The installation guide team held a packed session at the summit and walked away with some solid goals to achieve for Newton.
  • Two issues being discussed:
    • What to do with the existing install guide.
    • Create a way for projects to write installation documentation in their own repository.
  • All guides will be rendered from individual repositories and appear in
  • The Documentation team has recommendations for projects writing their install guides:
    • Build on existing install guide architecture, so there is no reinventing the wheel.
    • Follow documentation conventions [3].
    • Use the same theme called openstackdocstheme.
    • Use the same distributions as the install guide does. Installation from source is an alternative.
    • Guides should be versioned.
    • RST is the preferred documentation format. RST is also easy for translations.
    • Common naming scheme: “X Service Install Guide” – where X is your service name.
  • The chosen URL format is
  • Plenty of work items to follow [4] and volunteers are welcome!
  • Full thread

Proposed Revision To Magnum’s Mission

  • From a summit discussion, there was a proposed revision to Magnum’s mission statement [5].
  • The idea is to narrow the scope of Magnum to allow the team to focus on making popular container orchestration engine (COE) software work great with OpenStack, allowing users to set up fleets of cloud capacity managed by COEs such as Swarm, Kubernetes, Mesos, etc.
  • The /containers resource will be deprecated from Magnum’s API. Any new project may take on the goal of creating an API service that abstracts one or more COEs.
  • Full thread

Supporting the Go Programming Language

  • The Swift community has a git branch feature/hummingbird that contains some parts of Swift reimplemented in Go. [6]
  • The goal is to have a reasonably ready-to-merge feature branch by the Barcelona summit. Shortly after the summit, the plan is to merge the Go code into master.
  • An amended Technical Committee resolution will follow to suggest Go as a supported language in OpenStack projects [7].
  • Some Technical Committee members have expressed wanting to see technical benefits that outweigh the community fragmentation and increase in infrastructure tasks that result from adding that language.
  • Some open questions:
    • How do we run unit tests?
    • How do we provide code coverage?
    • How do we manage dependencies?
    • How do we build source packages?
    • Should we build binary packages in some format?
    • How to manage in tree documentation?
    • How do we handle log and message string translations?
    • How will DevStack install the project as part of a gate job?
  • Designate is also looking into moving a single component into Go.
    • It would be good to have two cases to help avoid baking any project specific assumptions into testing and building interfaces.
  • Full thread

Release Countdown for Week R-21, May 9-13

  • Focus
    • Teams should be focusing on wrapping up incomplete work left over from the end of the Mitaka cycle.
    • Announce plans from the summit.
    • Completing specs and blueprints.
  • General Notes
    • Project teams that want to change their release model tag should do so before the Newton-1 milestone. This can be done by submitting a patch to governance repository in the projects.yaml file.
    • Release announcement emails are being proposed to have their tag switched from “release” to “newrel” [8].
  • Release Actions
    • Release liaisons should add their name and contact information to this list [9].
    • Release liaisons should have their IRC clients join #openstack-release.
  • Important Dates
    • Newton 1 Milestone: R-18 June 2nd
    • Newton release schedule [10]
  • Full thread

Discussion of Image Building in Trove

  • A common question the Trove team receives from new users is how and where to get guest images to experiment with Trove.
    • Documentation exists in multiple places for this today [11][12], but things can still be improved.
  • Trove has a spec proposal [13] for using libguestfs approach to building images instead of using the current diskimage-builder (DIB).
    • All alternatives should be equivalent and interchangeable.
    • Trove already has elements for all supported databases using DIB, but these elements are not packaged for customer use. Doing this would be a small effort of providing an element to install the guest agent software from a fixed location.
    • We should understand the deficiencies, if any, in DIB before switching tool chains. This can be based on Trove’s and Sahara’s experiences.
  • The OpenStack Infrastructure team has been using DIB successfully for a while as it is a flexible tool.
    • By default Nova disables file injection [14]
    • DevStack doesn’t allow you to enable Nova file injection, and hard sets it off [15].
    • Allows bootstrapping with yum or debootstrap.
    • Pick the filesystem for an existing image.
  • Let’s fix the problems with DIB that Trove is having and avoid reinventing the wheel.
  • What are the problems with DIB, and how do they prevent Trove/Sahara users from building images today?
    • Libguestfs manipulates images in a clean helper VM created by libguestfs in a predictable way.
      • Isolation is something DIB gives up in order to provide speed/lower resource usage.
    • In-place image manipulation can occur (package installs, configuration declarations) without uncompressing or recompressing an entire image.
      • It’s trivial to make a DIB element which modifies an existing image in place.
    • DIB scripts’ configuration settings, passed in freeform environment variables, can be difficult to understand and document for new users. Libguestfs demands more formal parameter passing.
    • Ease of “just give me an image. I don’t care about twiddling knobs”.
      • OpenStack Infra team already has a wrapper for this [16].
  • Sahara has support for several image generation-related cases:
    • Packing an image pre-cluster spawn in Nova.
    • Building clusters from a “clean” operating system image post-Nova spawn.
    • Validating images after Nova spawn.
  • In a Sahara summit session, a plan was discussed to use libguestfs rather than DIB, with an intent to define a linear, idempotent set of steps to package images for any plugin.
  • Having two sets of image building code to maintain would be a huge downside.
  • What’s stopping us a few releases down the line from deciding that libguestfs doesn’t perform well and picking a new tool? Since DIB is an OpenStack project, Trove should consider supporting a standard way of building images.
  • Trove summit discussion resulted in agreement of advancing the image builder by making it easier to build guest images leveraging DIB.
    • Project repository proposals have been made [17][18]
  • Full thread


OpenStack Developer Mailing List Digest April 9-22

Success Bot Says

  • Clarkb: infra team redeployed Gerrit on a new larger server. Should serve reviews with fewer 500 errors.
  • danpb: wooohoooo, finally booted a real VM using nova + os-vif + openvswitch + privsep
  • neiljerram: Neutron routed networks spec was merged today; great job Carl + everyone else who contributed!
  • Sigmavirus24: Hacking 0.11.0 is the first release of the project in over a year.
  • Stevemar: dtroyer just released openstackclient 2.4.0 – now with more network commands \o/
  • odyssey4me: OpenStack-Ansible Mitaka 13.0.1 has been released!
  • All

One Platform – Containers/Bare Metal?

  • From the unofficial board meeting [1], an interesting topic came up: how to truly support containers and bare metal under a common API with virtual machines.
  • We want to underscore how OpenStack has an advantage by being able to provide both virtual machines and bare metal as two different resources, when the “but the cloud should …” sentiment arises.
  • The discussion around “supporting containers” was different and was not about Nova providing them.
    • Instead work with communities on making OpenStack the best place to run things like Kubernetes and Docker swarm.
  • We want to be supportive of bare metal and containers, but the way we want to be supportive is different for each.
  • In the past, a common compute API was contemplated for Magnum; however, it was understood that the API would result in the lowest common denominator of all compute types and an exceedingly complex interface.
    • Projects like Trove that want to offer these compute choices without adding complexity within their own project can utilize solutions with Nova in deploying virtual machines, bare metal and containers (libvirt-lxc).
  • Magnum will be having a summit session [2] to discuss if it makes sense to build a common abstraction layer for Kubernetes, Docker swarm and Mesos.
  • There are expressed opinions that both native APIs and LCD APIs can co-exist.
    • Trove being an example of a service that doesn’t need everything a native API would give.
    • Migrate the workload from VM to container.
    • Support hybrid deployment (VMs & containers) of their application.
    • Bring containers (in Magnum bays) to a Heat template, and enable connections between containers and other OpenStack resources
    • Bring containers to Horizon.
    • Send container metrics to Ceilometer
    • Portable experience across container solutions.
    • Some people just want a container and don’t want the complexities of others (COEs, bays, baymodels, etc.)
  • Full thread

Delimiter, the Quota Management Library Proposal

  • At this point, there is a fair number of objections to developing a service to manage quotas for all services. Instead, the discussion is on developing a library that services will use to manage their own quotas.
  • You don’t need a serializable isolation level. Just use a compare-and-update-with-retries strategy. This will prevent even multiple writers from oversubscribing any resource, without that isolation level.
    • The “generation” field in the inventories table is what allows multiple writers to ensure a consistent view of the data without needing to rely on heavy lock-based semantics in relational database management systems.
  • Reservation doesn’t belong in quota library.
    • A reservation is a time-bound claim on some resource.
    • Quota checking is returning whether the system, right now, can handle a request to claim a set of resources.
  • Key aspects of the Delimiter Library:
    • It’s a library, not a service.
    • Impose limits on resource consumptions.
    • Will not be responsible for rate limiting.
    • Will not maintain data for resources. Projects will take care of keeping/maintaining data for the resources and resource consumption.
    • Will not have a concept of reservations.
    • Will fetch project quota from respective projects.
    • Will take into consideration whether a project is flat or nested.
  • Delimiter will rely on the concept of a generation-id to guarantee sequencing. The generation-id gives a point-in-time view of resource usage in a project. Projects consuming Delimiter will need to provide this information while checking or consuming quota. At present, Nova [3] has the concept of a generation-id.
  • Full thread
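The compare-and-update-with-retries strategy above can be sketched as follows. This is an illustrative sketch, not Delimiter’s API: the in-memory dict stands in for a row of a hypothetical inventories table, and in SQL the update would carry a `WHERE generation = :seen` clause so that only one concurrent writer can win:

```python
class Conflict(Exception):
    pass

def compare_and_update(row, new_used, seen_generation):
    """Succeed only if nobody else updated the row since we read it."""
    if row["generation"] != seen_generation:
        raise Conflict
    row["used"] = new_used
    row["generation"] += 1          # every successful write bumps the generation

def consume(row, amount, limit, retries=3):
    for _ in range(retries):
        used, gen = row["used"], row["generation"]   # read current usage + generation
        if used + amount > limit:
            raise ValueError("quota exceeded")
        try:
            compare_and_update(row, used + amount, gen)
            return
        except Conflict:
            continue                # another writer won the race; re-read and retry
    raise Conflict("too much write contention")

# One row of the hypothetical inventories table.
row = {"used": 0, "generation": 7}
consume(row, 5, limit=10)
print(row)  # {'used': 5, 'generation': 8}
```

Because the limit check and the conditional write always use the same generation snapshot, two racing writers cannot both commit against generation 7; the loser simply retries against the new state.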

Newton Release Management Communication

  • Volunteers filling PTL and liaison positions are responsible for ensuring communication between project teams happens smoothly.
  • Email, for announcements and asynchronous communication.
    • The release team will use the “[release]” topic tag in the openstack-dev mailing list.
    • Doug Hellmann will send countdown emails with weekly updates on:
      • focuses
      • tasks
      • important upcoming dates
    • Configure your mail clients accordingly so that these messages are visible.
  • IRC, for time-sensitive interactions.
    • You should have an IRC bouncer set up and be present in the #openstack-release channel on freenode. You should definitely be in there during deadline periods (the week before and the week of each deadline).
  • Written documentation, for relatively stable information.
    • The release team has published the schedule for the Newton cycle [4].
    • If your project has something unique to add to the release schedule, send patches to the openstack/release repository.
  • Please ensure the release liaison for your project has the time and ability to handle the communication necessary to manage your release.
  • Our release milestones and deadlines are date-based, not feature-based. When the date passes, so does the milestone. If you miss it, you miss it. A few projects ran into problems during Mitaka because of missed communications.
  • Full thread

OpenStack Client Slowness

  • In profiling the nova help command, it was noticed that a fair bit of time was spent in the pkg_resources module and its use of pyparsing. Could we avoid the startup penalty of starting a new Python interpreter for each command we run?
    • In tracing Devstack today with a particular configuration, it was noticed that the openstack and neutron commands run 140 times. If each one of those has a 1.5s overhead, we could potentially save 3 ½ minutes off Devstack execution time.
    • As a proof of concept, Daniel Berrange created an openstack-server command which listens on a unix socket for requests and then invokes the appropriate client shell entry point (e.g. OpenStackComputeShell.main). The nova, neutron and openstack commands would then call out to this openstack-server command.
    • Devstack results without this tweak:
      • real 21m34.050s
      • user 7m8.649s
      • sys 1m57.865s
    • Devstack results with this tweak:
      • real 17m47.059s
      • user 3m51.087s
      • sys 1m42.428s
  • Some notes from Dean Troyer for those who are interested in investigating this further:
    • OpenStack Client does not load any project client until it’s actually needed to make a REST call.
    • Timing on a help command includes a complete scan of all entry points to generate the list of commands.
    • The --time option lists all REST calls that properly go through our TimingSession object. That should be all of them, unless a library doesn’t use the session it is given.
    • Interactive mode can be useful to get timing on just the setup/teardown process without actually running a command.
  • Full thread
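The proof-of-concept idea is easy to sketch: a long-lived process pays the import and entry-point-scan cost once, and thin wrappers forward each command line to it over a unix socket. The sketch below is illustrative only; `dispatch` stands in for the real client shell entry point, and none of these names come from the actual patch:

```python
import socket
import tempfile
import threading

def dispatch(cmdline):
    # Stand-in for invoking the real client shell entry point; in the
    # long-lived server the expensive imports were paid once at startup.
    return "ran: " + cmdline

# The long-lived server: bind a unix socket, then serve one request
# per connection on a background thread.
sock_path = tempfile.mktemp(suffix=".sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)
srv.listen(1)

def serve_one():
    conn, _ = srv.accept()
    with conn:
        cmd = conn.recv(4096).decode()
        conn.sendall(dispatch(cmd).encode())

t = threading.Thread(target=serve_one)
t.start()

# The thin client wrapper: no heavy imports, just connect and forward.
cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
cli.connect(sock_path)
cli.sendall(b"server list")
reply = cli.recv(4096).decode()
cli.close()
t.join()
srv.close()
print(reply)  # ran: server list
```

The per-invocation cost of the wrapper is one connect() plus one round trip, instead of interpreter startup plus a pkg_resources entry-point scan, which is where the measured Devstack savings come from.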

Input Needed On Summit Discussion About Global Requirements

  • Co-installability of big tent projects is a huge cost in energy spent. Service isolation with containers, virtual environments or different hosts allows avoiding having to solve this problem.
  • All-in-one installations today for example are supported because of development environments using Devstack.
  • Just like with the backwards compatibility library and client discussion, OpenStack services co-existing on the same host may share the same dependencies. Today we don’t guarantee things will work if you upgrade Nova to Newton and it upgrades clients/libraries shared with a Cinder service still at Mitaka.
  • Devstack support for using virtual environments is pretty much already there, but due to operator feedback it was stopped.
  • Traditional distributions rely on the community being mindful of shared dependency versions across services, so that it’s possible to use apt/yum tools to install OpenStack easily.
    • According to the 2016 OpenStack user survey, 56% of deployments are using “unmodified packages from the operating systems”. [4]
  • Other distributions are starting to support container-based packages, where the constraint of one version of a library at a time will go away.
    • Regardless, global requirements [5] will provide us a mechanism to encourage dependency convergence:
      • Limits knowledge required to operate OpenStack.
      • Facilitates contributors jumping from one code base to another.
      • Checkpoint for license checks.
      • Reduce overall security exposure by limiting code we rely on.
    • Some feel this is a regression to the days of not having reliable packaging management. Containers could be lagging/missing critical security patches for example.
  • Full thread



OpenStack Developer Mailing List Digest April 2-8

SuccessBot Says

  • Ttx: Design Summit placeholder sessions pushed to the Austin official schedule.
  • Pabelanger: Launched our first ubuntu-xenial job with node pool!
  • Mriedem: Flavors are now in the Nova API database.
  • sridhar_ram: First official release of Tacker 0.3.0 for Mitaka is released!
  • Dhellmann: we have declared Mitaka released, congratulations everyone!
  • Tristanc: 54 PTL and 7 TC members elected for Newton.
  • Ajaeger: is ready for Mitaka – including new manuals and links to release notes.
  • Tell us yours via IRC with a message “#success [insert success]”.
  • All

Mitaka Release Is Out!

  • Great work everyone!
  • Read more about our 13th release! [1]
  • See release notes from projects for new features, bug fixes, upgrade notes. [2]

Recently Accepted API-WG Guidelines

  • Version discover guideline for API microversions [3]
  • Client interaction guideline for API microversions [4]
  • Versioning guideline for API microversions [5]
  • Unexpected attribute guideline [6]
  • Full thread

Results of the Technical Committee Election

  • Davanum Srinivas (dims)
  • Flavio Percoco (flaper87)
  • John Garbutt (johnthetubaguy)
  • Matthew Treinish (mtreinish)
  • Mike Perez (thingee)
  • Morgan Fainberg (morgan)/(notmorgan)
  • Thierry Carrez (ttx)
  • Full results [7]
  • Full thread

Cross-Project Session Schedule

  • Schedule posted [8].
  • If there’s a session you’re interested in, but can’t attend because of conflicting reasons, consider getting the conversation going early on the OpenStack Developer mailing list.
  • Full thread

OpenStack Developer Mailing List Digest March 26 – April 1

SuccessBot Says

  • Tonyb: Dims fixed the Routes 2.3 API break :)
  • pabelanger: migration from devstack-trusty to ubuntu-trusty complete!
  • Tell us yours via IRC with a message “#success [insert success]”.
  • All

Voting for the Technical Committee Election Is Now Open

  • We are selecting 7 TC members.
  • Confirmed candidates [1]
  • You are eligible to vote if you are a Foundation individual member [2] that also committed to one of the official projects [3] during the Liberty and Mitaka development.
  • Important dates:
    • Election open: 2016-04-01 00:00 UTC
    • Election close: 2016-04-07 23:59 UTC
  • More details on the election [4]
  • Full thread

Release Process Changes For Official Projects

  • The release team worked on automation for tagging and documenting [5] focusing on the projects with the release:managed tag.
  • Second phase is to expand to all projects.
  • The release team will be updating gerrit ACLs for projects to ensure they can handle releases and branching.
  • Instead of tagging releases and then recording them in the release repository, all official teams can use the release repo to request new releases.
  • If you’re not familiar with the release process, review the README file in the openstack/releases repo [6].
  • Full thread
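Under the new process, requesting a release means proposing a change to the openstack/releases repository rather than tagging directly. A deliverable file looks roughly like the following sketch; the values are illustrative, and the README [6] is the authoritative reference for the format:

```yaml
# deliverables/newton/example.yaml (illustrative values)
launchpad: example
releases:
  - version: 1.0.0
    projects:
      - repo: openstack/example
        hash: 0123456789abcdef0123456789abcdef01234567
```

Once the review merges, the release automation applies the tag and records the release, which is what lets the release team manage tagging and gerrit ACLs centrally.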

Service Catalog TNG Work in Mitaka … Next Steps

  • Mitaka included fact finding
  • public / admin / internal url
    • The notion of an internal URL is used in many deployments because there is a belief it means there is no charge for data transfer.
    • Some deployments make these all the same and use the network to ensure that internal connections hit internal interfaces.
    • Next steps:
      • We need a set of user stories built from what we currently have.
  • project_id optional in projects – good progress
    • project_id is hard coded into many urls for projects without any useful reason.
    • Nova demonstrated removing this in microversion 2.18.
    • A patch [7] is up for devstack to enable this.
    • Next steps:
      • Get other projects to remove project_id from their urls.
  • Service types authority
    • We agreed we needed a place to recognize service types [8].
    • The assumption that there might be a single URL which describes an API for a service is not an assumption we fulfill even for most services.
    • This bump led to some shifted effort onto the API reference to RST conversion work [9].
    • Next steps:
      • Finish API documentation conversion work.
      • Review patches for service type authority repo [10]
  • Service catalog TNG Schema
    • We have some early work setting up a schema based on the known knowns, and leaving some holes for the known unknowns until we get a few of these locked down (types / allowed urls).
    • Next steps:
      • Review current schema.
  • Weekly Meetings
    • The team has been meeting weekly in #openstack-meeting-cp until release crunch and people got swamped.
    • The meeting will be on hiatus until after the Austin summit, and then start back up the week after everyone gets back.
  • Full thread
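For reference, the public/internal/admin interfaces discussed above appear in a Keystone v3 service catalog entry roughly like the following sketch (hosts and ports are illustrative); note the compute URL without a hard-coded project_id, which is what Nova’s microversion 2.18 enables:

```json
{
  "type": "compute",
  "name": "nova",
  "endpoints": [
    {"interface": "public",   "url": "https://cloud.example.com:8774/v2.1"},
    {"interface": "internal", "url": "http://10.0.0.5:8774/v2.1"},
    {"interface": "admin",    "url": "http://10.0.0.5:8774/v2.1"}
  ]
}
```

Deployments that make all three URLs identical then rely on the network layer to route internal traffic over internal interfaces, as noted above.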

Oh Swagger, Where Art Thou?

  • The move from WADL to Swagger for API reference information has previously been communicated.
  • It has been discovered that Swagger doesn’t match all of our current API designs.
  • There is a compute server reference documentation patch [11] using Sphinx, RST to do a near copy of the API reference page.
    • There is consensus with Nova-API team, API working group and others to go forward with this.
  • We can still find uses for Swagger for some projects that match the specification really well.
  • Swagger for example doesn’t support:
    • Showing the changes between microversions.
    • The /actions resource some projects have, which allows multiple differing request bodies.
  • A new plan is coming, but for now the API reference and WADL files will remain in the api-site repository.
  • There will be a specification and presentation in the upstream contributor’s track about Swagger as a standard [12].
  • Full thread

Cross-Project Summit Session Proposals Due

The Plan For the Final Week of the Mitaka Release

  • We are approaching the final week of Mitaka release cycle.
  • Important dates:
    • March 31st was the final day for requesting release candidates for projects following the milestone release model.
    • April 1st is the last day for requesting full releases for service projects following the intermediary release model.
    • April 7th the release team will tag the most recent release candidate for each milestone.
    • The release team will reject or postpone requests for new library releases and new service release candidates by default.
    • Only truly critical bug fixes which cannot wait until post-release will be considered by the release team.
  • Full thread

[1] –

[2] –

[3] –

[4] –

[5] –

[6] –

[7] –

[8] –

[9] –

[10] –

[11] –

[12] –

[13] –

OpenStack Developer Mailing List Digest March 19-25

SuccessBot Says

  • redrobot: The Barbican API guides are now being published. [1]
  • jroll: ironic 5.1.0 released as the basis for stable/mitaka.
  • ttx: All RC1s up for milestones-driven projects.
  • zara: sends emails now!
  • noggin143: my first bays running on CERN production cloud with Magnum.
  • sdague: Grenade upgraded to testing stable/liberty -> stable/mitaka and stable/mitaka -> master.
  • Tell us yours via IRC with a message “#success [insert success]”.
  • All

PTL Election Conclusion and Results

  • Results are in, congrats to everyone! [2]
  • Appointed PTLs by the TC for leaderless Projects [3]:
    • EC2 API: Alexandre Levine
    • Stable Branch Maintenance: Tony Breeds
    • Winstackers: Claudiu Belu
  • Full thread

Candidate Proposals for Technical Committee Positions Are Now Open

  • Important dates:
    • Nominations open: 2016-03-25 00:00 UTC
    • Nominations close: 2016-03-31 23:59 UTC
    • Election open: 2016-04-01 00:00 UTC
    • Election close: 2016-04-07 23:59 UTC
  • More details on the election [4]
  • Full thread

Release countdown for week R-1, Mar 27 – Apr 1

  • Focus:
    • Project teams following the cycle-with-milestone model should be testing their release candidates.
    • Project teams following the cycle-with-intermediary model should have at least one Mitaka release and determine if another release is needed before the end of the Mitaka cycle.
    • All projects should be working on release-critical bugs.
  • General Notes:
    • Global-requirements list is still frozen.
    • If you need to change a dependency for release-critical-bug fix, provide enough details in the change request.
    • Master branches for all projects following cycle-with-milestone are open for Newton development work.
  • Release Actions:
    • Projects following cycle-with-intermediary without clear indication of cutting their final release:
      • bifrost
      • magnum
      • python-searchlightclient
      • senlin-dashboard
      • solum-infra-guestagent
      • os-win
      • cloudkitty
      • tacker
    • These projects should contact the release team or submit a release request to the releases repository as soon as possible. Please submit a request by Wednesday or Thursday at the latest.
      • After March 31st, feature releases will be counted as part of Newton cycle.
    • The release team will have reduced availability between R-1 and the summit due to travel. Use the dev mailing list to contact the team and include “[release]” in the subject.
  • Full thread

Bots and Their Effects: Gerrit, IRC, other

  • Bots are very handy for doing repetitive tasks.
  • Bots require permissions to execute certain actions, require maintenance to ensure they operate as expected, and create output which is music to some and noise to others.
  • From an infra meeting [5], this is what has been raised so far:
    • Permissions: having a bot on gerrit with +2 +A is something we would like to avoid
    • “unsanctioned” bots (bots not in infra config files) in channels shared by multiple teams (meeting channels, the -dev channel)
    • Forming a dependence on bots and expecting infra to maintain them ex post facto (example: a bot soren maintained, until soren didn’t)
    • Causing irritation for others due to the presence of an echoing bot which eventually infra will be asked or expected to mediate
    • Duplication of features, both meetbot and purplebot log channels and host the archives in different locations
    • Canonical bot doesn’t get maintained
  • It’s possible bots that infra currently maintains have features that folks are unaware of.
  • Bots that +2 and approve reviews can be a problem when taking into account schedules, outages, gate issues, etc.
  • SuccessBot, for example, is an added feature that takes advantage of the already existing status bot.
  • What are the reasons that people end up writing their own bots instead of contributing to the existing infrastructure bots when applicable?
  • Full thread

Semantic Versioning On Master Branches After Release Candidates

  • The release team assumes there are three options someone would choose from when installing things:
    • Tagged versions from any branch.
      • Clear, and always produces deployments that are reproducible, with versions distinct and increasing over time.
    • Untagged versions on a stable branch.
    • Untagged versions on the master branch.
      • Options 2 and 3 overlap around release cycle boundaries.
      • They produce the same version numbers in different branches for a short period of time.
      • The release team felt it was extremely unlikely that anyone would mix options 2 and 3, because that would make upgrades difficult.
  • Some distributions want to package things that are not tagged as releasable by contributors.
    • Consumers
      • They are in their development cycles and want/need to keep up with trunk throughout the whole cycle.
      • A lot of changes are introduced in a cycle with new features, deprecations, removals, non-backwards compatibility etc. With these continually provided up-to-date packages, they are able to test them right away.
    • It’s a lot of work to package things, and distributions want to do it quickly.
      • If distributions started packaging OpenStack only once the official stable release was out, it would take them several weeks or months to get a stable package out.
      • Projects that use packages to deploy are then delayed in their own releases while testing the packages they’re consuming (e.g. TripleO, Packstack, Kolla, Puppet-OpenStack).
  • Full thread
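
  • The version ambiguity discussed above can be sketched with a small, self-contained demonstration (this is an illustrative assumption, not the release team’s actual tooling; the tag `1.0.0` and the branch name `stable/mitaka` are hypothetical). Untagged commits on two branches cut from the same tag describe themselves relative to that same tag, so both report the same pre-release version until a new tag lands:

```shell
#!/bin/sh
# Illustrative sketch only: build a throwaway repo where master and a
# stable branch each carry one untagged commit past the same 1.0.0 tag.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial"
git tag 1.0.0
git branch stable/mitaka
git commit -q --allow-empty -m "master work"      # untagged commit on master
master_ver=$(git describe --tags)                 # 1.0.0-1-g<sha>
git checkout -q stable/mitaka
git commit -q --allow-empty -m "stable backport"  # untagged commit on stable
stable_ver=$(git describe --tags)                 # also 1.0.0-1-g<sha'>
echo "master: $master_ver"
echo "stable: $stable_ver"
```

  Both `git describe` outputs start with `1.0.0-1-g`, differing only in the commit hash, which is why tagged versions (option 1) are the only choice that is unambiguous across branches.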

Our Install Guides Only Cover Defcore – What About Big Tent?

  • Projects like Manila [6] and Magnum have recently been accepted into the install guides, but initially had issues because they aren’t considered by the DefCore working group.
    • With expansion of projects coming from big tent, the documentation team has projects requesting their install documentation to be accepted.
    • Maintaining and verifying the install documentation for each release is already a lot of work for the documentation team, even with just the currently accepted OpenStack projects.
  • Goals:
    • Make install guides easy for projects in the big tent to contribute to.
    • Avoid having the documentation team maintain all projects’ install documentation.
    • As an operator, I should be able to easily discover install documentation for projects in the big tent.
    • With accessible install documentation projects can hopefully have:
      • Improved adoption
      • More stability work from bug reports, since people are actually able to install and test the project.
  • Proposal: install documentation can live in a project’s repository so the team can maintain and update it.
    • Have all these documentation sources rendered to one location for easy discoverability.
  • Full thread

Technical Committee Highlights March 21, 2016

Long time, no see!

Poppy and our Open Core discussion

The Poppy team applied to add the project under OpenStack governance. Poppy, for those of you not familiar with it, provides CDN as a service. It’s a provisioning service – like other projects in OpenStack, such as Nova – but for CDNs. The overall proposal seemed fine except for one thing: there are no open source solutions for CDNs. This means Poppy provisions CDNs based on other commercial services, and it requires consumers of Poppy to have an account with one of those CDN services to be able to use it. This presents several issues from an OpenStack perspective. One of them, mentioned above, is that using Poppy requires clouds to rely on external CDNs. Another is that there is no good way to test the service in OpenStack’s gates, as there is no open source solution to test against. The OpenStack infra team won’t be subscribing to any of those CDN services to test Poppy, and neither will the Poppy team.

There were quite a few discussions on this topic, and the TC voted on whether the open core “issue” was critical enough to reject Poppy from the big tent. In the review, there are different points of view on whether Poppy is actually open core and whether it should be allowed into OpenStack’s big tent regardless of the lack of an open source CDN solution. Ultimately the TC decided to reject the Poppy proposal in a close vote, 7-6.

Mission statement, take 2

As Russell Bryant puts it well in this Foundation mailing list thread, the OpenStack mission statement has held up pretty well over the life of the project. Discussions started about updates to ensure we include some key themes as focus areas for our growing community: interoperability and end users’ needs. The OpenStack Technical Committee has created an iteration on the mission statement, and the board is discussing it as well. Take a look at the revisions so that our modifications can get buy-in across the community.

New projects

The OpenStack big tent welcomes the following official project teams:

  • Dragonflow, a distributed control plane implementation of Neutron that implements advanced networking services driven by the OpenStack Networking API.
  • Kuryr, a bridge between container framework networking models and the OpenStack networking abstraction.
  • Tacker, a lifecycle management tool providing Network Function Virtualization (NFV) Orchestration services and libraries.
  • EC2API, an EC2-compatible API for accessing OpenStack features.

New tag: stable:follows-policy

This new tag indicates which deliverables follow the stable policy. The existing `release:has-stable-branches` tag that had been used so far ended up only describing whether a deliverable has a branch called “stable/something”, and therefore did not properly indicate that the stable policies are being followed. The new tag aims to cover that area and should eventually completely supersede the existing tag. You can read more about this tag in the tag reference page.


This blog post was co-authored by Flavio Percoco and Thierry Carrez.