OpenStack Developer Mailing List Digest May 7-13

SuccessBot Says

  • Pabelanger: bare-precise has been replaced by ubuntu-precise. Long live DIB
  • bknudson: The Keystone CLI is finally gone. Long live openstack CLI.
  • Jrichli: swift just merged a large effort that started over a year ago that will facilitate new capabilities – like encryption
  • All

Release Count Down for Week R-20, May 16-20

  • Focus
    • Teams should have published summaries from summit sessions to the openstack-dev mailing list.
    • Spec writing
    • Review priority features
  • General notes
    • Release announcement emails will be tagged with ‘new’ instead of ‘release’.
    • Release cycle model tags now say explicitly that the release team manages releases.
  • Release actions
    • Release liaisons should add their name and contact information to this list [1].
    • New liaisons should understand release instructions [2].
    • Project teams that want to change their release model should do so before the first milestone in R-18.
  • Important dates
    • Newton 1 milestone: R-18 June 2
    • Newton release schedule [3]

Collecting Our Wiki Use Cases

  • Since the beginning, the community has used a wiki [4] as its default community information publication platform.
  • There’s a struggle with:
    • Keeping things up-to-date.
    • Preventing vandalism.
    • Old processes.
    • Projects that no longer exist.
  • This outdated information, which search engines continue to surface, can make the wiki confusing to use, especially for newcomers.
  • Various efforts have happened to push information out of the wiki to proper documentation guides like:
    • Infrastructure guide [5]
    • Project team guide [6]
  • Peer-reviewed reference websites.
  • There are a lot of use cases for which a wiki is a good solution, and we’ll likely need a lightweight publication platform like the wiki to cover those use cases.
  • If you use the wiki as part of your OpenStack work, make sure it’s captured in this etherpad [9].
  • Full thread

Supporting Go (continued)

  • Continuing from previous Dev Digest [10].
  • Before Go 1.5 (without -buildmode=shared), Go didn’t support the concept of shared libraries. As a consequence, when a library is upgraded, the release team has to trigger a rebuild of each and every reverse dependency.
  • In Swift’s case for looking at Go, it’s hard to write a network service in Python that shuffles data between the network and a block device and effectively uses all the hardware available.
    • Fork()’ing child processes using cooperative concurrency via eventlet has worked well, but managing all async operations across many cores and many drives is really hard. There’s no efficient interface for this in Python. We’re talking about efficient tools for the job at hand.
    • Eventlet, asyncio, or anything else single-threaded will have the same problem: filesystem syscalls can take a long time, blocking the calling thread. For example:
      • Call select()/epoll() to wait for something to happen with many file descriptors.
      • For each ready file descriptor, read it if it is readable; otherwise the kernel returns EWOULDBLOCK and we move on to the next file descriptor.
  • Designate team explains their reasons for Go:
    • MiniDNS is a component that, due to the way it works, is difficult to make major improvements to.
    • The component takes data and sends a zone transfer every time a record set gets updated. That is a full (AXFR) zone transfer where every record in a zone gets sent to each DNS server that end users can hit.
      • There is a DNS standard for incremental change, but it’s complex to implement, and can often end up reverting to a full zone transfer.
    • Ns[1-6] may be tens or hundreds of servers behind anycast IPs and load balancers.
    • Internal or external zones can be quite large. Think 200-300 MB.
    • A zone can have high traffic where a record is added/removed for each boot/destroy.
    • The Designate team is small, and after looking at the options and judging the amount of developer hours available, they decided on a different language.
  • Looking at Designate’s implementation, there are some low-hanging-fruit improvements that can be made:
    • Stop spawning a thread per request.
    • Stop instantiating Oslo config object per request.
    • Avoid 3 round trips to the database on every request. The majority of the request time here is not spent in Python. This data should be trivial to cache, since Designate knows when to invalidate the cached data.
      • In a real world use case, there could be a cache miss due to the shuffle order of multiple miniDNS servers.
  • The Designate team saw 10x improvement for 2000 record AXFR (without caching). Caching would probably speed up the Go implementation as well.
  • Go historically has poor performance with multiple cores [11].
    • The main advantage of the language could be its CSP concurrency model.
    • Twisted does this very well, but we as a community have consistently supported eventlet instead. Eventlet has a threaded programming model, which is poorly suited to Swift’s case.
    • PyPy got a 40% performance improvement over CPython for a benchmark of Twisted’s DNS component 6 years ago [12].
  • Right now our stack already has dependencies on C, Python, Erlang, Java, shell, etc.
  • End users emphatically do not care about the language API servers were written in. They want stability, performance and features.
  • The infrastructure-related issues with Go for reliable builds, packaging, etc. are being figured out [13].
  • Swift has tested running under PyPy with some conclusions:
    • Assuming production-ready stability of PyPy and OpenStack, everyone should use PyPy over CPython.
      • It’s just simply faster.
      • There are some garbage collector related issues to still work out in Swift’s usage.
      • There are a few patches that improve socket handling in Swift so that it runs better under PyPy.
    • PyPy only helps when you’ve got a CPU-constrained environment.
    • The GoLang targets in Swift are related to effective thread management syscalls, and IO.
    • See a talk from the Austin Conference about this work [14].
  • Full thread
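The select()/epoll() loop described in the Swift discussion above can be sketched with Python’s stdlib selectors module. This is a minimal illustration over a socketpair, not Swift or eventlet code; as the thread notes, this pattern works for sockets but not for filesystem syscalls, which block the calling thread regardless.

```python
import selectors
import socket

# Minimal sketch of the readiness-based loop described above:
# wait until descriptors are ready, then do non-blocking reads.
# A read that would still block raises BlockingIOError (EWOULDBLOCK),
# and we simply move on to the next descriptor.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)
sel.register(a, selectors.EVENT_READ)

b.sendall(b"hello")  # make one end readable

received = []
for key, _events in sel.select(timeout=1):
    try:
        received.append(key.fileobj.recv(4096))
    except BlockingIOError:
        pass  # kernel said EWOULDBLOCK: nothing to read after all

print(received)  # → [b'hello']
```

selectors picks epoll(), kqueue(), or select() depending on the platform, which is exactly the descriptor-readiness interface the thread is describing.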


OpenStack Developer Mailing List Digest April 23 – May 6

Success Bot Says

  • Sdague: nova-network is deprecated [1]
  • Ajaeger: OpenStack content on Transifex has been removed. Zanata has proven to be a stable platform for all translators, and thus Transifex is not needed anymore.
  • All

Backwards Compatibility Follow-up

  • Agreements from recent backwards compatibility for clients and libraries session:
    • Clients need to talk to all versions of OpenStack clouds.
    • Oslo libraries already do need to do backwards compatibility.
    • Some fraction of our deploys, between 1% and 50%, are trying to do in-place upgrades where, for example, Nova is upgraded first and Neutron later. But now Neutron has to work with the upgraded libraries from the Nova upgrade.
  • Should we support in-place upgrades? If we do, we need at least one or more versions of compatibility, where Mitaka Nova can run with Newton Oslo and client libraries.
    • If we don’t support in-place upgrades, deployment methods must be architected to avoid ever encountering a situation where a client or one of N services is upgraded alone in a single Python environment. All clients and services in a single Python environment must be upgraded together, or none of them.
  • If we decide to support in-place upgrades, we need to figure out how to test that effectively; it’s linear growth with the number of stable releases we choose to support.
  • If we decide not to, we have no further requirement to have any cross-over compatibility between OpenStack releases.
  • We still have to be backwards compatible on individual changes.
  • Full thread

Installation Guide Plans for Newton

  • Continuing from a previous Dev Digest [2]: the big tent is growing, and our documentation team would like projects to maintain their own installation documentation. This should be done while still providing the valid, working installation information and consistency the team strives for.
  • The installation guide team held a packed session at the summit and walked away with some solid goals to achieve for Newton.
  • Two issues being discussed:
    • What to do with the existing install guide.
    • Create a way for projects to write installation documentation in their own repository.
  • All guides will be rendered from individual repositories and appear in
  • The Documentation team has recommendations for projects writing their install guides:
    • Build on existing install guide architecture, so there is no reinventing the wheel.
    • Follow documentation conventions [3].
    • Use the same theme called openstackdocstheme.
    • Use the same distributions as the install guide does. Installation from source is an alternative.
    • Guides should be versioned.
    • RST is the preferred documentation format. RST is also easy for translations.
    • Common naming scheme: “X Service Install Guide” – where X is your service name.
  • The chosen URL format is
  • Plenty of work items to follow [4] and volunteers are welcome!
  • Full thread

Proposed Revision To Magnum’s Mission

  • From a summit discussion, there was a proposed revision to Magnum’s mission statement [5].
  • The idea is to narrow the scope of Magnum to allow the team to focus on making popular container orchestration engine (COE) software work great with OpenStack, allowing users to set up fleets of cloud capacity managed by COEs such as Swarm, Kubernetes, Mesos, etc.
  • Deprecate the /containers resource from Magnum’s API. Any new project may take on the goal of creating an API service that abstracts one or more COEs.
  • Full thread

Supporting the Go Programming Language

  • The Swift community has a git branch feature/hummingbird that contains some parts of Swift reimplemented in Go. [6]
  • The goal is to have a reasonably ready-to-merge feature branch by the Barcelona summit. Shortly after the summit, the plan is to merge the Go code into master.
  • An amended Technical Committee resolution will follow to suggest Go as a supported language in OpenStack projects [7].
  • Some Technical Committee members have expressed wanting to see technical benefits that outweigh the community fragmentation and increase in infrastructure tasks that result from adding that language.
  • Some open questions:
    • How do we run unit tests?
    • How do we provide code coverage?
    • How do we manage dependencies?
    • How do we build source packages?
    • Should we build binary packages in some format?
    • How to manage in tree documentation?
    • How do we handle log and message string translations?
    • How will DevStack install the project as part of a gate job?
  • Designate is also looking into moving a single component into Go.
    • It would be good to have two cases to help avoid baking any project specific assumptions into testing and building interfaces.
  • Full thread

Release Countdown for Week R-21, May 9-13

  • Focus
    • Teams should be focusing on wrapping up incomplete work left over from the end of the Mitaka cycle.
    • Announce plans from the summit.
    • Completing specs and blueprints.
  • General Notes
    • Project teams that want to change their release model tag should do so before the Newton-1 milestone. This can be done by submitting a patch to governance repository in the projects.yaml file.
    • Release announcement emails are being proposed to have their tag switched from “release” to “newrel” [8].
  • Release Actions
    • Release liaisons should add their name and contact information to this list [9].
    • Release liaisons should have their IRC clients join #openstack-release.
  • Important Dates
    • Newton 1 Milestone: R-18 June 2nd
    • Newton release schedule [10]
  • Full thread

Discussion of Image Building in Trove

  • A common question the Trove team receives from new users is how and where to get guest images to experiment with Trove.
    • Documentation exists in multiple places for this today [11][12], but things can still be improved.
  • Trove has a spec proposal [13] for using libguestfs approach to building images instead of using the current diskimage-builder (DIB).
    • All alternatives should be equivalent and interchangeable.
    • Trove already has elements for all supported databases using DIB, but these elements are not packaged for customer use. Doing this would be a small effort of providing an element to install the guest agent software from a fixed location.
    • We should understand the deficiencies, if any, in DIB before switching tool chains. This can be based on Trove’s and Sahara’s experiences.
  • The OpenStack Infrastructure team has been using DIB successfully for a while as it is a flexible tool.
    • By default Nova disables file injection [14]
    • DevStack doesn’t allow you to enable Nova file injection, and hard sets it off [15].
    • Allows bootstrapping with yum or debootstrap.
    • Pick the filesystem for an existing image.
  • Let’s fix the problems with DIB that Trove is having and avoid reinventing the wheel.
  • What are the problems with DIB, and how do they prevent Trove/Sahara users from building images today?
    • Libguestfs manipulates images in a clean helper VM created by libguestfs in a predictable way.
      • Isolation is something DIB gives up in order to provide speed/lower resource usage.
    • In-place image manipulation can occur (package installs, configuration declarations) without uncompressing or recompressing an entire image.
      • It’s trivial to make a DIB element which modifies an existing image in place.
    • DIB scripts’ configuration settings, passed in freeform environment variables, can be difficult to understand and document for new users. Libguestfs demands more formal parameter passing.
    • Ease of “just give me an image. I don’t care about twiddling knobs”.
      • OpenStack Infra team already has a wrapper for this [16].
  • Sahara has support for several image generation-related cases:
    • Packing an image pre-cluster spawn in Nova.
    • Building clusters from a “clean” operating system image post-Nova spawn.
    • Validating images after Nova spawn.
  • In a Sahara summit session, there was discussion of a plan to use libguestfs rather than DIB, with an intent to define a linear, idempotent set of steps to package images for any plugin.
  • Having two sets of image building code to maintain would be a huge downside.
  • What’s stopping us, a few releases down the line, from deciding that libguestfs doesn’t perform well and settling on yet another tool? Since DIB is an OpenStack project, Trove should consider supporting a standard way of building images.
  • Trove summit discussion resulted in agreement of advancing the image builder by making it easier to build guest images leveraging DIB.
    • Project repository proposals have been made [17][18]
  • Full thread


OpenStack Developer Mailing List Digest April 9-22

Success Bot Says

  • Clarkb: infra team redeployed Gerrit on a new larger server. Should serve reviews with fewer 500 errors.
  • danpb: wooohoooo, finally booted a real VM using nova + os-vif + openvswitch + privsep
  • neiljerram: Neutron routed networks spec was merged today; great job Carl + everyone else who contributed!
  • Sigmavirus24: Hacking 0.11.0 is the first release of the project in over a year.
  • Stevemar: dtroyer just released openstackclient 2.4.0 – now with more network commands \o/
  • odyssey4me: OpenStack-Ansible Mitaka 13.0.1 has been released!
  • All

One Platform – Containers/Bare Metal?

  • From the unofficial board meeting [1], an interesting topic came up: how to truly support containers and bare metal under a common API with virtual machines.
  • We want to underscore how OpenStack has an advantage by being able to provide both virtual machines and bare metal as two different resources when the “but the cloud should …” sentiment arises.
  • The discussion around “supporting containers” was different and was not about Nova providing them.
    • Instead work with communities on making OpenStack the best place to run things like Kubernetes and Docker swarm.
  • We want to be supportive of bare metal and containers, but the way we want to be supportive is different for each.
  • In the past, a common compute API was contemplated for Magnum; however, it was understood that the API would result in the lowest common denominator of all compute types and an exceedingly complex interface.
    • Projects like Trove that want to offer these compute choices without adding complexity within their own project can utilize solutions with Nova in deploying virtual machines, bare metal and containers (libvirt-lxc).
  • Magnum will be having a summit session [2] to discuss if it makes sense to build a common abstraction layer for Kubernetes, Docker swarm and Mesos.
  • There are expressed opinions that both native APIs and LCD APIs can co-exist.
    • Trove being an example of a service that doesn’t need everything a native API would give.
    • Migrate the workload from VM to container.
    • Support hybrid deployment (VMs & containers) of their application.
    • Bring containers (in Magnum bays) to a Heat template, and enable connections between containers and other OpenStack resources
    • Support containers to Horizon
    • Send container metrics to Ceilometer
    • Portable experience across container solutions.
    • Some people just want a container and don’t want the complexities of others (COEs, bays, baymodels, etc.)
  • Full thread

Delimiter, the Quota Management Library Proposal

  • At this point, there is a fair amount of objection to developing a service that manages quotas for all services. Discussion has moved to developing a library that services will use to manage their own quotas.
  • You don’t need a serializable isolation level; just use a compare-and-update-with-retries strategy. This will prevent even multiple writers from oversubscribing any resource, without a heavyweight isolation level.
    • The “generation” field in the inventories table is what allows multiple writers to ensure a consistent view of the data without needing to rely on heavy lock-based semantics in relational database management systems.
  • Reservation doesn’t belong in quota library.
    • Reservations are the concept of claiming some resource for a period of time.
    • Quota checking is returning whether a system can handle a request right now to claim a set of resources.
  • Key aspects of the Delimiter Library:
    • It’s a library, not a service.
    • Impose limits on resource consumption.
    • Will not be responsible for rate limiting.
    • Will not maintain data for resources. Projects will take care of keeping/maintaining data for the resources and resource consumption.
    • Will not have a concept of reservations.
    • Will fetch project quota from respective projects.
    • Will take into consideration whether a project is flat or nested.
  • Delimiter will rely on the concept of a generation-id to guarantee sequencing. The generation-id gives a point-in-time view of resource usage in a project. Projects consuming Delimiter will need to provide this information while checking or consuming quota. At present, Nova [3] has the concept of a generation-id.
  • Full thread
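The compare-and-update-with-retries strategy and the generation field described above can be sketched as follows. The QuotaStore class and consume helper here are hypothetical stand-ins, not Delimiter’s API; a real implementation would issue an UPDATE … WHERE generation = :expected against the inventories table rather than mutate an in-memory object.

```python
class QuotaStore:
    """Toy in-memory stand-in for the inventories table described above."""

    def __init__(self, limit):
        self.limit = limit
        self.used = 0
        self.generation = 0

    def read(self):
        return self.used, self.generation

    def compare_and_update(self, expected_gen, new_used):
        # Succeeds only if nobody else wrote since we read;
        # this is the row-level compare-and-swap the generation enables.
        if self.generation != expected_gen:
            return False
        self.used = new_used
        self.generation += 1
        return True

def consume(store, amount, max_retries=5):
    """Claim `amount` using compare-and-update with retries."""
    for _ in range(max_retries):
        used, gen = store.read()
        if used + amount > store.limit:
            raise RuntimeError("quota exceeded")
        if store.compare_and_update(gen, used + amount):
            return
    raise RuntimeError("too much contention")

store = QuotaStore(limit=10)
consume(store, 4)
consume(store, 5)
print(store.used)  # → 9
```

A concurrent writer would bump the generation between our read and our update, the compare fails, and we simply re-read and retry, so no resource can be oversubscribed even without serializable isolation.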

Newton Release Management Communication

  • Volunteers filling PTL and liaison positions are responsible for ensuring communication between project teams happen smoothly.
  • Email, for announcements and asynchronous communication.
    • The release team will use the “[release]” topic tag in the openstack-dev mailing list.
    • Doug Hellmann will send countdown emails with weekly updates on:
      • focuses
      • tasks
      • important upcoming dates
    • Configure your mail clients accordingly so that these messages are visible.
  • IRC, for time-sensitive interactions.
    • You should have an IRC bouncer set up and be available in the #openstack-release channel on Freenode. You should definitely be in there during deadline periods (the week before and the week of each deadline).
  • Written documentation, for relatively stable information.
    • The release team has published the schedule for the Newton cycle [4].
    • If your project has something unique to add to the release schedule, send patches to the openstack/release repository.
  • Please ensure the release liaison for your project has the time and ability to handle the communication necessary to manage your release.
  • Our release milestones and deadlines are date-based, not feature-based. When the date passes, so does the milestone. If you miss it, you miss it. A few projects ran into problems during Mitaka because of missed communications.
  • Full thread

OpenStack Client Slowness

  • In profiling the nova help command, it was noticed there was a fair bit of time spent in the pkg_resources module and its use of pyparsing. Could we avoid a startup penalty by not having to start a new Python interpreter for each command we run?
    • In tracing Devstack today with a particular configuration, it was noticed that the openstack and neutron commands run 140 times. If each one of those has a 1.5s overhead, we could potentially save 3½ minutes off Devstack execution time.
    • As a proof of concept, Daniel Berrange created an openstack-server command which listens on a UNIX socket for requests and then invokes the appropriate client entry point, such as OpenStackComputeShell.main. The nova, neutron and openstack commands would then call out to this openstack-server command.
    • Devstack results without this tweak:
      • real 21m34.050s
      • user 7m8.649s
      • sys 1m57.865s
    • Devstack results with this tweak:
      • real 17m47.059s
      • user 3m51.087s
      • sys 1m42.428s
  • Some notes from Dean Troyer for those who are interested in investigating this further:
    • OpenStack Client does not load any project client until it’s actually needed to make a REST call.
    • Timing on a help command includes a complete scan of all entry points to generate the list of commands.
    • The --time option lists all REST calls that properly go through our TimingSession object. That should be all of them, unless a library doesn’t use the session it is given.
    • Interactive mode can be useful to get timing on just the setup/teardown process without actually running a command.
  • Full thread
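The proof of concept above, paying the interpreter and pkg_resources startup cost once in a long-lived server fed by thin clients over a UNIX socket, can be sketched like this. The dispatch function is a made-up stand-in for routing into the real client shells:

```python
import os
import socket
import tempfile
import threading

def dispatch(command_line):
    # Hypothetical handler; the real server would route into the
    # already-imported client shells instead of forking a new interpreter.
    return "ran: " + command_line

def serve(sock_path, ready, stop):
    server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    server.bind(sock_path)
    server.listen(1)
    server.settimeout(0.1)  # poll so we can notice the stop flag
    ready.set()
    while not stop.is_set():
        try:
            conn, _ = server.accept()
        except socket.timeout:
            continue
        with conn:
            cmd = conn.recv(4096).decode()
            conn.sendall(dispatch(cmd).encode())
    server.close()

sock_path = os.path.join(tempfile.mkdtemp(), "openstack-server.sock")
ready, stop = threading.Event(), threading.Event()
t = threading.Thread(target=serve, args=(sock_path, ready, stop))
t.start()
ready.wait()

# A thin client: connect, send the command line, read the reply.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(sock_path)
client.sendall(b"server list")
reply = client.recv(4096).decode()
print(reply)  # → ran: server list
client.close()
stop.set()
t.join()
```

Each client invocation here costs only a socket round trip, which is the mechanism behind the roughly four-minute Devstack saving reported in the thread.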

Input Needed On Summit Discussion About Global Requirements

  • Co-installability of big tent projects is a huge cost in energy spent. Service isolation with containers, virtual environments, or different hosts would let us avoid having to solve this problem.
  • All-in-one installations are supported today, for example, because of development environments using Devstack.
  • Just like with the backwards compatibility library and client discussion, OpenStack services co-existing on the same host may share the same dependencies. Today we don’t guarantee things will work if you upgrade Nova to Newton and it upgrades clients/libraries shared with a Cinder service at Mitaka.
  • Devstack support for virtual environments was pretty much already there, but due to operator feedback, this was stopped.
  • Traditional distributions rely on the community being mindful of shared dependency versions across services, so that it’s possible to use apt/yum tools to install OpenStack easily.
    • According to the 2016 OpenStack user survey, 56% of deployments are using “unmodified packages from the operating systems”. [4]
  • Other distributions are starting to support container-based packages, where the constraint of one version of a library at a time goes away.
    • Regardless, the benefit of global requirements [5] is that they provide us a mechanism to encourage dependency convergence.
      • Limits knowledge required to operate OpenStack.
      • Facilitates contributors jumping from one code base to another.
      • Checkpoint for license checks.
      • Reduce overall security exposure by limiting code we rely on.
    • Some feel this is a regression to the days of not having reliable package management. Containers could be lagging or missing critical security patches, for example.
  • Full thread



OpenStack Developer Mailing List Digest April 2-8

SuccessBot Says

  • Ttx: Design Summit placeholder sessions pushed to the Austin official schedule.
  • Pabelanger: Launched our first ubuntu-xenial job with node pool!
  • Mriedem: Flavors are now in the Nova API database.
  • sridhar_ram: First official release of Tacker 0.3.0 for Mitaka is released!
  • Dhellmann: we have declared Mitaka released, congratulations everyone!
  • Tristanc: 54 PTL and 7 TC members elected for Newton.
  • Ajaeger: is ready for Mitaka – including new manuals and links to release notes.
  • Tell us yours via IRC with a message “#success [insert success]”.
  • All

Mitaka Release Is Out!

  • Great work everyone!
  • Read more about our 13th release! [1]
  • See release notes from projects for new features, bug fixes, upgrade notes. [2]

Recently Accepted API-WG Guidelines

  • Version discover guideline for API microversions [3]
  • Client interaction guideline for API microversions [4]
  • Versioning guideline for API microversions [5]
  • Unexpected attribute guideline [6]
  • Full thread

Results of the Technical Committee Election

  • Davanum Srinivas (dims)
  • Flavio Percoco (flaper87)
  • John Garbutt (johnthetubaguy)
  • Matthew Treinish (mtreinish)
  • Mike Perez (thingee)
  • Morgan Fainberg (morgan)/(notmorgan)
  • Thierry Carrez (ttx)
  • Full results [7]
  • Full thread

Cross-Project Session Schedule

  • Schedule posted [8].
  • If there’s a session you’re interested in, but can’t attend because of conflicting reasons, consider getting the conversation going early on the OpenStack Developer mailing list.
  • Full thread

OpenStack Developer Mailing List Digest March 26 – April 1

SuccessBot Says

  • Tonyb: Dims fixed the Routes 2.3 API break 🙂
  • pabelanger: migration from devstack-trusty to ubuntu-trusty complete!
  • Tell us yours via IRC with a message “#success [insert success]”.
  • All

Voting for the Technical Committee Election Is Now Open

  • We are selecting 7 TC members.
  • Confirmed candidates [1]
  • You are eligible to vote if you are a Foundation individual member [2] who also committed to one of the official projects [3] during the Liberty and Mitaka development cycles.
  • Important dates:
    • Election open: 2016-04-01 00:00 UTC
    • Election close: 2016-04-07 23:59 UTC
  • More details on the election [4]
  • Full thread

Release Process Changes For Official Projects

  • The release team worked on automation for tagging and documenting [5] focusing on the projects with the release:managed tag.
  • Second phase is to expand to all projects.
  • The release team will be updating gerrit ACLs for projects to ensure they can handle releases and branching.
  • Instead of tagging releases and then recording them in the release repository, all official teams can use the release repo to request new releases.
  • If you’re not familiar with the release process, review the README file in the openstack/releases repo [6].
  • Full thread

Service Catalog TNG Work in Mitaka … Next Steps

  • Mitaka included fact finding
  • public / admin / internal url
    • The notion of an internal URL is used in many deployments because there is a belief it means there is no charge for data transfer.
    • Some deployments make these all the same and use the network to ensure that internal connections hit internal interfaces.
    • Next steps:
      • We need a set of user stories built from what we currently have.
  • project_id optional in projects – good progress
    • project_id is hard coded into many urls for projects without any useful reason.
    • Nova demonstrated removing this in microversion 2.18.
    • A patch [7] is up for devstack to enable this.
    • Next steps:
      • Get other projects to remove project_id from their urls.
  • Service types authority
    • We agreed we needed a place to recognize service types [8].
    • The assumption that there might be a single URL which describes an API for a service is not an assumption we fulfill even for most services.
    • This bump led to some shifted effort on converting the API reference to RST [9].
    • Next steps:
      • Finish API documentation conversion work.
      • Review patches for service type authority repo [10]
  • Service catalog TNG Schema
    • We have some early work setting up a schema based on the known knowns, and leaving some holes for the known unknowns until we get a few of these locked down (types / allowed urls).
    • Next steps:
      • Review current schema.
  • Weekly Meetings
    • The team had been meeting weekly in #openstack-meeting-cp until release crunch hit and people got swamped.
    • The meeting will be on hiatus until after the Austin summit, and then start back up the week after everyone gets back.
  • Full thread
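To illustrate the project_id item above: many catalog URL templates embed the project id, which Nova’s microversion 2.18 removed the need for. The endpoint hostnames here are made up:

```python
# Hypothetical endpoints illustrating the change discussed above.
# Before: the catalog URL template hard-codes the project id.
old = "https://compute.example.com/v2/{project_id}/servers"
# After (Nova microversion 2.18 style): the project comes from the
# authenticated token, so the URL needs no substitution at all.
new = "https://compute.example.com/v2.1/servers"

print(old.format(project_id="abc123"))
print(new)
```

Dropping the substitution is what lets other projects serve one stable URL per service, which is the next step listed above.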

Oh Swagger, Where Art Thou?

  • It was previously communicated that we are moving from WADL to Swagger for API reference information.
  • It has been discovered that Swagger doesn’t match all of our current API designs.
  • There is a compute server reference documentation patch [11] using Sphinx, RST to do a near copy of the API reference page.
    • There is consensus with Nova-API team, API working group and others to go forward with this.
  • We can still find uses for Swagger for some projects that match the specification really well.
  • Swagger, for example, doesn’t support:
    • Showing the changes between microversions.
    • Projects that have an /actions resource allowing multiple differing request bodies.
  • A new plan is coming, but for now the API reference and WADL files will remain in the api-site repository.
  • There will be a specification and presentation in the upstream contributor’s track about Swagger as a standard [12].
  • Full thread
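The /actions item above is the sticking point for Swagger: one URL accepts many mutually exclusive request bodies, keyed by action name, which a single Swagger operation cannot describe cleanly. A small sketch using Nova-style action bodies:

```python
# Nova-style server actions: one resource, many differing bodies.
url = "/servers/{server_id}/action"
reboot = {"reboot": {"type": "SOFT"}}
resize = {"resize": {"flavorRef": "2"}}

def action_name(body):
    # The action is the single top-level key of the request body.
    (name,) = body.keys()
    return name

print([action_name(b) for b in (reboot, resize)])  # → ['reboot', 'resize']
```

A Swagger operation describes one request schema per URL and method, so dispatching on the body’s top-level key like this has no natural representation there.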

Cross-Project Summit Session Proposals Due

The Plan For the Final Week of the Mitaka Release

  • We are approaching the final week of Mitaka release cycle.
  • Important dates:
    • March 31st was the final day for requesting release candidates for projects following the milestone release model.
    • April 1st is the last day for requesting full releases for service projects following the intermediary release model.
    • April 7th the release team will tag the most recent release candidate for each milestone.
    • The release team will reject or postpone requests for new library releases and new service release candidates by default.
    • Exceptions will be determined by the release team, and only for truly critical bug fixes which cannot wait until after the release.
  • Full thread

[1] –

[2] –

[3] –

[4] –

[5] –

[6] –

[7] –

[8] –

[9] –

[10] –

[11] –

[12] –

[13] –

OpenStack Developer Mailing List Digest March 19-25

SuccessBot Says

  • redrobot: The Barbican API guides are now being published. [1]
  • jroll: ironic 5.1.0 released as the basis for stable/mitaka.
  • ttx: All RC1s up for milestones-driven projects.
  • zara: sends emails now!
  • noggin143: my first bays running on CERN production cloud with Magnum.
  • sdague: Grenade upgraded to testing stable/liberty -> stable/mitaka and stable/mitaka -> master.
  • Tell us yours via IRC with a message “#success [insert success]”.
  • All

PTL Election Conclusion and Results

  • Results are in, congrats to everyone! [2]
  • Appointed PTLs by the TC for leaderless Projects [3]:
    • EC2-API: Alexandre Levine
    • Stable Branch Maintenance: Tony Breeds
    • Winstackers: Claudiu Belu
  • Full thread

Candidate Proposals for Technical Committee Positions Are Now Open

  • Important dates:
    • Nominations open: 2016-03-25 00:00 UTC
    • Nominations close: 2016-03-31 23:59 UTC
    • Election open: 2016-04-01 00:00 UTC
    • Election close: 2016-04-07 23:59 UTC
  • More details on the election [4]
  • Full thread

Release countdown for week R-1, Mar 27 – Apr 1

  • Focus:
    • Project teams following the cycle-with-milestone model should be testing their release candidates.
    • Project teams following the cycle-with-intermediary model should have at least one Mitaka release and determine if another release is needed before the end of the Mitaka cycle.
    • All projects should be working on release-critical bugs.
  • General Notes:
    • Global-requirements list is still frozen.
    • If you need to change a dependency for release-critical-bug fix, provide enough details in the change request.
    • Master branches for all projects following cycle-with-milestone are open for Newton development work.
  • Release Actions:
    • Projects following cycle-with-intermediary without clear indication of cutting their final release:
      • bifrost
      • magnum
      • python-searchlightclient
      • senlin-dashboard
      • solum-infra-guestagent
      • os-win
      • cloudkitty
      • tacker
    • These projects should contact the release team or submit a release request to the releases repository as soon as possible. Please submit a request by Wednesday or Thursday at the latest.
      • After March 31st, feature releases will be counted as part of the Newton cycle.
    • The release team will have reduced availability between R-1 and the summit due to travel. Use the dev mailing list to contact the team and include “[release]” in the subject.
  • Full thread

Bots and Their Effects: Gerrit, IRC, other

  • Bots are very handy for doing repetitive tasks.
  • They require permissions to execute certain actions, require maintenance to ensure they operate as expected, and create output which is music to some and noise to others.
  • From an infra meeting [5], this is what has been raised so far:
    • Permissions: having a bot on gerrit with +2 +A is something we would like to avoid
    • “unsanctioned” bots (bots not in infra config files) in channels shared by multiple teams (meeting channels, the -dev channel)
    • Forming a dependence on bots and expecting infra to maintain them ex post facto (for example: a bot soren maintained, until soren didn’t)
    • Causing irritation for others due to the presence of an echoing bot which eventually infra will be asked or expected to mediate
    • Duplication of features: both meetbot and purplebot log channels and host the archives in different locations
    • Canonical bot doesn’t get maintained
  • It’s possible bots that infra currently maintains have features that folks are unaware of.
  • Bots that +2 reviews and approve them can be a problem when taking into account of schedules, outages, gate issues, etc.
  • The success bot, for example, is an added feature that takes advantage of the already existing status bot.
  • What are the reasons that people end up writing their own bots instead of contributing to the existing infrastructure bots when applicable?
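As a hedged illustration of how little machinery such a feature needs, here is a sketch of the kind of message matching the success bot performs on top of the status bot (this is not the actual infra bot code; the function name is invented):

```python
import re

# Messages like "#success Glance is the first project to issue RC1!" are
# picked up, and the free-text part is recorded (the real bot posts it to
# a wiki page).
SUCCESS_RE = re.compile(r"^#success\s+(.+)$")

def extract_success(message: str):
    """Return the success text if the message invokes the bot, else None."""
    m = SUCCESS_RE.match(message)
    return m.group(1) if m else None

print(extract_success("#success Glance is the first project to issue RC1!"))
print(extract_success("just a normal IRC message"))  # None
```

The point is that the heavy lifting (IRC connection, channel permissions, wiki publishing) already lives in the maintained status bot; the new feature is essentially one matcher and one handler.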
  • Full thread

Semantic Version On Master Branches After Release Candidates

  • The release team assumes someone installing things would choose one of three options:
    • Tagged versions from any branch.
      • Clear, and always produces deployments that are reproducible, with versions distinct and increasing over time.
    • Untagged versions on a stable branch.
    • Untagged versions on the master branch
      • Options 2 and 3 overlap around release cycle boundaries.
      • They can produce the same version numbers in different branches for a short period of time.
      • The release team felt it was extremely unlikely that anyone would mix options 2 and 3, because that would make upgrades difficult.
  • Some distributions want to package things that are not tagged as releasable by contributors.
    • Consumers
      • They are in their development cycles and want/need to keep up with trunk throughout the whole cycle.
      • A lot of changes are introduced in a cycle with new features, deprecations, removals, non-backwards compatibility etc. With these continually provided up-to-date packages, they are able to test them right away.
    • It’s a lot of work to package things, and distributions want to do it quickly.
      • If distributions started packaging OpenStack only when the official stable release would be out, it would take distributions several weeks/months to get a stable package out.
      • Projects that deploy from packages (e.g. TripleO, Packstack, Kolla, Puppet-OpenStack) are then delayed in their own releases while testing the packages they’re consuming.
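To see why options 2 and 3 collide around a cycle boundary, consider how an untagged build derives its version from the last tag it can see. The sketch below is illustrative only; it mimics the idea behind pbr-style dev versions, not pbr’s actual algorithm, and the tag value is hypothetical:

```python
def dev_version(last_tag: str, commits_since_tag: int) -> str:
    """Derive a development version string from the most recent tag."""
    return f"{last_tag}.dev{commits_since_tag}"

# Right after stable/mitaka is branched, both it and master sit some
# commits past the *same* last tag, so untagged builds on either branch
# can report an identical version until a new tag lands on one of them.
print(dev_version("13.0.0.0rc1", 3))  # built from stable/mitaka
print(dev_version("13.0.0.0rc1", 3))  # built from master: same string!
```

This is why the release team considered mixing the two untagged options risky: two different code states would carry indistinguishable version numbers.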
  • Full thread

Our Install Guides Only Cover Defcore – What About Big Tent?

  • Recently, projects like Manila [6] and Magnum have been accepted into the install guides, but initially had issues because they aren’t considered by the DefCore working group.
    • With the expansion of projects coming from the big tent, the documentation team has projects requesting that their install documentation be accepted.
    • Maintaining and verifying the install documentation for each release is already a lot of work for the documentation team with the currently accepted OpenStack projects.
  • Goals:
    • Make install guides easy to contribute for projects in the big tent.
    • Not end up having the documentation team maintain all projects’ install documentation.
    • As an operator, I should be able to easily discover install documentation for projects in the big tent.
    • With accessible install documentation projects can hopefully have:
      • Improved adoption
      • More stable work from bug reports with people actually able to install and test the project.
  • Proposal: install documentation can live in a project’s repository so the project can maintain and update it.
    • Have all these documentation sources rendered to one location for easy discoverability.
  • Full thread

Technical Committee Highlights March 21, 2016

Long time, no see!

Poppy and our Open Core discussion

The Poppy team applied to add the project under OpenStack governance. Poppy, for those of you not familiar with it, provides CDN as a service. It’s a provisioning service – like other projects in OpenStack, such as Nova – but for CDNs. The overall proposal seemed to be fine except for one thing: there are no open source solutions for CDNs. This means Poppy provisions CDNs based on other commercial services, and it requires consumers of Poppy to have an account in one of those CDN services to be able to use it. This presents several issues from an OpenStack perspective. One of them is the one mentioned before: using Poppy requires clouds to rely on other CDNs. Another issue is that there is no good way to test the service in OpenStack’s gates, as there’s no open source solution for it. The OpenStack infra team won’t be subscribing to any of those CDN services for testing Poppy, and neither will the Poppy team.

There were quite a few discussions on this topic and the TC voted on whether the open core “issue” was critical enough to allow or reject Poppy from the big tent. In the review, there are different points of views on whether Poppy is actually Open Core or not and whether it should be allowed into OpenStack’s big tent regardless of the lack of an open source CDN solution. Ultimately the TC decided to reject the Poppy proposal in a close vote, 7-6.

Mission statement, take 2

As Russell Bryant puts it well in this Foundation mailing list thread, the OpenStack mission statement has held up pretty well for the life of the project. Discussions started about updates to ensure we include some key themes as focus areas for our growing community: interoperability and end users’ needs. The OpenStack technical committee has created an iteration on the mission statement, and the board is discussing it as well. Take a look at the revisions so that our modifications can get buy-in across the community.

New projects

The OpenStack big tent welcomes the following official project teams:

  • Dragonflow, a distributed control plane implementation of Neutron that implements advanced networking services driven by the OpenStack Networking API.
  • Kuryr, a bridge between container framework networking models and the OpenStack networking abstraction.
  • Tacker, a lifecycle management tool providing Network Function Virtualization (NFV) Orchestration services and libraries.
  • EC2API, provides an EC2-compatible API for accessing OpenStack features.

New tag: stable:follows-policy

This new tag allows for indicating which deliverables follow the stable policy. The existing `release:has-stable-branches` tag that had been used so far ended up only describing if a deliverable has a branch called “stable/something”, and therefore did not properly indicate that stable policies are being followed. The new tag aims to cover that area and should eventually completely supersede the existing tag. You can read more about this tag in the tag reference page.


This blog post was co-authored by Flavio Percoco and Thierry Carrez.

OpenStack Developer Mailing List Digest March 12-18

SuccessBot Says

  • Bknudson: we got rid of keystone CLI [in favor of OpenStack Client]
  • jrichli: it has been shown that Swift encryption can pass all functional tests.
  • Bauzas: only a very few Nova changes were missing a reno file, the team is now super-trained on getting them.
  • Odyssey4me: OpenStack-Ansible now has a Designate role ready for testing [1].
  • ttx: Glance is the first project to issue RC1!
  • Mugsie: mlavalle completed the Nova/Neutron/Designate DNS Integration along with docs + clients.
  • Odyssey4me: OpenStack-Ansible has released Kilo 11.2.11. It’s the first time that we’ve used the release team for a release and we love it!
  • Odyssey4me: OpenStack-Ansible Liberty 12.0.8 has been released.
  • Tell us yours via IRC with a message “#success [insert success]”.
  • All Successes

Current PTL Election Status

  • Important dates:
    • Election open: 2016-03-18 00:00 UTC
    • Election close: 2016-03-24 23:59 UTC
  • Projects with only one candidate: 41
  • Projects with no PTL candidates:
    • EC2-API
    • Stable Branch Maintenance
    • Winstackers
  • The TC will appoint a new PTL for projects without a candidate [2]
  • Confirmed Candidates [3]

Quotas – Service vs. Library

  • There is a spec for cross-project Quota work [4] that is seeking feedback to move ahead as a service or library.
  • Service:
    • New project to manage quotas for all projects that use the service.
    • It will be responsible for handling the enforcement, management and DB upgrades of the quotas logic for all.
    • However, all projects would have a big dependency on this one service.
  • Library – two ways:
    • Does not deal with database models
      • Perhaps an ABC (abstract base class), or even a few standard implementations, that can be imported into a project’s space.
      • The project would have its own API for quotas, and drivers would enforce different quota types (e.g. a flat quota driver or a hierarchical quota driver) with custom/project-specific logic.
      • The project maintains its own DB and upgrades.
    • A library that has models for DB tables that the project can import from.
      • Projects will have a handy outline of what the tables should look like.
      • The project has its own API and implements drivers in-tree by importing this semi-defined structure.
      • The project maintains its own upgrades, but will be somewhat influenced by the common repo.
  • Or avoid all of this and simply give guidelines.
  • A service has been proposed in the past with projects like Boson [5].
  • Tim Bell suggests that a library would initially be a good approach.
    • If we can’t agree on a library, we’re unlikely to agree on a service.
    • Would allow for consistent implementation of nested and user quotas.
  • For projects like Trove that need a consistent lock on quotas across all projects, there are race condition issues in projects like Nova that need to be solved first.
  • The main issue with doing a library that was raised in a previous summit was how to tie in database table management with the existing tables owned by a project. While this is not impossible to solve, we need to think about which tools can help with that.
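The first library option above can be sketched as an abstract base class plus standard driver implementations that a project imports. The names here (QuotaDriver, FlatQuotaDriver) are hypothetical, not from the spec:

```python
import abc

class QuotaDriver(abc.ABC):
    """Base class a consuming project would import from the quota library."""

    @abc.abstractmethod
    def check(self, project_id: str, resource: str, requested: int) -> bool:
        """Return True if the request fits within the project's quota."""

class FlatQuotaDriver(QuotaDriver):
    """One standard implementation: a single per-project limit, no hierarchy."""

    def __init__(self, limits: dict, usage: dict):
        self._limits = limits  # {(project, resource): limit}
        self._usage = usage    # {(project, resource): current usage}

    def check(self, project_id, resource, requested):
        key = (project_id, resource)
        return self._usage.get(key, 0) + requested <= self._limits.get(key, 0)

driver = FlatQuotaDriver({("p1", "instances"): 10}, {("p1", "instances"): 8})
print(driver.check("p1", "instances", 2))  # True: 8 + 2 <= 10
print(driver.check("p1", "instances", 3))  # False: would exceed the limit
```

In this model each project keeps its own API, database, and upgrades, and only the enforcement interface is shared; a hierarchical driver would subclass the same ABC.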
  • Full thread

OpenStack Developer Mailing List Digest March 5-11

SuccessBot Says

  • Ajaeger: All jobs moved from bare-trusty to ubuntu-trusty.
  • Clarkb: Infra is running logstash 2.0 now
  • Tell us yours via IRC with a message “#success [insert success]”.
  • All Successes


  • API guidelines ready for review:
    • Header non-proliferation [1]
    • Client interaction guideline for microversions [2]

Election Season, PTL and TC

  • PTL elections:
    • Important dates:
      • Nominations open: 2016-03-11 00:00 UTC
      • Nominations close: 2016-03-17 23:59 UTC
      • Election open: 2016-03-18 00:00 UTC
      • Election close: 2016-03-24 23:59 UTC
    • Every project team must elect a PTL every 6 months.
    • More info and how to submit your candidacy [3].
  • TC elections:
    • Important dates:
      • Nominations open: 2016-03-25 00:00 UTC
      • Nominations close: 2016-03-31 23:59 UTC
      • Election open: 2016-04-01 00:00 UTC
      • Election close: 2016-04-07 23:59 UTC
    • Under the rules of the TC charter [4] we renew 7 TC seats. Seats are valid for one year.
    • More info and how to submit your candidacy [5].
  • Full thread

The stable:follows-policy Tag Is Official, Projects Need To Start Applying For It

  • This is official in the governance documents [6].
  • Projects that follow the stable branch policy [7] should start applying.
  • Full thread


Release Countdown For Week R-3, March 14-18

  • Focus:
    • Project teams following the cycle-with-milestone model:
      • Preparing their first Mitaka release candidate this week.
      • This should be tagged using (X.Y.Z.0rc1) as soon as the pressure to unfreeze master is stronger than the cost of backporting bugfixes.
      • The release team will create stable branches from the release candidate tag points.
    • Project teams following the cycle-with-intermediary model
      • Ensure you have at least one Mitaka release.
      • Determine if you need another release before the end of the Mitaka cycle.
    • All feature freeze exceptions that haven’t landed at this point should wait until Newton.
  • General Notes:
    • The global requirements list is frozen. If you need to change a dependency, for a bug fix, please provide enough detail in the change request to allow the requirements review team to evaluate the change.
    • User-facing strings are frozen to allow the translation team time to finish their work.
  • Release Actions:
    • The release team has started creating the stable/mitaka branches for libraries.
    • Follow-up on the mailing list thread [8] to acknowledge and approve the version number to use to create the branch.
      • This only includes projects with release:managed tag.
      • Other projects can post on the thread to request their own branches.
  • Important Dates:
    • RC target week: R-1, March 28 – April 1
    • Mitaka final release: April 4-8
    • Mitaka release schedule [9].
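For teams new to the process, the RC tagging and branching steps look roughly like the following, shown in a scratch repository. The 13.0.0.0rc1 version is just an example, and real release tags are created and signed by the release team via the releases repository:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email "dev@example.com"
git config user.name "Example Dev"
git commit -q --allow-empty -m "mitaka development work"

# Tag the first release candidate on master...
git tag -a 13.0.0.0rc1 -m "Mitaka RC1"
# ...then the stable branch is created from that tag point, leaving
# master open for Newton development.
git branch stable/mitaka 13.0.0.0rc1

git tag --list
git branch --list "stable/*"
```

Any last-minute fixes land on master first and are backported to stable/mitaka, where later RCs are tagged until the final release.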
  • Full thread

Reminder: WSME Is Not Being Actively Maintained

  • Chris Dent and Lucas Gomes have been actively verifying bug fixes and keeping things going with WSME, but no longer have the interest or the time to continue. It was also found that WSME never really reached a state where any of the following are true:
    • The WSME code is easy to understand and maintain.
    • WSME provides correct handling of HTTP (notably response status and headers).
    • WSME has an architecture that is suitable for creating modern Python-based web applications.
  • There’s a suggestion for the 24 different OpenStack projects that are using it to move to something else.
  • One big reason for choosing WSME earlier was that it had support for both XML and JSON without application code needing to do anything explicitly.
    • The community has decided to stop providing XML API support and some other tools have been used instead to provide parsing and validation features similar to WSME:
      • JSONSchema
      • Voluptuous
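To show the declarative style that JSONSchema and Voluptuous offer in WSME’s place, here is a toy validator in plain Python. It is neither library’s actual API, just the shape of the idea: describe the expected structure as data, then check requests against it:

```python
def validate(schema: dict, data: dict) -> list:
    """Return a list of error strings; an empty list means the data is valid."""
    errors = []
    for field, expected_type in schema.items():
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

# A hypothetical "create server" request body:
server_schema = {"name": str, "flavor": str, "count": int}
print(validate(server_schema, {"name": "vm1", "flavor": "m1.small", "count": 2}))  # []
print(validate(server_schema, {"name": "vm1", "count": "two"}))  # two errors
```

The real libraries add nesting, formats, and defaults on top of this, but the application keeps full control of HTTP handling, which is what WSME struggled with.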
  • Full thread

OpenStack Developer Mailing List Digest Feb 27 – March 4

SuccessBot Says

  • ttx: Mitaka-3 is done.
  • Odyssey4me: OpenStack-Ansible Liberty 12.0.7 is released [1].
  • johnthetubaguy: Nova is down to four pending blueprints for feature freeze now [2], sort of one day left. Better than it was this morning at least.
  • Russellb: Got a set of OVS flows working in OVN that applies security group changes immediately to existing connections.
  • Tell us yours via IRC with a message “#success [insert success]”.
  • All


  • Quotas and Nested Quotas Working group

Outreachy May-Aug 2016: Call For Funding and Mentors

  • Outreachy [5] helps people from groups underrepresented in free and open source software get involved by matching interns with established mentors in the upstream community.
  • We have 10 volunteer mentors for OpenStack this next cycle (May 23-August 23 2016).
    • Learn more and apply to be a mentor [6]
  • Potential sponsors have reached out, but we need more due to the increase in applicants.
    • Each intern is $6,500 for the three-month program.
    • The OpenStack Foundation has confirmed participation.
    • Learn more and apply to be a sponsor [7].
  • Regardless, help spread the word!
  • Full thread

Changing Microversion Headers

  • The API working group would like to change the format of headers used for microversions to make them more future proof before too many projects are using them.
    • Proposed guideline [8].
  • This came up in another guide for header non-proliferation [9].
  • After plenty of discussion, and with projects already deploying microversions (Nova, Ironic, Manila), the proposal is to change the basic format from:
    • X-OpenStack-Nova-API-Version: 2.11
    • OpenStack-Compute-API-Version: 2.11
  • To:
    • OpenStack-API-Version: compute 2.11
  • This allows us to use one header name for multiple services and avoids some of the problems described in the header non-proliferation guideline [9].
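As a sketch of the proposed scheme (the parser below is illustrative, not part of the guideline), one header value carries both the service type and the microversion:

```python
def parse_api_version(value: str) -> tuple:
    """Split a value like 'compute 2.11' into ('compute', (2, 11))."""
    service, _, version = value.strip().partition(" ")
    major, minor = version.split(".")
    return service, (int(major), int(minor))

# Old style needed a distinct header name per service, e.g.
#   X-OpenStack-Nova-API-Version: 2.11
# Proposed style: a single header name shared by all services.
headers = {"OpenStack-API-Version": "compute 2.11"}
print(parse_api_version(headers["OpenStack-API-Version"]))  # ('compute', (2, 11))
```

Because the service name travels inside the value, intermediaries and clients only ever need to know one header name, no matter how many services adopt microversions.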
  • Full thread

OpenStack Contributor Awards

  • The Foundation would like to introduce some informal, quirky awards to recognize the extremely valuable work that we all do to make OpenStack excel.
  • With many different areas to celebrate, there are a few main chunks of the community that need a little love:
    • Those who might not be aware that they are valued, particularly new contributors
    • Those who are the active glue that binds the community together
    • Those who share their hard-earned knowledge with others and mentor
    • Those who challenge assumptions, and make us think
  • Nominate someone who you think is deserving of an award [10]!
  • Full thread

Status Of Python 3 In OpenStack Mitaka

  • 13 services were ported to Python 3 during the Mitaka cycle: Cinder, Glance, Heat, Horizon, etc.
  • 9 services still need to be ported
  • Next Milestone: Functional and integration tests
  • “Ported to Python 3” means that all unit tests pass on Python 3.4 which is verified by a voting gate job. It is not enough to run applications in production with Python 3. Integration and functional tests are not run on Python 3 yet.
  • Read the full status post [11] by Victor Stinner.
  • Join Freenode channel #openstack-python3 to discuss and help out!
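The voting py34 gate job mentioned above typically maps to a tox environment along these lines. This is a generic sketch, not any particular project’s actual tox.ini; paths and test commands vary per project:

```ini
[tox]
envlist = py27,py34,pep8

[testenv:py34]
basepython = python3.4
deps = -r{toxinidir}/requirements.txt
       -r{toxinidir}/test-requirements.txt
commands = python -m unittest discover
```

Once this environment passes reliably, the gate job can be flipped from non-voting to voting, which is what “ported to Python 3” means for a project in this status report.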
  • Full thread