Technical Committee Highlights November 27, 2015

Welcome back from Tokyo. While there, I did not realize that Tokyo has a three-dimensional subway map, but I sure loved traveling on the subway.

Welcoming the latest projects to OpenStack

Speaking of amazing cities and their subway maps, we should mention the growing list of OpenStack projects. We welcome these projects to OpenStack governance since the OpenStack Summit.

    • Monitoring – both OpenStack and its resources: monasca
    • Backups for file systems using OpenStack: freezer
    • Deployment for OpenStack: fuel
    • Cluster management service for Compute and Orchestration: senlin
    • Integrate Hyper-V, Windows and related components into OpenStack: winstackers

During these last weeks, the TC also received other project review requests that were put on hold until those projects and/or teams are more mature and ready to join the Big Tent.

Reports from TC Working Groups

Project Team Guide

The Project Team Guide team held a session back in Tokyo to discuss the next steps for this project. As a result of that session, more content will be created (or moved from the wiki): add community best practices, detail the benefits and trade-offs of the various release models, introduce deliverables and tags (as maintained in the governance repo’s projects.yaml), detail what common infrastructure projects can build on, and so on.

Communications Group

The communications working group (the one that brings these blog posts to you) will continue to operate under the same model. Announcements, summaries and communications will be sent out as they have been during the last cycle. Remember that feedback is always welcome and the group is looking for ways to improve. Talk back to us, we’re listening!

Project Tags

These are the latest new project tags created by the Technical Committee.

    • team:single-vendor: A new tag was added to communicate when a project team is currently driven by a single organization. There was some discussion about using the term “vendor” versus “organization”, but the intent is to signal the opposite of diversity in the team’s makeup.
    • assert:supports-upgrade: A new tag has been added to communicate when a project supports upgrades. Teams should apply this tag to their project if they assert they intend to support ongoing upgrades.
    • assert:supports-rolling-upgrade: A new tag has been added to communicate when a project supports rolling upgrades. Teams should apply this tag to their project if they assert that operators can expect to perform rolling upgrades of their project, where the service can remain running while the upgrade is performed.

OpenStack Developer Mailing List Digest November 21-27

Success Bot Says

  • vkmc: We got 7 new interns for the Outreachy program December 2015 to March 2016 round.
  • bauzas: Reno in place for Nova release notes.
  • AJaeger: We now have Japanese Install Guides published for Liberty [1].
  • robcresswell: Horizon had a bug day! We made good progress on categorizing new bugs and removing old ones, with many members of the community stepping up to help.
  • AJaeger: The OpenStack Architecture Design Guide has been converted to RST [2].
  • AJaeger: The Virtual Machine Image guide has been converted to RST [3].
  • AJaeger: Japanese Networking Guide is published as a draft [4].
  • Tell us yours via IRC with a message “#success [insert success]”.

Release countdown for week R-18, Nov 30 – Dec 4

  • All projects following the cycle-with-milestones release model should be preparing for the milestone tag.
  • Release Actions:
    • All deliverables must have Reno configured before adding a Mitaka-1 milestone tag.
    • Use openstack/releases repository to manage the Mitaka-1 milestone tags.
    • As a one-time change, we will simplify how versions are specified for projects by using only tags, instead of the version entry in setup.cfg.
  • Stable release actions: Review stable/liberty branches for patches that have landed since the last release and determine if your deliverables need new tags.
  • Important dates:
    • Deadline for requesting a Mitaka-1 milestone tag: December 3rd
    • Mitaka-2: Jan 19-21
    • Mitaka release schedule [5]

Common OpenStack ‘Third-Party’ CI Solution – DONE!

  • Ramy Asselin, who has been spearheading the work on a common third-party CI solution, announces that it is done!
    • This solution uses the same tools and scripts as the upstream Jenkins CI solution.
    • The documentation for setting up a third-party CI system on 2 VMs (1 private VM that runs the CI jobs, and 1 public VM that hosts the log files) is now available here [6] or [7].
    • There are a number of companies today using this solution for their third-party CI needs.

Process Change For Closing Bugs When Patches Merge

  • Today, when a patch merges with ‘Closes-Bug’ in the commit message, the associated bug is marked ‘Fix Committed’ to indicate that the fix is committed but not yet in a release.
  • The release team uses automated tools to move bugs from ‘Fix Committed’ to ‘Fix Released’, but they’re not reliable due to Launchpad issues.
  • Proposal to improve the reliability of the automated tools: patches with ‘Closes-Bug’ in the commit message will mark the associated bug as ‘Fix Released’ instead of ‘Fix Committed’.
  • Doug would like this to be in effect next week.
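
For illustration, a rough sketch of what such automation could look like via launchpadlib; the application name and bug number are placeholders, and this is not the release team’s actual tooling.

    # Sketch only: flip "Fix Committed" bug tasks to "Fix Released" via launchpadlib.
    # The application name and bug number are placeholders, not the real tooling.
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with('fix-released-sketch', 'production')
    bug = lp.bugs[1234567]                      # placeholder bug number
    for task in bug.bug_tasks:
        if task.status == 'Fix Committed':
            task.status = 'Fix Released'
            task.lp_save()                      # write the status change back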

Move From Active Distrusting Model to Trusting Model

  • Morgan Fainberg writes most projects have a distrusting policy that prevents the following scenario:
    • Employee from Company A writes code
    • Other Employee from Company A reviews code
    • Third Employee from Company A reviews and approves code.
  • Proposal for a trusting model:
    • Code reviews still need 2x Core Reviewers (no change)
    • Code can be developed by a member of the same company as both core reviewers (and approvers).
    • If the trust that is being given via this new policy is violated, the code can, if needed, be reverted (we are using git here), the actors in question can lose core status (at the PTL’s discretion), and the policy can be changed back to the “distrustful” model described above.
  • Dolph Mathews provides scenarios where the “distrusting” model either did or would have helped:
    • An employee is reprimanded by management for not positively reviewing and approving a coworker’s patch.
    • A team of employees is pressured to land a feature as fast as possible. Minimal community involvement means a faster path to “merged,” right?
    • A large group of reviewers from the author’s organization repeatedly throwing *many* careless +1s at a single patch. (These happened to not be cores, but it’s a related organizational behavior taken to an extreme.)

Stable Team PTL Nominations Are Open

  • As discussed [8][9], with the setup of a standalone stable maintenance team we’ll be organizing PTL elections over the coming weeks.
  • Stable team’s mission:
    • Define and enforce the common stable branch policy
    • Educate and accompany projects as they use stable branches
    • Keep CI working on stable branches
    • Mentoring/growing the stable maintenance team
    • Create and improve stable tooling/automation
  • Anyone who successfully contributed to a stable branch backport over the last year can vote in the stable PTL election.
  • If interested, reply to thread with your self-nomination.
  • Deadline is 23:59 UTC Monday, November 30.
  • Election will be the week after.
  • Current candidates:
    • Matt Riedemann [10]
    • Erno Kuvaja [11]

Using Reno For Libraries

  • Libraries have two audiences for release notes:
    • Developers consuming the library.
    • Deployers pushing out new versions of the libraries.
  • To separate the notes for the two audiences, and to avoid doing manually what we have been doing automatically, we can use Reno for just the deployer release notes.
  • Library repositories that need Reno should have it configured like service projects, with separate jobs and a publishing location different from their developer documentation [12].
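
For reference, a minimal sketch of what a library’s releasenotes/source/conf.py might contain when Reno is wired up this way; the project name is a placeholder and real repositories usually set more options.

    # Minimal sketch of releasenotes/source/conf.py for a library using reno.
    # The project name is a placeholder; real repositories usually set more options.
    extensions = ['reno.sphinxext']   # provides the "release-notes" directive

    master_doc = 'index'
    project = 'example-library Release Notes'
    copyright = '2015, OpenStack contributors'
    version = ''
    release = ''
    html_theme = 'default'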

Releases VS Development Cycle

  • Thierry writes that as more projects enter the Big Tent, there have recently been questions about release models and development cycle.
  • Some projects want to be independent of the common release cycle and dates, but still keep some adherence to the development cycle. Examples:
    • Gnocchi wants to be completely independent of the common development cycle, but still maintain stable branches.
    • Fuel traditionally makes its releases a few months behind the OpenStack release to integrate all the functionality there.
  • All the projects above should currently be release:independent, until they are able (and willing) to coordinate their own releases with the projects following the common release cycle.
  • The release team currently supports 3 models:
    • release:cycle-with-milestones is the traditional Nova model with one release at the end of a 6-month dev cycle and a stable branch derived from that.
    • release:cycle-with-intermediary is the traditional Swift model, with releases as-needed during the 6-month development cycle, and a stable branch created from the last release in the cycle.
    • release:independent, for projects that don’t follow the release cycle at all.
      • Can make a release supporting the Mitaka release (including stable updates).
        • Can call the branch stable/mitaka even after the Mitaka release cycle, as long as the branch is stable and not development.
      • Should clearly document that their release is *compatible with* the OpenStack Mitaka release, rather than *part of* the Mitaka release.

OpenStack Developer Mailing List Digest November 14-20

Time to Make Some Assertions About Your Projects

  • The Technical Committee defined a number of “assert” tags which allow a project team to make assertions about their own deliverables:
    • assert:follows-standard-deprecation
    • assert:supports-upgrade
    • assert:supports-rolling-upgrade
  • Read more on their definitions [1]
  • Update projects.yaml [2] to indicate which tags already apply to your project.
  • The OpenStack Foundation will use the “assert” tags very soon in the project navigator [3].
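
As a rough illustration only (the authoritative schema is the governance repository itself), asserting a tag amounts to listing it on the relevant deliverable in projects.yaml; the snippet below mimics that structure with a made-up team and deliverable.

    # Illustrative only: a simplified, made-up projects.yaml-style entry showing
    # where the "assert" tags would be listed for a deliverable.
    import yaml

    SNIPPET = """
    example-team:
      deliverables:
        example-service:
          repos:
            - openstack/example-service
          tags:
            - assert:follows-standard-deprecation
            - assert:supports-upgrade
            - assert:supports-rolling-upgrade
    """

    data = yaml.safe_load(SNIPPET)
    print(data['example-team']['deliverables']['example-service']['tags'])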

Making Stable Maintenance Its Own OpenStack Project Team (Cont)

  • Continuing discussion from last week [4]…
  • Negatives:
    • Not enough work to warrant a designated “team”.
    • The change is unlikely to bring a meaningful improvement to the situation, such as sudden new resources.
  • Positives:
    • An empowered team could tackle new coordination tasks, like engaging more directly in converging stable branch rules across teams, or producing tools.
    • Release management no longer overlaps with stable branch work, so keeping stable branches under that PTL is limiting and inefficient.
    • Reinforcing the branding (by giving it its own team) may encourage more organizations to assign new resources to it.
  • Matt Riedemann offers to lead the team.

Release Countdown For Week R-19, November 23-27

  • Mitaka-1 milestone scheduled for December 1-3.
  • Teams should be…
    • Wrapping up incomplete work left over from the end of the Liberty cycle.
    • Finalizing and announcing plans from the summit.
    • Completing specs and blueprints.
  • The openstack/releases repository will be used to manage Mitaka-1 milestone tags.
  • Reno [5] will be used instead of Launchpad for tracking completed work. Make sure any release notes written for this cycle are committed to your master branch before proposing the milestone tag.

New API Guidelines Ready for Cross Project Review

  • The following will be merged soon:
    • Adding an introduction to the API microversion guideline [6].
    • Add description of pagination parameters [7].
    • A guideline for errors [8].
  • These will be brought up in the next cross project meeting [9].
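
To make the pagination guideline concrete, here is a small sketch of the limit/marker pattern it describes; the endpoint, token and response shape are entirely hypothetical.

    # Sketch of limit/marker pagination; the endpoint, token and response shape
    # are hypothetical, not any particular OpenStack service.
    import requests

    URL = 'https://api.example.com/v2/widgets'
    HEADERS = {'X-Auth-Token': 'placeholder-token'}

    marker = None
    while True:
        params = {'limit': 100}
        if marker is not None:
            params['marker'] = marker        # id of the last item already seen
        widgets = requests.get(URL, headers=HEADERS, params=params).json()['widgets']
        if not widgets:
            break
        for widget in widgets:
            print(widget['id'])
        marker = widgets[-1]['id']           # continue after the last item returned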

OpenStack Weekly Community Newsletter (Nov. 14 – 20)

A primer on Magnum, OpenStack containers-as-a-service

Adrian Otto, project team lead, on how Magnum works and what problems it can solve for you.

OpenStack Mitaka release: what’s next for Ansible, Oslo and Designate

Meet the project team leads (PTLs) for these OpenStack projects and find out how to get involved.

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

Deadlines and Contributors Notifications

Security Advisories and Notices 

  • None this week

Tips ‘n Tricks 

Upcoming Events 

OpenStack Weekly Community Newsletter (Nov. 7-13)

Kilowatts for Humanity harnesses the wind and sun to electrify rural communities

OpenStack-powered DreamCompute cloud computing helps keep rural microgrids up and running

OpenStack and Network Function Virtualization, the backstory

Jonathan Bryce, OpenStack executive director, gave a keynote at the OPNFV Summit where he talked about the two communities.

Fighting cloud vendor lock-in, manga style

To celebrate the OpenStack community and Liberty release, the Cloudbase Solutions team created a one-of-a-kind printed comic packed with OpenStack puns and references to manga and kaiju literature.

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

Deadlines and Contributors Notifications

Security Advisories and Notices 

  • None this week

Tips ‘n Tricks 

Upcoming Events 

OpenStack Developer Mailing List Digest November 7-13

New Management Tools and Processes for stable/liberty and Mitaka

  • For release management, we have used a combination of Launchpad milestone pages and our wiki to track changes in releases.
  • We used to pull release notes for stable point releases at release time.
  • Release managers would work with PTLs and release liaisons at each milestone to update Launchpad to reflect the work completed.
  • All of this requires a lot of work from the stable maintenance and release teams.
  • To handle this work with the ever-growing set of projects, the release team is introducing Reno for continuously building release notes as files in-tree.
    • The idea is small YAML files, usually one per note or patch to avoid merge conflicts on backports, which are then compiled into a readable document.
    • reStructuredText and Sphinx are supported for converting note files to HTML for publication.
    • Documentation for using Reno is available [1].
  • Release liaisons should create and merge a few patches for each project between now and Mitaka-1 milestone:
    • To the master branch, instructions for publishing the notes. An example with Glance [2].
    • Instructions for publishing in the project’s stable/liberty branch. An example with Glance [3].
    • Relevant jobs in project-config. An example with Glance [4].
    • Reno was not ready before the summit, so the wiki was used for release notes for the initial Liberty releases. Liaisons should convert those notes to Reno YAML files in the stable/liberty branch.
  • Use the topic ‘add-reno’ for all patches to track adoption.
  • Once liaisons have done this work, launchpad can stop being used for tracking completed work.
  • Launchpad will still be used for tracking bug reports, for now.
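
For anyone new to Reno, a sketch of the kind of small YAML note file it builds release notes from; in practice “reno new <slug>” creates the file, and the section names below follow Reno’s defaults, while the content and filename are made up.

    # Sketch of a reno note file. "reno new <slug>" normally creates these;
    # the section names follow reno's defaults; the content and filename are made up.
    import os
    import yaml

    note = {
        'features': ['Added an example feature worth announcing.'],
        'upgrade': ['Operators must run the example migration before restarting.'],
        'fixes': ['Fixed an example bug that was backported to stable/liberty.'],
    }

    os.makedirs('releasenotes/notes', exist_ok=True)
    with open('releasenotes/notes/example-note-0123456789abcdef.yaml', 'w') as f:
        yaml.safe_dump(note, f, default_flow_style=False)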

Keeping Juno “alive” For Longer

  • Tony Breeds is seeking feedback on the idea of keeping Juno around a little longer.
  • According to the current user survey [5], Icehouse still has the biggest install base in production clouds. Juno is second, which means that if we end-of-life (EOL) Juno this month, ~75% of production clouds will be running an EOL’d release.
  • The problems with doing this however:
    • CI capacity for running the jobs necessary to make sure stable branches still work.
    • Lack of people who care enough to make sure the stable branch continues to work.
    • Juno is still tied to Python 2.6.
    • Security support is still needed.
    • Tempest is branchless, so it runs stable-compatible jobs.
  • This is acknowledged as a common request, and the common answer is “push more resources into fixing existing stable branches and we might consider it”.
  • Matt Riedemann, who works in the trenches of stable branches, confirms stable/juno is already a goner due to requirements capping issues: you fix one issue to unwedge a project and, with global-requirements syncs, end up breaking two other projects. The cycle never ends.
  • This same problem does not exist in stable/kilo, because we’ve done a better job of isolating versions in global-requirements with upper-constraints.
  • Sean Dague wonders what reasons keep people from doing upgrades in the first place. Tony is unable to give specifics, since some are internal to his company’s offering.

Oslo Libraries Dropping Python 2.6 compatibility

  • Davanum notes a patch to drop py26 oslo jobs [6].
  • Jeremy Stanley notes that the infrastructure team plans to remove CentOS 6.X job workers which includes all python 2.6 jobs when stable/juno reaches EOL.

Making Stable Maintenance its own OpenStack Project Team

  • Thierry writes that when the Release Cycle Management team was created, it just happened to contain release management, stable branch management, and vulnerability management.
    • The Security Team has since been created and spun out of the release team.
  • Proposal: spin out the stable branch maintenance as well.
    • Most of the stable team work used to be stable point release management, but as of stable/liberty this is now done by the release management team and triggered by the project-specific stable maintenance teams, so there is no more overlap in tooling used there.
    • Stable team is now focused on stable branch policies [7], not patches.
    • Doug Hellmann leads the release team and does not have the history Thierry had with stable branch policy.
    • Empowering the team to make its own decisions gives it visibility and recognition, in the hope of encouraging more resources to be dedicated to it.
      • Defining and enforcing stable branch policy.
      • If the team expands, it could own stable branch health/gate fixing. The team could then make decisions on support timeframes.
    • Matthew Treinish disagrees that the team separation solves the problem of getting more people to work on gate issues. Thierry has evidence, though, that making it its own project will result in additional resources. The other option is to kill stable branches, but as we’ve seen from the “Keeping Juno Alive” thread, there’s still interest in having them.

OpenStack Weekly Community Newsletter (Oct. 31 – Nov. 6)

Superuser TV

Introduced at the Tokyo Summit, Superuser TV offers community and industry insights, plus educational topics to support the OpenStack community. With content ranging from deployments to diversity, from emerging technologies to cloud strategy, Superuser TV aims to provide the community with access to a variety of perspectives and knowledge.

October 2015 user survey highlights increasing maturity of OpenStack deployments

60 percent of deployments in production and high rates of adoption of OpenStack’s core services are key findings from the report released by the User Committee. The full report can be downloaded here: openstack.org/user-survey

Eliminating “Not-Invented-Here” Syndrome

Why embracing this notion is the key to unlock an open data center infrastructure, according to Boris Renski, the co-founder and CMO of Mirantis.

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

Deadlines and Contributors Notifications

Security Advisories and Notices 

  • None this week

Tips ‘n Tricks 

Upcoming Events 

What you need to know from the developer’s list

  • Success Bot Says

    • calebb: Shade now supports volume snapshots
    • pleia2: Launched code search [1].
    • sdague: grenade-multinode live upgrade tests now running on nova non voting
    • AJaeger: Contributors guide is published [2].
    • Tell us yours via IRC with a message “#success [insert success]”
  • Upgrading Elastic Search Cluster Monday

    • November 9th 1700UTC
    • Requires a cluster restart, during which people won’t be able to do searches.
    • New features from upgrade:
      • Aggregations
      • Rolling upgrades within a major release
      • Should improve performance
  • Release Team Communication Changes

    • IRC channel change from #openstack-relmgr-office to #openstack-release
    • “Office hours” are being dropped.
      • Just drop by the channel or on the dev list with subject containing “[release]” anytime you need something.
  • Deprecation for Untagged Code

    • Ironic tries to keep master backwards compatible. There are deployers doing continuous deployments of Ironic off of master.
    • The deprecation tag policy [3] only covers released and tagged code, not unreleased code or features introduced in an intermediate release.
    • A proposal [4] by Jim:
      • A three-month deprecation period is needed for features that were never released.
      • A feature that was introduced in an intermediate release needs to be deprecated in the next intermediate or coordinated release, and supported until the next release plus three months.
  • Outcome of Distributed Lock Manager Discussion @ the Summit

    • There was a two part session at the summit [5]
    • Previously, there was an unwritten policy that DLMs should be optional, which led to writing poor DLM-like things backed by databases.
    • Continuing our existing pattern of usage like databases and message-queues, we’ll use an oslo abstraction layer: tooz [6].
    • Given the current OpenStack project requirements that surfaced in the discussion, it’s likely that Consul, etcd and ZooKeeper will all be fine to use via Tooz (a minimal usage sketch follows this list). No project required a fair locking implementation in the DLM.
    • We want to avoid the situation of unmaintained drivers. We adopted a similar requirement from oslo.messaging driver requirements [7]:
      • Two developers responsible for it
      • Gating functional tests that use dsvm
      • Test drivers in-tree need to be clearly referenced as a test driver in the module name.
    • Davanum brings in Devstack ZooKeeper support [8].
    • An etcd driver is in review for Tooz [9].
    • A Consul driver in Tooz is also planned [10].
    • Concerns raised about the default DLM driver being ZooKeeper:
      • It’s a new platform for operators to understand
      • We don’t know how well ZooKeeper will work with OpenJDK as opposed to Oracle’s JVM.
  • Troubleshooting Cross-Project Communication

    • Evolve the current cross-project meeting to an “as needed” rotation, held at any time on Tuesdays or Wednesdays.
    • Based on feedback [11], it’s difficult to have meetings at different times on Tuesdays and Wednesdays.
    • There was consensus that the meeting can be “as needed” on Tuesday, and that most announcements will happen in the mailing list, and sometimes show up in this weekly Dev List Summary.
  • API For Getting Only Status of Resources

    • Projects like Heat, Tempest, Rally and others that work with resources are polling for updates on asynchronous operations.
    • Boris proposes having APIs expose the ability to get just the status by UUID, instead of fetching all data on a resource.
    • Clint suggests instead of optimizing for polling, we should revisit the proposal for a pub/sub model, so users can subscribe to updates for resources.
    • Sean suggests a near-term workaround is to actually use Searchlight, which today monitors the notification bus for Nova.
      • Searchlight is hitting the Nova API more than ideal, but at least it’s one service.
      • Longer term we need a dedicated event service in OpenStack. Everyone wants web sockets, but anticipating 10,000+ open web sockets, this isn’t just a bit of python code, but a highly optimized server underneath.
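
Going back to the distributed lock manager discussion above, here is a minimal sketch of what taking a distributed lock through the Tooz abstraction looks like; the ZooKeeper URL, member id and lock name are placeholders.

    # Minimal Tooz sketch: take a distributed lock through the abstraction layer.
    # The backend URL, member id and lock name are placeholders.
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'worker-1')
    coordinator.start()

    lock = coordinator.get_lock(b'resize-instance-0001')
    with lock:
        # Only one worker across the deployment runs this critical section.
        print('lock held, doing exclusive work')

    coordinator.stop()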

OpenStack Weekly Community Newsletter (Oct. 10-16)

Liberty, the 12th release of OpenStack, came out yesterday

With 1,933 individual contributors and 164 organizations contributing to the release, Liberty offers finer-grained management controls, performance enhancements for large deployments and more powerful tools for managing new technologies such as containers in production environments: Learn what’s new

Break down those silos, OpenStack

“The projects need to come together to develop consistent formats, approaches and messaging,” says Rochelle Grober, senior software architect at Huawei Technologies and active member of the OpenStack community.

The Road to Tokyo

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

  • None this week

Deadlines and Contributors Notifications

Security Advisories and Notices 

  • None this week

Tips ‘n Tricks 

Upcoming Events 

What You Need to Know From the Developer’s List

Success Bot Says

  • ttx: Another OpenStack Release!
  • With the help of jesusaurus, the infra team has deployed Kibana 3, a first step in upgrading the Elasticsearch cluster.
  • shamail: Product Working Group wiki fully updated [1]
  • tristanC: 6 new TC members have been elected [2]
  • AJaeger: OpenStack API Quick Start converted to RST [3], and translated to German [4] and Japanese [5].
  • reed: section 2 and 3 of the OpenStack Shade tutorial merged. Now work on section [6].
  • sirushti: Heat just announced support for Python 3.4 [7].
  • AJaeger: All documentation manuals have been updated with content for Liberty [8].

Upgrade to Gerrit 2.11

  • The OpenStack Infra team would like to upgrade from Gerrit 2.8 to 2.11.
  • Proposing to do the upgrade shortly after the Mitaka summit.
  • Motivation: Take advantage of some of the new REST API, ssh commands, and stream events features.
  • There is a big UI change in 2.11; 2.8 includes both the old and new styles.
  • Preview 2.11 [9].
  • If you don’t like Gerrit 2.11, give Gertty [10] a try.

Service Catalog: The Next Generation (Cont.)

  • Continuing from last week summary…
  • Sean Dague notes that while people want to go in much more radical directions here, we should be careful. This is not a blank slate; there are enough users that we must make careful shifts that enable a new thing similar to the old thing.
    • Moving away from REST is too much, at least in the next 6 to 12 months.
    • Getting a service catalog over REST, without auth or tenant IDs, gets us somewhere toward figuring out a DNS representation.

Establishing Release Liaisons for Mitaka

  • Doug Hellmann writes that the release management team relies on liaisons from each project being available to coordinate work across all teams.
    • Responsibilities of release liaisons [11].
    • Signup [12].

Release Communication During Mitaka

  • Doug Hellmann begins one of many emails describing differences in the way we handle release management for the Mitaka cycle.
  • In the past, we’ve had communication issues where project team leads didn’t see or pay attention to release related announcements.
  • This email was sent to the list and individual project team leads, to improve the odds that all will see it.
  • “[release]” topic tag on the openstack-dev mailing list will be used.
    • All project team leads and release liaisons should configure their email clients to ensure these messages are visible.

Requests + urllib3 + distro package (cont.)

  • Continuing discussions from last week…
  • Robert Collins comments a trivial workaround is to always use virtualenvs and not system-site-packages.
    • Has the OpenStack infra team considered using system-site-packages?
      • Yes, but we take advantage of the Python ecosystem uploading new releases to PyPI. We can then almost instantly test the compatibility of our software with new releases of dependencies.
  • A way forward is:
    • Get distros to fix their requests python dependencies
      • Ubuntu [13]
      • Fedora [14][15][16]
      • Fix existing known bugs in pip where such dependencies are violated by some operations.
    • Stop using the vendorized version in requests and fork the project to use the dependencies it should have used from the start.
    • Convince upstream to stop vendorizing urllib3.
    • Always use distro packages of requests, never from virtual environments.

Scheduler Proposal (cont.)

  • Continuing from last week’s summary…
  • Ed notes that Josh Harlow’s solution isn’t too different from the current design of hosts sending their state to the scheduler.
  • The reason for the Cassandra proposal was to eliminate the duplication and have the resources being scheduled and the scheduler itself all working with the same data.
    • This is the intent of the current design. The data can never be perfect, so work with what you have and hope the rest of the system deals with your mistakes and retries gracefully (e.g. when a scheduled compute node no longer has resources to accommodate a request).
    • To make this solution possible for downstream distributions and/or OpenStack users, you have to solve one of the following:
      • Cassandra developers upstream should start caring about OpenJDK.
      • Or Oracle should make its JVM free software.
    • Clint notes that Cassandra does not recommend OpenJDK [17].
      • Thomas adds:
        • Upstream does not test against OpenJDK.
        • They close bugs without fixing them when it only affects OpenJDK.
  • Thierry is generally negative about Java solutions, this being one of the reasons [18]: the free-software JVM is not on par with the non-free JVM, so we would indirectly force our users to use a non-free dependency. When a Java solution is the only solution for a problem space, that might still be a good trade-off versus reinventing the wheel. However, for distributed locks and sharing state there are some other good options out there.
    • Clint mentions that ZooKeeper is different from Cassandra here: he has had success with OpenJDK, and it is also available on Debian/Ubuntu, making access for developers much easier.

[1] – https://wiki.openstack.org/wiki/ProductTeam

[2] – https://wiki.openstack.org/wiki/TC_Elections_September/October_2015#Results

[3] – http://developer.openstack.org/api-guide/quick-start/

[4] – http://developer.openstack.org/de/api-guide/quick-start/

[5] – http://developer.openstack.org/api-guide/quick-start/

[6] – https://review.openstack.org/#/c/232810/

[7] – https://review.openstack.org/231557

[8] – http://docs.openstack.org/liberty/

[9] – http://review-dev.openstack.org

[10] – https://pypi.python.org/pypi/gertty

[11] – http://docs.openstack.org/project-team-guide/release-management.html#release-liaisons

[12] – https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management

[13] – https://bugs.launchpad.net/ubuntu/+source/python-requests/+bug/1505038

[14] – https://bodhi.fedoraproject.org/updates/FEDORA-2015-20de3774f4

[15] – https://bodhi.fedoraproject.org/updates/FEDORA-2015-1f580ccfa4

[16] – https://bodhi.fedoraproject.org/updates/FEDORA-2015-d7c710a812

[17] – https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StartupChecks.java#L153-L155

[18] – https://twitter.com/mipsytipsy/status/596697501991702528

OpenStack Reactions

plug keystone’s authtoken middleware into a service “so graceful”

Engineering team staffing up

I’m very pleased to welcome Mike Perez (a.k.a. thingee) to the Engineering team at the OpenStack Foundation.

Within the Foundation, the Engineering team is tasked with ensuring the long-term health of the OpenStack open source development project. That includes helping to keep the project infrastructure up and running, organizing the design summits, and identifying issues within our open community early and engaging to fix them proactively. Mike brings a lot of development experience and community engagement to the table, and I expect we’ll be able to address more issues more quickly as a result of him joining the team.

The team is now composed of two infrastructure engineers, with Jeremy Stanley (current Infrastructure PTL) and Clark Boylan, and two development coordinators (Mike and myself). We are hiring new people (an upstream developer advocate and another development coordinator) to cope with our project’s continued growth and the increased complexity of the challenges our community encounters.

You can find those job descriptions (and the openings in other teams at the Foundation right now) on the OpenStack job board. If you like the idea of working for a non-profit, have a keen sense of community, cherish having a lot of autonomy and enjoy working in a fast-paced environment, you should definitely consider joining us!

OpenStack Weekly Community Newsletter (Oct. 3 – Oct. 9)

What you need to know about Astara

Henrik Rosendahl, CEO of Akanda, introduces OpenStack’s newest project, an open-source network orchestration platform built by OpenStack operators for OpenStack clouds.

An OpenStack security primer

Meet the troubleshooters and firefighters of the OpenStack Security project and learn how you can get involved.

The Road to Tokyo

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

  • None this week

Deadlines and Contributors Notifications

Superuser Awards: your vote counts

(voting closes on 10/12 at 11:59 pm PT)

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events

What you need to know from the developer’s list

Success Bot Says

  • harlowja: The OpenStack Universe [1]
  • krotscheck: OpenStack CI posted first package to NPM [2]
  • markvan: The OpenStack Chef Cookbook team recently put in place all the pieces to allow running a full (devstack-like) CI test against all the cookbook project commits.
  • Tell us yours via IRC with a message “#success [insert success]”

Proposed Design Summit allocation

  • Track layout is on the official schedule [3].
  • PTLs or liaisons can start pushing up schedule details. The wiki [4] explains how.
  • Reach out to ttx or thingee on IRC if there are any issues.

Devstack extras.d support going away M-1

  • The replacement for extras.d (i.e. devstack plugins) has existed for 10 months.
  • Projects should prioritize getting to the real plugin architecture.
  • Sean compiled a list of the top 25 jobs (by volume) that are emitting warnings about breaking [5].

Naming N and O Release Now

  • Sean Dague suggests that since we already have the locations for the N and O summits, we should start the name polls now.
  • Carol Barrett mentions that the current release naming process only allows naming to begin once the location of the release’s design summit is announced, and no sooner than the opening of development of the previous release [6].
    • Consensus was reached to have this changed.
    • Monty mentions this option was discussed in the past, but the process was changed because we wanted to keep a sense of ownership by the people who actually work on the release.
  • Sean will propose the process change to the next group of TC members.

Requests + urllib3 + distro packages

  • Problems:
    • The requests Python library works only with very specific versions of urllib3, so specific that they aren’t always released versions.
    • Linux vendors often unbundle urllib3 from requests and then apply whatever patches were needed to their urllib3, while not updating their requests package dependencies.
    • We use urllib3 and requests in some places, but we don’t mix them up.
    • If we have a distro-altered requests plus a pip-installed urllib3, requests usually breaks.
  • There are lots of places the last problem can happen; they all depend on us having a dependency on requests that is compatible with the version installed by the distro, but a urllib3 dependency that triggers an upgrade of just urllib3. When constraints are in use, the requests version has to match the distro requests version exactly, which will only happen from time to time. Examples include:
    • DSVM test jobs where the base image already has python-requests installed.
    • Virtualenvs where system-site-packages is enabled.
  • Solutions:
    • Make sure none of our testing environments include distro requests packages.
      • Monty notes we’re working hard to make this happen.
    • Make our requirements tightly match what requests needs, to deal with the unbundling.
      • In progress by Matt Riedemann [7].
    • Teach pip how to identify and avoid this situation by always upgrading requests.
    • Get the distros to stop un-vendoring urllib3.
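
A quick way to see how a given environment is put together, relative to the problem described above, is to check whether requests is carrying its own vendored urllib3 or has been unbundled to use the system copy; this is only a diagnostic sketch reflecting the packaging situation discussed here, not one of the proposed fixes.

    # Diagnostic sketch: is requests using its own bundled urllib3, or has it been
    # unbundled so that it imports whatever urllib3 is installed on the system?
    import requests
    import urllib3

    unbundled = requests.packages.urllib3 is urllib3
    print('requests %s, standalone urllib3 %s'
          % (requests.__version__, urllib3.__version__))
    if unbundled:
        print('requests has been unbundled: it uses the system urllib3, so a '
              'pip-installed urllib3 changes what requests runs against')
    else:
        print('requests is using its own vendored copy of urllib3')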

Scheduler Proposal

  • Several months ago, Ed Leafe proposed an experiment [8] to see whether switching the data model for the Nova scheduler to use Cassandra as the backend would be a significant improvement.
    • Due to the undertakings for Nova in Liberty, it was agreed this shouldn’t be focused on at the time, but the proposal could still be made.
    • Ed finished writing up the proposal [9].
  • Chris Friesen mentions some points that might need further discussion:
    • Some resources (RAM) only require tracking amounts. Other resources (CPUs, PCI devices) require tracking the allocation of specific host resources.
    • If all of Nova’s scheduler and resource tracking was to switch to Cassandra, how do we handle pinned CPUs and PCI devices that are associated with a specific instance in the Nova database?
    • To avoid races we need to:
      • Serialize the entire scheduling operation.
      • Make the evaluation of filters and claiming of resources a single atomic database transaction.
  • Zane finds that the database to use is irrelevant to the proposal; really this is about moving the scheduling from a distributed collection of Python processes with ad-hoc synchronization into the database.
  • Maish notes that by adding a new database solution, we would be up to three different database solutions in OpenStack:
    • MySQL
    • MongoDB
    • Cassandra
  • Joshua Harlow provides a solution using a distributed lock manager:
    • Compute nodes gather information on VMs, free memory, CPU usage, used memory, etc., and push the information to be saved in a node in said DLM backend.
    • All schedulers watch for pushed updates and update an in-memory cache of the information of all hypervisors.
    • Besides the initial read-once on start up, this avoids periodically reading large data sets.
    • This information can also be used to know if a compute node is still running or not. This eliminates the need to do queries and periodic writes to the Nova database.
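
A rough sketch of what Joshua’s idea could look like on top of Tooz’s group-membership API: compute nodes join a group and publish a resource snapshot as their capabilities, and a scheduler reads every member’s capabilities. The backend URL, member id, group name and data below are placeholders, not a proposed implementation.

    # Rough sketch of the group-membership idea using Tooz. The backend URL,
    # member id, group name and resource data are all placeholders.
    import json
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'compute-node-01')
    coordinator.start()

    group = b'compute-nodes'
    try:
        coordinator.create_group(group).get()
    except coordination.GroupAlreadyExist:
        pass

    # Compute-node side: join the group and publish a snapshot of local resources.
    snapshot = json.dumps({'free_ram_mb': 2048, 'vcpus_used': 3}).encode('utf-8')
    coordinator.join_group(group, capabilities=snapshot).get()

    # Scheduler side: read every member's published snapshot.
    for member in coordinator.get_members(group).get():
        capabilities = coordinator.get_member_capabilities(group, member).get()
        print(member, capabilities)

    coordinator.stop()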

Service Catalog: TNG

  • The last cross project meeting had good discussion about the next generation of the Keystone service catalog. Information has been recorded in an etherpad [10].
  • Sean Dague suggests we need a dedicated workgroup meeting to keep things going.
  • Monty provides a collection of the existing service catalogs [11].
  • Adam Young suggests using DNS for the service catalog.
    • David Stanek put together an implementation [12].
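
Purely to illustrate the DNS idea (this is not David’s implementation, and the record name, domain and naming scheme are hypothetical), a lookup of an SRV record with dnspython might look like:

    # Illustration of the DNS idea only: resolve a hypothetical SRV record that
    # maps a service type to endpoints. Requires the dnspython package.
    import dns.resolver

    # Older dnspython uses query(); newer releases prefer resolve().
    answers = dns.resolver.query('_compute._tcp.cloud.example.com', 'SRV')
    for record in answers:
        print('endpoint %s port %d priority %d'
              % (record.target, record.port, record.priority))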

[1] – https://gist.github.com/harlowja/e5838f65edb0d3a9ff8a

[2] – https://www.npmjs.com/package/eslint-config-openstack

[3] – https://mitakadesignsummit.sched.org/

[4] – https://wiki.openstack.org/wiki/Design_Summit/SchedulingForPTLs

[5] – http://lists.openstack.org/pipermail/openstack-dev/2015-October/076559.html

[6] – http://governance.openstack.org/reference/release-naming.html

[7] – https://review.openstack.org/#/c/213310/

[8] – http://lists.openstack.org/pipermail/openstack-dev/2015-July/069593.html

[9] – http://blog.leafe.com/reimagining_scheduler/

[10] – https://etherpad.openstack.org/p/mitaka-service-catalog

[11] – https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog

[12] – https://gist.github.com/dstanek/093f851fdea8ebfd893d