OpenStack Weekly Community Newsletter (Oct. 3 – Oct. 9)

What you need to know about Astara

Henrik Rosendahl, CEO of Akanda, introduces OpenStack’s newest project, an open-source network orchestration platform built by OpenStack operators for OpenStack clouds.

An OpenStack security primer

Meet the troubleshooters and firefighters of the OpenStack Security project, and learn how you can get involved.

The Road to Tokyo

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter, or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

  • None this week

Deadlines and Contributors Notifications

Superuser Awards: your vote counts

(voting closes on 10/12 at 11:59 pm PT)

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events

What you need to know from the developer’s list

Success Bot Says

  • harlowja: The OpenStack Universe [1]
  • krotscheck: OpenStack CI posted first package to NPM [2]
  • markvan: The OpenStack Chef Cookbook team recently put all the pieces in place to allow running a full (devstack-like) CI test against all the cookbook projects’ commits.
  • Tell us yours via IRC with a message “#success [insert success]”.

Proposed Design Summit allocation

  • Track layout is on the official schedule [3].
  • PTLs or liaisons can start pushing up schedule details. The wiki [4] explains how.
  • Reach out to ttx or thingee on IRC if there are any issues.

Devstack extras.d support going away M-1

  • Devstack plugins (the replacement for extras.d hooks) have existed for 10 months.
  • Projects should prioritize moving to the real plugin architecture; a minimal local.conf sketch follows this list.
  • Sean compiled a list of the top 25 jobs (by volume) that are emitting warnings about the coming break [5].
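
For projects still on extras.d hooks, moving to the plugin architecture means carrying a devstack/ directory (with plugin.sh and settings) in the project tree and enabling it from local.conf. A minimal sketch, where my-project and its repository URL are hypothetical:

    [[local|localrc]]
    # Enable an out-of-tree devstack plugin: devstack clones the repo
    # and runs its devstack/plugin.sh during the matching phases.
    enable_plugin my-project https://git.openstack.org/openstack/my-project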

Naming the N and O Releases Now

  • Sean Dague suggests that since we already have the locations for the N and O summits, we should start the name polls now.
  • Carol Barrett mentions that the current release naming process only allows naming to begin once the location of the release’s design summit is announced, and no sooner than the opening of development of the previous release [6].
    • Consensus was reached to change this.
    • Monty mentions this option was discussed in the past, but it was changed because we wanted to keep a sense of ownership by the people who actually worked on the release.
  • Sean will propose the process change to the next group of TC members.

Requests + urllib3 + distro packages

  • Problems:
    • The requests Python library works only with very specific versions of urllib3, so specific that they aren’t always released versions.
    • Linux vendors often unbundle urllib3 from requests and apply whatever patches they need to their urllib3 package, while not updating their requests package’s dependencies.
    • We use both urllib3 and requests in some places, though we don’t mix them in the same code.
    • If we have a distro-altered requests plus a pip-installed urllib3, requests usually breaks (see the diagnostic sketch at the end of this section).
  • There are lots of places the last problem can happen; they all depend on us having a requests dependency that is compatible with the version installed by the distro, plus a urllib3 dependency that triggers an upgrade of just urllib3. When constraints are in use, the requests version has to match the distro requests version exactly, but that will happen from time to time. Examples include:
    • DSVM test jobs where the base image already has python-requests installed.
    • Virtualenvs with system site-packages enabled.
  • Solutions:
    • Make sure none of our testing environments include distro requests packages.
      • Monty notes we’re working hard to make this happen.
    • Tightly match our requirements to what requests needs, to deal with unbundling.
      • In progress by Matt Riedemann [7].
    • Teach pip how to identify and avoid this situation by always upgrading requests.
    • Get the distros to stop un-vendoring urllib3.
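
A quick way to tell whether an environment mixes a distro-altered requests with a separately installed urllib3 is to compare the urllib3 module requests actually imports against the top-level one. A minimal diagnostic sketch in Python, assuming the requests.packages alias behaves as it does in the releases current at the time of writing:

    import requests
    import urllib3
    # On a stock requests, requests.packages.urllib3 is the vendored copy;
    # on a distro-unbundled requests, it is an alias for the system urllib3,
    # so the two modules compare identical.
    from requests.packages import urllib3 as used_by_requests

    print("requests:", requests.__version__)
    print("urllib3: ", urllib3.__version__)
    print("requests uses system urllib3:", used_by_requests is urllib3)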

Scheduler Proposal

  • Several months ago, Ed Leafe proposed an experiment [8] to see whether switching the Nova scheduler’s data model to use Cassandra as the backend would be a significant improvement.
    • Given everything Nova had underway in Liberty, it was agreed this shouldn’t be a focus at the time, but the proposal could still be made.
    • Ed finished writing up the proposal [9].
  • Chris Friesen mentions some points that might need further discussion:
    • Some resources (RAM) only require tracking amounts. Other resources (CPUs, PCI devices) require tracking the allocation of specific host resources.
    • If all of Nova’s scheduler and resource tracking was to switch to Cassandra, how do we handle pinned CPUs and PCI devices that are associated with a specific instance in the Nova database?
    • To avoid races we need to either:
      • Serialize the entire scheduling operation, or
      • Make the evaluation of filters and the claiming of resources a single atomic database transaction (see the first sketch at the end of this section).
  • Zane finds that the choice of database is irrelevant to the proposal; really this is about moving scheduling from a distributed collection of Python processes with ad-hoc synchronization into the database.
  • Maish notes that adding a new database solution would bring us up to three different solutions in OpenStack:
    • MySQL
    • MongoDB
    • Cassandra
  • Joshua Harlow suggests a solution using a distributed lock manager (DLM); the second sketch at the end of this section illustrates the idea:
    • Compute nodes gather information about their VMs, free memory, CPU usage, memory used, etc., and push it to be saved in a node in the DLM backend.
    • All schedulers watch for pushed updates and maintain an in-memory cache of the information for all hypervisors.
    • Beyond the initial read-once at startup, this avoids periodically reading large data sets.
    • The same information reveals whether a compute node is still running, eliminating the need for queries and periodic writes to the Nova database.
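
On the atomic-transaction point, the key property is that re-checking capacity and decrementing it happen in one statement, so two schedulers cannot both claim the last of a resource. A minimal sketch in Python with SQLite; the compute_nodes table here is illustrative, not Nova’s real schema:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE compute_nodes (id TEXT PRIMARY KEY, free_ram_mb INT)")
    conn.execute("INSERT INTO compute_nodes VALUES ('node-1', 4096)")

    def claim_ram(conn, node_id, amount_mb):
        # The WHERE clause re-checks capacity inside the UPDATE itself, so a
        # concurrent claim that lands first makes this statement match zero
        # rows instead of overcommitting the host.
        cur = conn.execute(
            "UPDATE compute_nodes SET free_ram_mb = free_ram_mb - ? "
            "WHERE id = ? AND free_ram_mb >= ?",
            (amount_mb, node_id, amount_mb))
        return cur.rowcount == 1

    print(claim_ram(conn, "node-1", 2048))  # True: claim succeeds
    print(claim_ram(conn, "node-1", 4096))  # False: only 2048 MB left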
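
And a minimal sketch of Joshua’s DLM idea, here using ZooKeeper through the kazoo library (one possible backend; the paths and payloads are invented for illustration, and a real version would also watch each host’s data for changes):

    import json
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()

    # Compute-node side: publish resource state as an ephemeral znode. If
    # the process dies, the znode vanishes, which doubles as the liveness
    # signal mentioned above.
    state = {"host": "compute-1", "free_ram_mb": 4096, "vcpus_used": 2}
    zk.create("/hypervisors/compute-1", json.dumps(state).encode(),
              ephemeral=True, makepath=True)

    # Scheduler side: keep an in-memory cache fed by watch events instead
    # of re-reading the full data set on every scheduling pass.
    cache = {}

    @zk.ChildrenWatch("/hypervisors")
    def update_cache(hosts):
        for host in hosts:
            data, _ = zk.get("/hypervisors/" + host)
            cache[host] = json.loads(data.decode())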

Service Catalog: TNG

  • The last cross-project meeting had good discussions about the next generation of the Keystone service catalog. Notes have been recorded in an etherpad [10].
  • Sean Dague suggests we need a dedicated workgroup meeting to keep things going.
  • Monty provides a collection of the existing service catalogs [11].
  • Adam Young suggests using DNS for the service catalog.
    • David Stanek put together an implementation [12]; a sketch of the general idea follows.
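
One way a DNS-backed catalog could work is to publish each endpoint as an SRV record and let clients resolve it instead of asking Keystone. A minimal sketch using dnspython; the record name is hypothetical and not taken from David’s implementation:

    import dns.resolver

    # Look up the endpoint for the "compute" service in an example zone.
    # SRV answers carry priority, weight, port, and target host.
    answers = dns.resolver.query("_compute._tcp.cloud.example.com", "SRV")
    for record in answers:
        print(record.priority, record.weight, record.port, record.target)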

[1] – https://gist.github.com/harlowja/e5838f65edb0d3a9ff8a

[2] – https://www.npmjs.com/package/eslint-config-openstack

[3] – https://mitakadesignsummit.sched.org/

[4] – https://wiki.openstack.org/wiki/Design_Summit/SchedulingForPTLs

[5] – http://lists.openstack.org/pipermail/openstack-dev/2015-October/076559.html

[6] – http://governance.openstack.org/reference/release-naming.html

[7] – https://review.openstack.org/#/c/213310/

[8] – http://lists.openstack.org/pipermail/openstack-dev/2015-July/069593.html

[9] – http://blog.leafe.com/reimagining_scheduler/

[10] – https://etherpad.openstack.org/p/mitaka-service-catalog

[11] – https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog

[12] – https://gist.github.com/dstanek/093f851fdea8ebfd893d
