OpenStack Weekly Community Newsletter (Nov. 7-13)

Kilowatts for Humanity harnesses the wind and sun to electrify rural communities

The OpenStack-powered DreamCompute cloud helps keep rural microgrids up and running

OpenStack and Network Function Virtualization, the backstory

Jonathan Bryce, OpenStack executive director, gave a keynote at the OPNFV Summit where he talked about the two communities.

Fighting cloud vendor lock-in, manga style

To celebrate the OpenStack community and Liberty release, the Cloudbase Solutions team created a one-of-a-kind printed comic packed with OpenStack puns and references to manga and kaiju literature.

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

Deadlines and Contributors Notifications

Security Advisories and Notices 

  • None this week

Tips ‘n Tricks 

Upcoming Events 

OpenStack Developer Mailing List Digest November 7-13

New Management Tools and Processes for stable/liberty and Mitaka

  • For release management, we used a combination of launchpad milestone pages and our wiki to track changes in releases.
  • We used to pull together release notes for stable point releases at release time.
  • Release managers would work with PTLs and release liaisons at each milestone to update launchpad to reflect the work completed.
  • All this requires a lot of work from the stable maintenance and release teams.
  • To address this work with the ever-growing set of projects, the release team is introducing Reno for continuously building release notes as files in-tree.
    • The idea is small YAML files, usually one per note or patch to avoid merge conflicts on backports, which are then compiled into a readable document.
    • reStructuredText and Sphinx are supported for converting note files to HTML for publication.
    • Documentation for using Reno is available [1]; a minimal example note appears after this list.
  • Release liaisons should create and merge a few patches for each project between now and the Mitaka-1 milestone:
    • To the master branch, instructions for publishing the notes. An example from Glance [2].
    • Instructions for publishing in the project's stable/liberty branch. An example from Glance [3].
    • Relevant jobs in project-config. An example from Glance [4].
    • Reno was not ready before the summit, so the wiki was used for release notes for the initial Liberty releases. Liaisons should convert those notes to Reno YAML files in the stable/liberty branch.
  • Use the topic ‘add-reno’ for all patches to track adoption.
  • Once liaisons have done this work, launchpad can stop being used for tracking completed work.
  • Launchpad will still be used for tracking bug reports, for now.
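
To make the note format a little more concrete, here is roughly what a minimal Reno note file might look like. The filename suffix and the wording of the entries are invented for illustration; the Reno documentation [1] is the authority on the exact keys and layout.

    # releasenotes/notes/add-widget-api-1234abcd5678ef90.yaml (hypothetical)
    features:
      - A new /v2/widgets API endpoint is available for listing widgets.
    upgrade:
      - The deprecated [widget] config section has been removed; move any
        remaining options to [widgets] before upgrading.

Each section key (features, upgrade, fixes, and so on) holds a list of short reStructuredText strings, and Reno compiles the accumulated files on a branch into a single rendered notes page.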

Keeping Juno “alive” For Longer

  • Tony Breeds is seeking feedback on the idea of keeping Juno around a little longer.
  • According to the current user survey [5], Icehouse still has the biggest install base in production clouds. Juno is second, which means if we end of life (EOL) Juno this month, ~75% of production clouds will be running an EOL'd release.
  • The problems with doing this, however:
    • CI capacity for running the jobs necessary to make sure stable branches still work.
    • Lack of people with the time and interest to make sure the stable branch continues to work.
    • Juno is still tied to Python 2.6.
    • Security support is still needed.
    • Tempest is branchless, so it's running stable-compatible jobs.
  • This is acknowledged as a common request. The common answer is "push more resources into fixing existing stable branches and we might consider it".
  • Matt Riedemann, who works in the trenches of stable branches, confirms stable/juno is already a goner due to requirements-capping issues. You fix one issue to unwedge a project, and with global-requirements syncs you end up breaking two other projects. The cycle never ends.
  • This same problem does not exist in stable/kilo, because we've done a better job of isolating versions in global-requirements with upper-constraints (a short illustration follows below).
  • Sean Dague wonders what reasons keep people from doing upgrades to begin with. Tony is unable to give reasons since some are internal to his company's offering.
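
For readers unfamiliar with the mechanism, the idea behind upper-constraints is to pin every dependency in the transitive set to one known-good version, so that a global-requirements sync in one project cannot pull in a release that breaks another. A rough sketch of such a file is below; the package names are real but the pinned versions are invented for the example. pip applies a file like this on top of a project's own requirements via its -c/--constraint option.

    # upper-constraints.txt (illustrative excerpt, versions are made up)
    requests===2.7.0
    urllib3===1.11
    oslo.config===2.4.0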

Oslo Libraries Dropping Python 2.6 compatibility

  • Davanum notes a patch to drop py26 oslo jobs [6].
  • Jeremy Stanley notes that the infrastructure team plans to remove the CentOS 6.x job workers, which includes all Python 2.6 jobs, when stable/juno reaches EOL.

Making Stable Maintenance its own OpenStack Project Team

  • Thierry writes that when the Release Cycle Management team was created, it just happened to contain release management, stable branch management, and vulnerability management.
    • The Security Team has since been created and spun out of the release team.
  • Proposal: spin out the stable branch maintenance as well.
    • Most of the stable team work used to be stable point release management, but as of stable/liberty this is now done by the release management team and triggered by the project-specific stable maintenance teams, so there is no more overlap in tooling used there.
    • Stable team is now focused on stable branch policies [7], not patches.
    • Doug Hellmann leads the release team and does not have the history Thierry had with stable branch policy.
    • Empowering the team with its own decision-making, visibility, and recognition, in the hope of encouraging more resources to be dedicated to it:
      • Defining and enforcing stable branch policy.
      • If the team expands, it could own stable branch health/gate fixing. The team could then make decisions on support timeframes.
    • Matthew Treinish disagrees that the team separation solves the problem of getting more people to work on gate issues. Thierry has evidence, though, that making it its own project will result in additional resources. The other option is to kill stable branches, but as we've seen from the "Keeping Juno Alive" thread, there's still interest in having them.

OpenStack Weekly Community Newsletter (Oct. 31 – Nov. 6)

Superuser TV

Introduced at the Tokyo Summit, Superuser TV offers community and industry insights, plus educational topics to support the OpenStack community. With content ranging from deployments to diversity, from emerging technologies to cloud strategy, Superuser TV aims to provide the community with access to a variety of perspectives and knowledge.

October 2015 user survey highlights increasing maturity of OpenStack deployments

60 percent of deployments in production and high rates of adoption of OpenStack’s core services are key findings from the report released by the User Committee. The full report can be downloaded here: openstack.org/user-survey

Eliminating “Not-Invented-Here” Syndrome

Why embracing this notion is the key to unlock an open data center infrastructure, according to Boris Renski, the co-founder and CMO of Mirantis.

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

Deadlines and Contributors Notifications

Security Advisories and Notices 

  • None this week

Tips ‘n Tricks 

Upcoming Events 

What you need to know from the developer’s list

  • Success Bot Says

    • calebb: Shade now supports volume snapshots
    • pleia2: Launched code search [1].
    • sdague: grenade-multinode live upgrade tests now running on nova non voting
    • AJaeger: Contributors guide is published [2].
    • Tell us yours via IRC with a message “#success [insert success]”
  • Upgrading Elastic Search Cluster Monday

    • November 9th, 1700 UTC
    • Requires a cluster restart, during which people won't be able to do searches.
    • New features from upgrade:
      • Aggregations
      • Rolling upgrades within a major release
      • Should improve performance
  • Release Team Communication Changes

    • IRC channel change from #openstack-relmgr-office to #openstack-release
    • “Office hours” are being dropped.
      • Just drop by the channel or post to the dev list with a subject containing "[release]" anytime you need something.
  • Deprecation for Untagged Code

    • Ironic tries to keep master backwards compatible. There are deployers doing continuous deployments of Ironic off of master.
    • Based on the deprecation tag policy [3], it only covers released and tagged code, but not unreleased code or features introduced in an intermediate release.
    • A proposal [4] by Jim:
      • A three-month deprecation period is needed for features that were never released.
      • A feature that was introduced in an intermediate release needs to be deprecated in the next intermediate or coordinated release, and supported until the next release plus 3 months.
  • Outcome of Distributed Lock Manager Discussion @ the Summit

    • There was a two-part session at the summit [5].
    • Previously, there was an unwritten policy that DLMs should be optional, which led to writing poor DLM-like things backed by databases.
    • Continuing our existing pattern of usage for databases and message queues, we'll use an oslo abstraction layer: Tooz [6] (a minimal usage sketch appears after this list).
    • Given the OpenStack project requirements for DLMs that surfaced in discussion, it's likely that Consul, etcd, and ZooKeeper will be fine to use via Tooz. No project required a fair-locking implementation in the DLM.
    • We want to avoid the situation of unmaintained drivers. We adopted a similar requirement from oslo.messaging driver requirements [7]:
      • Two developers responsible for it
      • Gating functional tests that use dsvm
      • Test drivers in-tree need to be clearly referenced as a test driver in the module name.
    • Davanum brings in Devstack ZooKeeper support [8].
    • An etcd driver is in review for Tooz [9].
    • A Consul driver in Tooz is also planned [10].
    • Concerns raised about the default DLM driver being ZooKeeper:
      • It’s a new platform for operators to understand
      • We don't know how well ZooKeeper will work with OpenJDK as opposed to Oracle's JVM.
  • Troubleshooting Cross-Project Communication

    • Evolve the current cross-project meeting to an “as needed” rotation, held at any time on Tuesdays or Wednesdays.
    • Based on feedback [11], it's difficult to have meetings at different times on Tuesdays and Wednesdays.
    • There was consensus that the meeting can be “as needed” on Tuesday, and that most announcements will happen in the mailing list, and sometimes show up in this weekly Dev List Summary.
  • API For Getting Only Status of Resources

    • Projects like Heat, Tempest, Rally, and others that work with resources are polling for updates on asynchronous operations.
    • Boris proposes having APIs expose the ability to get just the status by UUID, instead of fetching all data on a resource.
    • Clint suggests that instead of optimizing for polling, we should revisit the proposal for a pub/sub model, so users can subscribe to updates for resources.
    • Sean suggests a near-term workaround is to use Searchlight, which today monitors the notification bus for Nova.
      • Searchlight is hitting the Nova API more than is ideal, but at least it's one service.
      • Longer term we need a dedicated event service in OpenStack. Everyone wants web sockets, but with 10,000+ open web sockets anticipated, this isn't just a bit of Python code; it needs a highly optimized server underneath.
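
To make the Tooz abstraction mentioned in the DLM summary above a little more concrete, here is a minimal locking sketch in Python. It assumes a ZooKeeper backend reachable locally; the member and lock names are invented, and a real service would pull the backend URL from configuration.

    from tooz import coordination

    # Connect to whichever DLM backend the deployment provides; switching to
    # etcd or Consul would only change the URL, not the calling code.
    coordinator = coordination.get_coordinator(
        'zookeeper://127.0.0.1:2181', b'scheduler-worker-1')
    coordinator.start()

    # Take a named distributed lock around a critical section.
    lock = coordinator.get_lock(b'resize-instance-42')
    with lock:
        pass  # work that must not run concurrently across services

    coordinator.stop()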

OpenStack Weekly Community Newsletter (Oct. 10-16)

Liberty, the 12th release of OpenStack, came out yesterday

With 1,933 individual contributors and 164 organizations contributing to the release, Liberty offers finer-grained management controls, performance enhancements for large deployments and more powerful tools for managing new technologies such as containers in production environments: Learn what’s new

Break down those silos, OpenStack

“The projects need to come together to develop consistent formats, approaches and messaging,” says Rochelle Grober, senior software architect at Huawei Technologies and active member of the OpenStack community.

The Road to Tokyo

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

  • None this week

Deadlines and Contributors Notifications

Security Advisories and Notices 

  • None this week

Tips ‘n Tricks 

Upcoming Events 

What You Need to Know From the Developer’s List

Success Bot Says

  • ttx: Another OpenStack Release!
  • With the help of jesusaurus, the infra team has deployed Kibana 3: first steps in upgrading the Elasticsearch cluster.
  • shamail: Product Working Group wiki fully updated [1]
  • tristanC: 6 new TC members have been elected [2]
  • AJaeger: OpenStack API Quick Start converted to RST [3], and translated to German [4] and Japanese [5].
  • reed: sections 2 and 3 of the OpenStack Shade tutorial merged. Work continues on the next section [6].
  • sirushti: Heat just announced support for Python 3.4 [7].
  • AJaeger: All documentation manuals have been updated with content for Liberty [8].

Upgrade to Gerrit 2.11

  • The OpenStack Infra team would like to upgrade from Gerrit 2.8 to 2.11.
  • Proposing to do the upgrade shortly after the Mitaka summit.
  • Motivation: Take advantage of some of the new REST API, ssh commands, and stream events features.
  • There is a big UI change in 2.11, whereas 2.8 includes both the old and new styles.
  • Preview 2.11 [9].
  • If you don’t like Gerrit 2.11, give Gertty [10] a try.

Service Catalog: The Next Generation (Cont.)

  • Continuing from last week summary…
  • Sean Dague realizes that while people want to go in much more radical directions here, we should be careful. This is not a blank slate: there are enough users of the current catalog that we must make careful shifts that enable a new thing similar to the old thing.
    • Moving away from REST is too much, at least in the next 6 to 12 months.
    • Getting a service catalog over REST without auth or tenant IDs gets us somewhere toward figuring out a DNS representation.

Establishing Release Liaisons for Mitaka

  • Doug Hellmann writes that the release management team relies on liaisons from each project to be available for coordination for work across all teams.
    • Responsibilities of release liaisons [11].
    • Signup [12].

Release Communication During Mitaka

  • Doug Hellmann begins one of many emails describing differences in the way we handle release management for the Mitaka cycle.
  • In the past, we’ve had communication issues where project team leads didn’t see or pay attention to release related announcements.
  • This email was sent to the list and individual project team leads, to improve the odds that all will see it.
  • “[release]” topic tag on the openstack-dev mailing list will be used.
    • All project team leads and release liaisons should configure their email clients to ensure these messages are visible.

Requests + urllib3 + distro package (cont.)

  • Continuing discussions from last week…
  • Robert Collins comments that a trivial workaround is to always use virtualenvs without system-site-packages (a minimal sketch follows this list).
    • Has the OpenStack infra team considered using system-site-packages?
      • Yes, but we take advantage of the python ecosystem uploading new releases to PyPI. We can then pretty instantly test compatibility of our software with new releases of dependencies.
  • A way forward is:
    • Get distros to fix their requests python dependencies
      • Ubuntu [13]
      • Fedora [14][15][16]
      • Fix existing known bugs in pip where such dependencies are violated by some operations.
    • Stop using the vendorized version of requests and fork the project to use the dependencies it should have from the start.
    • Convince upstream to stop vendorizing urllib3.
    • Always use distro packages of requests, never ones installed into virtual environments.
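
A minimal sketch of the workaround Robert describes, using the standard-library venv module (the target path is arbitrary): an environment created without system-site-packages never sees the distro's patched python-requests, so the distro/PyPI mismatch cannot occur inside it.

    import venv

    # system_site_packages=False (the default) hides distro-installed packages
    # such as python-requests from everything installed in this environment.
    venv.EnvBuilder(system_site_packages=False, with_pip=True).create('/tmp/openstack-tools')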

Scheduler Proposal (cont.)

  • Continuing from last week’s summary…
  • Ed notes that Josh Harlow's solution isn't too different from the current design of hosts sending their state to the scheduler.
  • The reason for the Cassandra proposal was to eliminate the duplication and have the resources being scheduled and the scheduler itself all working with the same data.
    • This is the intent of the current design. The data can never be perfect, so work with what you have and hope the rest of the system deals with your mistakes and retries gracefully (e.g. the scheduled compute node no longer has the resources to accommodate a request).
    • To make this solution possible for downstream distributions and/or OpenStack users, you have to solve one of the following:
      • Cassandra developers upstream should start caring about OpenJDK.
      • Or Oracle should make its JVM free software.
    • Clint notes that Cassandra does not recommend OpenJDK [17].
      • Thomas adds:
        • Upstream does not test against OpenJDK.
        • They close bugs without fixing them when it only affects OpenJDK.
  • Thierry is generally negative about Java solutions, with this being one of the reasons [18]: the free software JVM is not on par with the non-free JVM, so we would indirectly force our users onto a non-free dependency. When a Java solution is the only solution for a problem space, that might still be a good trade-off versus reinventing the wheel. However, for distributed locks and sharing state, there are some other good options out there.
    • Clint mentions that ZooKeeper is different from Cassandra. He has had success with OpenJDK. It's also available on Debian/Ubuntu, making access for developers much easier.

[1] – https://wiki.openstack.org/wiki/ProductTeam

[2] – https://wiki.openstack.org/wiki/TC_Elections_September/October_2015#Results

[3] – http://developer.openstack.org/api-guide/quick-start/

[4] – http://developer.openstack.org/de/api-guide/quick-start/

[5] – http://developer.openstack.org/api-guide/quick-start/

[6] – https://review.openstack.org/#/c/232810/

[7] – https://review.openstack.org/231557

[8] – http://docs.openstack.org/liberty/

[9] – http://review-dev.openstack.org

[10] – https://pypi.python.org/pypi/gertty

[11] – http://docs.openstack.org/project-team-guide/release-management.html#release-liaisons

[12] – https://wiki.openstack.org/wiki/CrossProjectLiaisons#Release_management

[13] – https://bugs.launchpad.net/ubuntu/+source/python-requests/+bug/1505038

[14] – https://bodhi.fedoraproject.org/updates/FEDORA-2015-20de3774f4

[15] – https://bodhi.fedoraproject.org/updates/FEDORA-2015-1f580ccfa4

[16] – https://bodhi.fedoraproject.org/updates/FEDORA-2015-d7c710a812

[17] – https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/StartupChecks.java#L153-L155

[18] – https://twitter.com/mipsytipsy/status/596697501991702528

OpenStack Reactions

plug keystone’s authtoken middleware into a service “so graceful”

Engineering team staffing up

I’m very pleased to welcome Mike Perez (a.k.a. thingee) to the Engineering team at the OpenStack Foundation.

Within the Foundation, the Engineering team is tasked with ensuring the long-term health of the OpenStack open source development project. That includes helping to keep the project infrastructure up and running, organizing the design summits, and identifying issues within our open community early and engaging to fix them proactively. Mike brings a lot of development experience and community engagement to the table, and I expect we'll be able to address more issues more quickly as a result of him joining the team.

The team is now composed of two infrastructure engineers, Jeremy Stanley (current Infrastructure PTL) and Clark Boylan, and two development coordinators (Mike and myself). We are hiring new people (an upstream developer advocate and another development coordinator) to cope with our project's continued growth and the increased complexity of the challenges our community encounters.

You can find those job descriptions (and the openings in other teams at the Foundation right now) on the OpenStack job board. If you like the idea of working for a non-profit, have a keen sense of community, cherish having a lot of autonomy and enjoy working in a fast-paced environment, you should definitely consider joining us!

OpenStack Weekly Community Newsletter (Oct. 3 – Oct. 9)

What you need to know about Astara

Henrik Rosendahl, CEO of Akanda, introduces OpenStack’s newest project, an open-source network orchestration platform built by OpenStack operators for OpenStack clouds.

An OpenStack security primer

Meet the troubleshooters and firefighters of the OpenStack Security project and learn how you can get involved.

The Road to Tokyo

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

  • None this week

Deadlines and Contributors Notifications

Superuser Awards: your vote counts

(voting closes on 10/12 at 11:59 pm PT)

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events

What you need to know from the developer’s list

Success Bot Says

  • harlowja: The OpenStack Universe [1]
  • krotscheck: OpenStack CI posted first package to NPM [2]
  • markvan: The OpenStack Chef Cookbook team recently put in place all the pieces to allow running a full (devstack-like) CI test against all the cookbook projects' commits.
  • Tell us yours via IRC with a message “#success [insert success]”

Proposed Design Summit allocation

  • Track layout is on the official schedule [3].
  • PTLs or liaisons can start pushing up schedule details. The wiki [4] explains how.
  • Reach out to ttx or thingee on IRC if there are any issues.

Devstack extras.d support going away M-1

  • extras.d  (i.e. devstack plugins) have existed for 10 months.
  • Projects should prioritize getting to the real plugin architecture.
  • Sean compiled a list of the top 25 jobs (by volume) that are giving warnings of breaking [5].

Naming N and O Release Now

  • Sean Dague suggests since we already have the locations for N and O summits, we should start the name polls now.
  • Carol Barrett mentions that the current release naming process only allows naming to begin once the summit location for the release to be named is announced, and no sooner than the opening of development of the previous release [6].
    • Consensus was reached to have this changed.
    • Monty mentions this option was discussed in the past, but it was changed because we wanted to keep a sense of ownership by the people who actually worked on the release.
  • Sean will propose the process change to the next group of TC members.

Requests + urllib3 + distro packages

  • Problems:
    • The requests Python library has a very specific version of urllib3 it works with, so specific that it isn't always a released version.
    • Linux vendors often unbundle urllib3 from requests and then apply whatever patches are needed to their urllib3, while not updating their requests package dependencies.
    • We use urllib3 and requests in some places, but we don't mix them up.
    • If we have a distro-altered requests + a pip-installed urllib3, requests usually breaks.
  • There are lots of places where the last problem can happen; they all depend on us having a dependency on requests that is compatible with the version installed by the distro, but a urllib3 dependency that triggers an upgrade of just urllib3. When constraints are in use, the problem only occurs when the pinned requests version exactly matches the distro's requests version, but that will happen from time to time. Examples include:
    • DSVM test jobs where the base image already has python-requests installed.
    • Virtualenvs where system-site-packages is enabled.
  • Solutions:
    • Make sure none of our testing environments include distro requests packages.
      • Monty notes we’re working hard to make this happen.
    • Make our requirements tightly match what requests needs, to deal with unbundling.
      • In progress by Matt Riedemann [7].
    • Teach pip how to identify and avoid this situation by always upgrading requests.
    • Get the distros to stop un-vendoring urllib3.
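
As a rough way to check whether a particular environment is in the broken state described above, one can compare the urllib3 that requests actually imports with the standalone urllib3 package. This relies on the requests.packages alias that requests releases of this era still ship; on distros that unbundle urllib3 the alias points at the system copy, so the two modules are identical.

    import requests
    import urllib3
    from requests.packages import urllib3 as requests_urllib3

    # With a vendored copy these are two distinct modules that may carry
    # different versions; on an unbundled distro build they are the same object.
    print('standalone urllib3:', urllib3.__version__)
    print('urllib3 seen by requests:', requests_urllib3.__version__)
    print('same module:', requests_urllib3 is urllib3)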

Scheduler Proposal

  • Ed Leafe several months ago proposed an experiment [8], to see if switching the data model for the Nova scheduler to use Cassandra as the backend would be a significant improvement.
    • Due to the undertakings for Nova in Liberty, it was agreed this shouldn’t be focused on at the time, but the proposal could still be made.
    • Ed finished writing up the proposal [9].
  • Chris Friesen mentions some points that might need further discussion:
    • Some resources (RAM) only require tracking amounts. Other resources (CPUs, PCI devices) require tracking allocation of specific host resources.
    • If all of Nova’s scheduler and resource tracking was to switch to Cassandra, how do we handle pinned CPUs and PCI devices that are associated with a specific instance in the Nova database?
    • To avoid races we need to:
      • Serialize the entire scheduling operation.
      • Make the evaluation of filters and claiming of resources a single atomic database transaction.
  • Zane finds that which database to use is irrelevant to the proposal; really this is about moving the scheduling from a distributed collection of Python processes with ad-hoc synchronization into the database.
  • Maish notes that by adding a new database solution, we are up to three different solutions in OpenStack:
    • MySQL
    • MongoDB
    • Cassandra
  • Joshua Harlow provides a solution using a distributed lock manager:
    • Compute nodes gather information on VMs, free memory, CPU usage, used memory, etc., and push the information to be saved in a node in the DLM backend.
    • All schedulers watch for pushed updates and update an in-memory cache of the information for all hypervisors.
    • Besides the initial read-once on startup, this avoids periodically reading large data sets.
    • This information can also be used to know if a compute node is still running or not. This eliminates the need to do queries and periodic writes to the Nova database.

Service Catalog: TNG

  • The last cross-project meeting had good discussions about the next generation of the Keystone service catalog. Information has been recorded in an etherpad [10].
  • Sean Dague suggests we need a dedicated workgroup meeting to keep things going.
  • Monty provides a collection of the existing service catalogs [11].
  • Adam Young suggests using DNS for the service catalog.
    • David Stanek put together an implementation [12].

[1] – https://gist.github.com/harlowja/e5838f65edb0d3a9ff8a

[2] – https://www.npmjs.com/package/eslint-config-openstack

[3] – https://mitakadesignsummit.sched.org/

[4] – https://wiki.openstack.org/wiki/Design_Summit/SchedulingForPTLs

[5] – http://lists.openstack.org/pipermail/openstack-dev/2015-October/076559.html

[6] – http://governance.openstack.org/reference/release-naming.html

[7] – https://review.openstack.org/#/c/213310/

[8] – http://lists.openstack.org/pipermail/openstack-dev/2015-July/069593.html

[9] – http://blog.leafe.com/reimagining_scheduler/

[10] – https://etherpad.openstack.org/p/mitaka-service-catalog

[11] – https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog

[12] – https://gist.github.com/dstanek/093f851fdea8ebfd893d

Technical Committee Highlights October 7, 2015

It’s a busy week pre-summit and pre-release so let’s jump into it.

Technical Committee Elections this week

There are 19 candidates for 6 positions in this six-month cycle of TC elections. Active Technical Contributors (ATCs) should have an email in their inbox to use to vote in this election. Sign into review.openstack.org and then go to Settings > Contact Information to find your Preferred Email, which is where the ballot was sent. Vote by 23:59 UTC October 9th 2015.

Cross-project sessions at the Summit

By Friday October 9, please add your suggestions for cross-project sessions to this site: http://odsreg.openstack.org/ by clicking Suggest session. On Monday October 12, the technical committee will review all the submissions and fit them into the cross-project time slots at the Summit. There are about 26 proposals now for about twenty 40-minute time slots on the schedule.

Applying for OpenStack governance

One team’s application prompted a discussion on whether or not a project should apply to the TC right away or if it should have some amount of history of operating as an OpenStack project first. The consensus on that application was that we should wait and let the project get going first. The team was Kosmos, a new project, formed initially from members of Designate (DNS as a Service) and Neutron Load Balancing as a Service teams, so they thought they’d go ahead and apply for governance to get started. We had enough discussion about the thinking around “people we know” versus “showing your work” that we decided to ask them to wait and show more evidence that their work is going forward. We recognize that teams do need to be governed to get access to some services like docs hosting and integrated testing.

The last week of September we discussed both CloudKitty and Juju Charms for Ubuntu’s applications. We decided to delay a decision on the Juju charms application until there is something substantial in the repositories since they can be set up without being “official” now. That also gives time for understanding any licensing complexity. CloudKitty, a billing solution for OpenStack, was accepted for governance.

Astara a.k.a Akanda

Another interesting application discussion came this week when a Neutron driver, Astara, from the company Akanda, asked for governance in the “big tent” rather than adding their driver as a repo to the neutron team. The TC worked with both the outgoing and incoming PTLs on this one as it was a new concept for everyone. We approved their application to governance and now are reviewing the second patch in the series, adding the Astara driver to the Neutron repository collection.

Removing projects from the big tent

When the PTL elections rolled around we discovered that MagnetoDB had no contributors for the last release and decided to retire the project. We had a discussion about formalizing the policy and ensuring the communications about the removal are clear. With the easier inclusion policies in place, it also makes sense that rotating out could happen smoothly as well.

OpenStack Training Sessions available in Tokyo

The OpenStack Summit in Tokyo is just around the corner and we wanted to update you on some upcoming OpenStack Training and Certification classes that will take place in Tokyo around the Summit dates (October 27-30). For those of you traveling, you might want to take advantage of these offers and make the most of your visit.

Training Offerings:

OpenStack Networking Fundamentals Express by PLUMgrid
  • Date: October 26, 2015
  • Duration: 1 day
  • Time: 9am-5pm
  • Location: Iidabashi First Tower 2-6-1 Koraku, Bunkyo-ku Tokyo, 112-8560 Japan
  • Register here
OpenStack Networking Bootcamp Express by PLUMgrid
  • Date: October 30, 2015
  • Duration: 1 day
  • Time: 9am-5pm
  • Location: Iidabashi First Tower 2-6-1 Koraku, Bunkyo-ku Tokyo, 112-8560 Japan
  • Register here
MidoDay Tokyo by Midokura
  • Date: October 26, 2015
  • Duration: 1 day
  • Time: 9am-7pm
  • Location: ARK Mori Building at ARK Hills 1-12-32 Akasaka, Minato-ku Tokyo 107-6001 Japan
  • Register here
OpenStack Integration with Big Cloud Fabric by Big Switch Networks
  • Date: October 27-30, 2015
  • Duration: 30 minutes
  • Time: on-demand
  • Location: online
  • Register here
Mirantis OpenStack Bootcamp (OS100)
  • Dates: October 24 - October 26
  • Duration: 3 Days
  • Time: 9 am – 5 pm
  • Location: Tokyo, Japan, TBD
  • Register here
If you have any questions regarding the above Training and Certifications, please contact the Member companies directly for more information.

 

See you in Tokyo!

 


OpenStack Weekly Community Newsletter (Sept. 26 – Oct. 2)

53 things that are new in OpenStack Liberty

Another autumn, another OpenStack release.  OpenStack’s 12th release, Liberty, is due on October 15, and release candidates are already being made available.  But what can we expect from the last six months of development?

App Developers: First App on OpenStack Tutorial Needs You

The tutorial that guides new developers to deploy their first application on OpenStack is complete for Apache Libcloud and needs help for new languages and SDKs.

The Road to Tokyo 

  • The OpenStack Summit Tokyo will sell out! Register NOW!
  • The deadline to request registration transfers and refunds is October 12. Please email [email protected] with any requests or questions
  • The schedule and mobile app for the OpenStack Summit in Tokyo are now available
  • Speakers, sponsors, and ATC registration codes deactivate 10/19, so register now!

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

  • None this week

Deadlines and Contributors Notifications 

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events 

What you need to know from the developer’s list

PTL election results are in!

Proposed Design Summit track/room/time allocation

OpenStack Weekly Community Newsletter (Sept. 19 – 25)

Register for OpenStack Summit Tokyo 2015

Full access registration prices increase on 9/29 at 11:59pm PT

This trove of user stories highlights what people want in OpenStack

The Product Working Group recently launched a Git repository to collect requirements ranging from encrypted storage to rolling upgrades.

How storage works in containers

Nick Gerasimatos, senior director of cloud services engineering at FICO, dives into the lack of persistent storage with containers and how Docker volumes and data containers provide a fix.

The Road to Tokyo 

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

Deadlines and Contributors Notifications

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events 

What you need to know from the developer’s list

Handling Projects with no PTL candidates

  • The technical committee will appoint a PTL [1] if there is no identified eligible candidate.
  • Appointed PTLs:
    • Robert Clark nominated as Security PTL
    • Serg Melikyan nominated as Murano PTL
    • Douglas Mendizabal nominated as Barbican PTL
    • Election for Magnum PTL between Adrian Otto and Hongbin Lu
  • With MagnetoDB being abandoned, no PTL was chosen. Instead, it will be fast-tracked for removal [2] from the official list of OpenStack projects.

Release help needed – we are incompatible with ourselves

  • Robert Collins notes that while the constraints system we have in place for recognizing incompatible components in our release is working, the release team needs help from the community to fix the incompatibilities that exist so we can cut the full Liberty release.
  • Issues that exist:
    • OpenStack client not able to create an image.
      • Fix is merged [3].

Semver and dependency changes

  • Robert Collins says currently we don’t provide guidance on what happens when the only changes in a project are dependency changes and a release is made.
    • Today the release team treats dependency changes as a “feature” rather than a bug fix (e.g. if the previous release was 1.2.3 and a requirements sync happens, the next version is 1.3.0).
    • The reasons behind this are complex; some guidance is needed to answer the questions:
      • Is this requirements change an API break?
      • Is this requirements change feature work?
      • Is this requirements change a bug fix?
    • All of these questions can be true. Some examples:
      • If library X exposes library Y as part of its API, and X's dependency on Y changes from Y>=1 to Y>=2 because X needs a feature from Y==2.
      • Library Y is not exposed in library X's API; however, a change in X's dependency on Y will impact users who independently use Y (ignoring intricacies surrounding pip here).
    • Proposal:
      • nothing -> a requirement -> major version change
      • 1.x.y -> 2.0.0 -> major version change
      • 1.2.y -> 1.3.0 -> minor version change
      • 1.2.3 -> 1.2.4 -> patch version change
    • Thierry Carrez is OK with the last two proposals. Defaulting to a major version bump sounds a bit like overkill.
    • Doug Hellmann reminds us that we can't assume the dependency is using semver itself. We would need something other than the version number to determine from the outside whether the API is in fact breaking.
    • With this problem being so complicated, Doug would rather over-simplify the analysis of requirements updates until we're better at identifying our own API-breaking changes and differentiating between features and bug fixes. This will allow us to be consistent, if not 100% correct. (A small illustrative sketch of the proposed mapping follows this summary.)
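
As a small illustration of Robert's proposed mapping (the helper name and structure are ours, not from the thread): given how a dependency changed, it returns the version the consuming project should release next.

    def bump_for_dependency_change(current_version, change):
        """Proposed rule: a new requirement or a dependency major bump -> major;
        a dependency minor bump -> minor; a dependency patch bump -> patch."""
        major, minor, patch = (int(part) for part in current_version.split('.'))
        if change in ('new-requirement', 'major'):
            return '%d.0.0' % (major + 1)
        if change == 'minor':
            return '%d.%d.0' % (major, minor + 1)
        return '%d.%d.%d' % (major, minor, patch + 1)

    # e.g. a requirements sync that only bumps a dependency's minor version:
    assert bump_for_dependency_change('1.2.3', 'minor') == '1.3.0'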

Criteria for applying vulnerability:managed tag

  • The vulnerability management processes were brought to the big tent a couple of months ago [4].
  • Initially we listed which repos the Vulnerability Management Team (VMT) tracks for vulnerabilities.
    • The TC decided to change this from repos to deliverables, as per-repo tags were decided against.
  • Jeremy Stanley provides transparency on how deliverables can qualify for this tag:
    • All repos in a given deliverable must qualify. If one repo doesn't, none of them do.
    • Points of contact:
      • Deliverable must have a dedicated point of contact.
        • The VMT will engage with this contact to triage reports.
      • A group of core reviewers should be part of the <project>-coresec team and will:
        • Confirm whether a bug is accurate/applicable.
        • Provide pre-approval of patches attached to reports.
    • The PTL for the deliverable should agree to act as (or delegate) a vulnerability management liaison as an escalation point for the VMT.
    • The repos within a deliverable should have a bug tracker configured to initially provide access to privately reported vulnerabilities only to the VMT.
      • The VMT will determine if the vulnerability is reported against the correct deliverable and redirect when possible.
    • The deliverable repos should undergo a third-party review/audit looking for obvious signs of insecure design or risky implementation.
      • This aims to keep the VMT’s workload down.
      • It has not been identified who will perform this review. Maybe the OpenStack Security project team?
  • Review of this proposal is posted [5].

Consistent support for SSL termination proxies across all APIs

  • While a bug [6] was being debugged, an issue was identified where an API sitting behind a proxy performing SSL termination would not generate the right redirection (http instead of https).
    • A review [7] proposes a config option ‘secure_proxy_ssl_header’, which allows the API service to detect SSL termination based on the X-Forwarded-Proto header.
  • Another bug back in 2014 was opened with the same issue [8].
    • Several projects applied patches to fix this issue, but they are inconsistent:
      • Glance added public_endpoint config
      • Cinder added public_endpoint config
      • Heat added secure_proxy_ssl_header config (through heat.api.openstack:sslmiddleware_filter)
      • Nova added secure_proxy_ssl_header config
      • Manila added secure_proxy_ssl_header config (through oslo_middleware.ssl:SSLMiddleware.factory)
      • Ironic added public_endpoint config
      • Keystone added secure_proxy_ssl_header config
  • Ben Nemec comments that the service level is the wrong place to solve this, since it would require changes in a bunch of different API services. Instead it should be fixed in the proxy that's converting the traffic to http.
    • Sean Dague notes that this should be done in the service catalog. Service discovery is a base thing that all services should use when talking to each other. There's an OpenStack spec [9] in an attempt to get a handle on this.
    • Mathieu Gagné notes that this won’t work. There is a “split view” in the service catalog where internal management nodes have a specific catalog and public nodes (for users) have a different one.
      • Suggestion to use the oslo middleware SSL support for the ‘secure_proxy_ssl_header’ config to fix the problem with little code (a minimal sketch of such middleware appears after this summary).
      • Sean agrees the split view needs to be considered; however, another layer of work shouldn't decide whether the service catalog is a good way to keep track of what our service URLs are. We shouldn't push a model where Keystone is optional.
      • Sean notes that while the ‘secure_proxy_ssl_header’ config solution supports the case where one HA proxy with SSL termination sits in front of one API service, it may not work in the case of one API service behind N HA proxies, for:
        • Clients needing to understand the “Location:” headers correctly.
        • Libraries like requests/phantomjs can follow the links provided in REST documents, and they're correct.
        • The minority of services that “operate without keystone” as an option are able to function.
      • ZZelle mentions this solution does not work in the cases when the service itself acts as a proxy (e.g. nova image-list).
      • Would this solution work in the HA Proxy case where there is one terminating address for multiple backend servers?
        • Yes, by honoring the X-Forwarded-Host and X-Forwarded-Port headers, which are set by HTTP proxies, making WSGI applications unaware of the fact that there is a proxy in front of them.
  • Jamie Lennox says this same topic came up as a blocker in a Devstack patch to get TLS testing in the gate with HA proxy.
    • A longer-term solution: transition services to use relative links.
      • This is a pretty serious change. We've been returning absolute URLs forever, so assuming that all client code out there would work with relative links is a big assumption. That's a major version change for sure.
  • Sean agrees that we have enough pieces to get something better with proxy headers for Mitaka. We can do the remaining edge cases if we clean up the service catalog use.
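
To make the middleware suggestion above concrete, here is a minimal, non-authoritative WSGI sketch; the class name and default header are ours, and this is not the actual oslo_middleware implementation. It rewrites the request scheme from the header a TLS-terminating proxy sets, so redirects and Location headers are generated with https.

    class SSLTerminationMiddleware(object):
        """Sketch: trust the configured forwarded-proto header from the proxy."""

        def __init__(self, application, secure_proxy_ssl_header='HTTP_X_FORWARDED_PROTO'):
            self.application = application
            self.header = secure_proxy_ssl_header

        def __call__(self, environ, start_response):
            forwarded_proto = environ.get(self.header)
            if forwarded_proto in ('http', 'https'):
                # WSGI applications use wsgi.url_scheme when reconstructing URLs
                # for redirects and Location headers.
                environ['wsgi.url_scheme'] = forwarded_proto
            return self.application(environ, start_response)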

[1] – http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html

[2] – https://review.openstack.org/#/c/224743/

[3] – https://review.openstack.org/#/c/225443/

[4] – http://governance.openstack.org/reference/tags/vulnerability_managed.html

[5] – https://review.openstack.org/#/c/226869/

[6] – https://bugs.launchpad.net/python-novaclient/+bug/1491579

[7] – https://review.openstack.org/#/c/206479/

[8] – https://bugs.launchpad.net/glance/+bug/1384379

[9] – https://review.openstack.org/#/c/181393/