Engineering team staffing up

I’m very pleased to welcome Mike Perez (a.k.a. thingee) to the Engineering team at the OpenStack Foundation.

Within the Foundation, the Engineering team is tasked with ensuring the long-term health of the OpenStack open source development project. That includes helping keep the project infrastructure up and running, organizing the design summits, and identifying issues within our open community early so we can engage and fix them proactively. Mike brings a lot of development experience and community engagement to the table, and I expect we’ll be able to address more issues more quickly as a result of him joining the team.

The team is now composed of two infrastructure engineers, with Jeremy Stanley (current Infrastructure PTL) and Clark Boylan, and two development coordinators (Mike and myself). We are hiring new people (an upstream developer advocate and another development coordinator) to cope with our project’s continued growth and the increased complexity of the challenges our community encounters.

You can find those job descriptions (and the openings in other teams at the Foundation right now) on the OpenStack job board. If you like the idea of working for a non-profit, have a keen sense of community, cherish having a lot of autonomy and enjoy working in a fast-paced environment, you should definitely consider joining us!

OpenStack Weekly Community Newsletter (Oct. 3 – Oct. 9)

What you need to know about Astara

Henrik Rosendahl, CEO of Akanda, introduces OpenStack’s newest project, an open-source network orchestration platform built by OpenStack operators for OpenStack clouds.

An OpenStack security primer

Meet the troubleshooters and firefighters of the OpenStack Security project and learn how you can get involved.

The Road to Tokyo

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

  • None this week

Deadlines and Contributors Notifications

Superuser Awards: your vote counts

(voting closes on 10/12 at 11:59 pm PT)

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events

What you need to know from the developer’s list

Success Bot Says

  • harlowja: The OpenStack Universe [1]
  • krotscheck: OpenStack CI posted first package to NPM [2]
  • markvan: The OpenStack Chef Cookbook team recently put in place all the pieces to allow running a full (devstack-like) CI test against all the cookbook projects’ commits.
  • Tell us yours via IRC with a message “#success [insert success]”

Proposed Design Summit allocation

  • Track layout is on the official schedule [3].
  • PTLs or liaisons can start pushing up schedule details. The wiki [4] explains how.
  • Reach out to ttx or thingee on IRC if there are any issues.

Devstack extras.d support going away in M-1

  • Devstack plugins (which replace the old extras.d hooks) have existed for 10 months.
  • Projects should prioritize getting to the real plugin architecture.
  • Sean compiled a list of the top 25 jobs (by volume) that are emitting warnings that they will break [5].
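For reference, the plugin interface looks roughly like this (a minimal sketch: the service name “myservice”, the repo URL, and the plain echo calls standing in for devstack’s echo_summary are all placeholders):

```shell
# Hypothetical devstack/plugin.sh, sourced by devstack with the current
# mode ("stack", "unstack", "clean") and phase ("install", "extra", ...).
# A plugin is enabled from local.conf with a line such as:
#   enable_plugin myservice https://git.openstack.org/openstack/myservice
myservice_plugin() {
    local mode=$1 phase=$2
    if [[ $mode == "stack" && $phase == "install" ]]; then
        echo "Installing myservice"   # stand-in for devstack's echo_summary
    elif [[ $mode == "stack" && $phase == "extra" ]]; then
        echo "Starting myservice"
    elif [[ $mode == "unstack" ]]; then
        echo "Stopping myservice"
    fi
}
```

The point of the real architecture is the same as this sketch: the hooks live in the project’s own repo instead of an extras.d file dropped into devstack itself.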

Naming N and O Release Now

  • Sean Dague suggests that since we already have the locations for the N and O summits, we should start the name polls now.
  • Carol Barrett mentions that the current release naming process only allows a release to be named once it is announced, and no sooner than the opening of development of the previous release [6].
    • Consensus was reached to change this.
    • Monty mentions this option was discussed in the past, but the process was changed because we wanted to keep a sense of ownership by the people who actually worked on the release.
  • Sean will propose for the process to be changed to the next group of TC members.

Requests + urllib3 + distro packages

  • Problems:
    • The requests Python library works with a very specific version of urllib3. So specific that the versions it needs aren’t always released.
    • Linux vendors often unbundle urllib3 from requests and then apply what patches were needed to their urllib3, while not updating their requests package dependencies.
    • We use urllib3 and requests in some places, but we don’t mix them up.
    • If we have a distro-altered requests + pip-installed urllib3, requests usually breaks.
  • There are lots of places the last problem can happen; they all depend on us having a dependency on requests that is compatible with the version installed by the distro, but a urllib3 dependency that triggers an upgrade of just urllib3. When constraints are in use, the requests version has to match the distro requests version exactly, but that will only happen from time to time. Examples include:
    • DSVM test jobs where the base image already has python-requests installed.
    • Virtualenvs where the system-site-packages are enabled.
  • Solutions:
    • Make sure none of our testing environments include distro requests packages.
      • Monty notes we’re working hard to make this happen.
    • Tightly match our requirements to what requests needs, to deal with the unbundling.
      • In progress by Matt Riedemann [7].
    • Teach pip how to identify and avoid this situation by always upgrading requests.
    • Get the distros to stop un-vendoring urllib3.
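To make the failure mode concrete, here is a toy model of the partial-upgrade hazard (the names and version handling are deliberately simplified and are not real pip resolution logic):

```python
def parse(version):
    """Parse 'X.Y[.Z]' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def unbundling_hazard(installed, requirements):
    """installed: {pkg: (version, source)} with source 'distro' or 'pip'.
    requirements: {pkg: minimum required version string}.

    Returns True in the broken combination described above: pip would
    upgrade urllib3 to satisfy our requirements while the distro's
    requests (patched to use the unbundled urllib3) stays in place.
    """
    requests_source = installed.get("requests", ("0", "pip"))[1]
    urllib3_entry = installed.get("urllib3")
    needs_upgrade = (urllib3_entry is not None
                     and "urllib3" in requirements
                     and parse(urllib3_entry[0]) < parse(requirements["urllib3"]))
    return requests_source == "distro" and needs_upgrade
```

The dangerous case is exactly the one the solutions above try to eliminate: either no distro requests is present, or the pins are tight enough that no lone urllib3 upgrade is ever triggered.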

Scheduler Proposal

  • Several months ago, Ed Leafe proposed an experiment [8] to see whether switching the data model for the Nova scheduler to use Cassandra as the backend would be a significant improvement.
    • Due to the undertakings for Nova in Liberty, it was agreed this shouldn’t be focused on at the time, but the proposal could still be made.
    • Ed finished writing up the proposal [9].
  • Chris Friesen mentions some points that might need further discussion:
    • Some resources (RAM) only require tracking amounts. Other resources (CPUs, PCI devices) require tracking allocation of specific host resources.
    • If all of Nova’s scheduler and resource tracking was to switch to Cassandra, how do we handle pinned CPUs and PCI devices that are associated with a specific instance in the Nova database?
    • To avoid races we need to:
      • Serialize the entire scheduling operation.
      • Make the evaluation of filters and claiming of resources a single atomic database transaction.
  • Zane finds the choice of database irrelevant to the proposal; really this is about moving the scheduling from a distributed collection of Python processes with ad-hoc synchronization into the database.
  • Maish notes that by adding a new database solution, we would be up to three different database solutions in OpenStack:
    • MySQL
    • MongoDB
    • Cassandra
  • Joshua Harlow provides a solution using a distributed lock manager:
    • Compute nodes gather information (VMs, free memory, CPU usage, memory used, etc.) and push it to be saved in a node in the DLM backend.
    • All schedulers watch for pushed updates and update an in-memory cache of the information of all hypervisors.
    • Besides the initial read-once on start up, this avoids reading large sets periodically.
    • This information can also be used to know if a compute node is still running or not. This eliminates the need to do queries and periodic writes to the Nova database.
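That push/watch flow can be sketched as follows (the in-memory backend below is a stand-in for a real DLM such as ZooKeeper, and all names are invented for illustration):

```python
import threading

class FakeDLMBackend:
    """Stand-in for a DLM backend: stores per-compute-node state and
    notifies watchers when a node pushes an update."""
    def __init__(self):
        self._state = {}
        self._watchers = []
        self._lock = threading.Lock()

    def watch(self, callback):
        self._watchers.append(callback)

    def snapshot(self):
        with self._lock:
            return dict(self._state)

    def push(self, node, info):
        # A compute node pushes its current resource state.
        with self._lock:
            self._state[node] = info
        for callback in self._watchers:
            callback(node, info)

class Scheduler:
    """Keeps an in-memory cache of every hypervisor's state, updated by
    watch events instead of periodic large reads from the database."""
    def __init__(self, backend):
        self.cache = backend.snapshot()   # initial read-once on startup
        backend.watch(self._on_update)

    def _on_update(self, node, info):
        self.cache[node] = info

    def pick_host(self, ram_needed_mb):
        # Trivial placement policy purely for illustration: most free RAM.
        fits = [n for n, i in self.cache.items()
                if i["free_ram_mb"] >= ram_needed_mb]
        return max(fits, key=lambda n: self.cache[n]["free_ram_mb"],
                   default=None)
```

Because every scheduler sees the same stream of updates, the cache doubles as the liveness signal Joshua describes: a node that stops pushing is a node that has stopped running.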

Service Catalog: TNG

  • The last cross-project meeting had good discussions about the next generation of the Keystone service catalog. Information has been recorded in an etherpad [10].
  • Sean Dague suggests we need a dedicated workgroup meeting to keep things going.
  • Monty provides a collection of the existing service catalogs [11].
  • Adam Young suggests using DNS for the service catalog.
    • David Stanek put together an implementation [12].

[1] – https://gist.github.com/harlowja/e5838f65edb0d3a9ff8a

[2] – https://www.npmjs.com/package/eslint-config-openstack

[3] – https://mitakadesignsummit.sched.org/

[4] – https://wiki.openstack.org/wiki/Design_Summit/SchedulingForPTLs

[5] – http://lists.openstack.org/pipermail/openstack-dev/2015-October/076559.html

[6] – http://governance.openstack.org/reference/release-naming.html

[7] – https://review.openstack.org/#/c/213310/

[8] – http://lists.openstack.org/pipermail/openstack-dev/2015-July/069593.html

[9] – http://blog.leafe.com/reimagining_scheduler/

[10] – https://etherpad.openstack.org/p/mitaka-service-catalog

[11] – https://wiki.openstack.org/wiki/API_Working_Group/Current_Design/Service_Catalog

[12] – https://gist.github.com/dstanek/093f851fdea8ebfd893d

Technical Committee Highlights October 7, 2015

It’s a busy week pre-summit and pre-release so let’s jump into it.

Technical Committee Elections this week

There are 19 candidates for 6 positions in this six-month cycle of TC elections. Active Technical Contributors (ATCs) should have an email in their inbox to use to vote in this election. Sign into review.openstack.org and then go to Settings > Contact Information to find your Preferred Email, which is where the ballot was sent. Vote by 23:59 UTC October 9th 2015.

Cross-project sessions at the Summit

By Friday October 9, please add your suggestions for cross-project sessions to this site: http://odsreg.openstack.org/ by clicking Suggest session. On Monday October 12, the technical committee will review all the submissions and fit them into the cross-project time slots at the Summit. There are about 26 proposals now for about twenty 40-minute time slots on the schedule.

Applying for OpenStack governance

One team’s application prompted a discussion on whether or not a project should apply to the TC right away or if it should have some amount of history of operating as an OpenStack project first. The consensus on that application was that we should wait and let the project get going first. The team was Kosmos, a new project, formed initially from members of Designate (DNS as a Service) and Neutron Load Balancing as a Service teams, so they thought they’d go ahead and apply for governance to get started. We had enough discussion about the thinking around “people we know” versus “showing your work” that we decided to ask them to wait and show more evidence that their work is going forward. We recognize that teams do need to be governed to get access to some services like docs hosting and integrated testing.

During the last week of September we discussed both the CloudKitty and Juju Charms for Ubuntu applications. We decided to delay a decision on the Juju Charms application until there is something substantial in the repositories, since they can be set up without being “official” now. That also gives time for understanding any licensing complexity. CloudKitty, a billing solution for OpenStack, was accepted for governance.

Astara a.k.a Akanda

Another interesting application discussion came this week when a Neutron driver, Astara, from the company Akanda, asked for governance in the “big tent” rather than adding their driver as a repo to the neutron team. The TC worked with both the outgoing and incoming PTLs on this one as it was a new concept for everyone. We approved their application to governance and now are reviewing the second patch in the series, adding the Astara driver to the Neutron repository collection.

Removing projects from the big tent

When the PTL elections rolled around we discovered that MagnetoDB had no contributors for the last release and decided to retire the project. We had a discussion about formalizing the policy and ensuring the communications about the removal are clear. With the easier inclusion policies in place, it also makes sense that rotating out could happen smoothly as well.

OpenStack Training Sessions available in Tokyo

The OpenStack Summit in Tokyo is just around the corner and we wanted to update you on some upcoming OpenStack Training and Certification classes that will take place in Tokyo around the Summit dates (October 27-30). For those of you traveling, you might want to take advantage of these offers and make the most of your visit.

Training Offerings:

OpenStack Networking Fundamentals Express by PLUMgrid
  • Date: October 26, 2015
  • Duration: 1 day
  • Time: 9am-5pm
  • Location: Iidabashi First Tower 2-6-1 Koraku, Bunkyo-ku Tokyo, 112-8560 Japan
  • Register here
OpenStack Networking Bootcamp Express by PLUMgrid
  • Date: October 30, 2015
  • Duration: 1 day
  • Time: 9am-5pm
  • Location: Iidabashi First Tower 2-6-1 Koraku, Bunkyo-ku Tokyo, 112-8560 Japan
  • Register here
MidoDay Tokyo by Midokura
  • Date: October 26, 2015
  • Duration: 1 day
  • Time: 9am-7pm
  • Location: ARK Mori Building at ARK Hills 1-12-32 Akasaka, Minato-ku Tokyo 107-6001 Japan
  • Register here
OpenStack Integration with Big Cloud Fabric by Big Switch Networks
  • Date: October 27-30, 2015
  • Duration: 30 minutes
  • Time: on-demand
  • Location: online
  • Register here
Mirantis OpenStack Bootcamp (OS100)
  • Dates: October 24-26, 2015
  • Duration: 3 Days
  • Time: 9 am – 5 pm
  • Location: Tokyo, Japan, TBD
  • Register here
If you have any questions regarding the above Training and Certifications, please contact the Member companies directly for more information.

 

See you in Tokyo!

 


OpenStack Weekly Community Newsletter (Sept. 26 – Oct. 2)

53 things that are new in OpenStack Liberty

Another autumn, another OpenStack release.  OpenStack’s 12th release, Liberty, is due on October 15, and release candidates are already being made available.  But what can we expect from the last six months of development?

App Developers: First App on OpenStack Tutorial Needs You

The tutorial that guides new developers to deploy their first application on OpenStack is complete for Apache Libcloud and needs help for new languages and SDKs.

The Road to Tokyo 

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

  • None this week

Deadlines and Contributors Notifications 

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events 

What you need to know from the developer’s list

PTL election results are in!

Proposed Design Summit track/room/time allocation

OpenStack Weekly Community Newsletter (Sept. 19 – 25)

Register for OpenStack Summit Tokyo 2015

Full access registration prices increase on 9/29 at 11:59pm PT

This trove of user stories highlights what people want in OpenStack

The Product Working Group recently launched a Git repository to collect requirements ranging from encrypted storage to rolling upgrades.

How storage works in containers

Nick Gerasimatos, senior director of cloud services engineering at FICO, dives into the lack of persistent storage with containers and how Docker volumes and data containers provide a fix.

The Road to Tokyo 

Community feedback

OpenStack is always interested in feedback and community contributions. If you would like to see a new section in the OpenStack Weekly Community Newsletter or have ideas on how to present content, please get in touch: [email protected].

Reports from Previous Events 

Deadlines and Contributors Notifications

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events 

What you need to know from the developer’s list

Handling Projects with no PTL candidates

  • The technical committee will appoint a PTL [1] if there is no identified eligible candidate.
  • Appointed PTLs:
    • Robert Clark nominated security PTL
    • Serg Melikyan nominated Murano PTL
    • Douglas Mendizabal nominated Barbican PTL
    • Election for Magnum PTL between Adrian Otto and Hongbin Lu
  • MagnetoDB is being abandoned, so no PTL was chosen. Instead, it will be fast-tracked for removal [2] from the official list of OpenStack projects.

Release help needed – we are incompatible with ourselves

  • Robert Collins raises that while the constraints system we have in place for recognizing incompatible components in our release is working, the release team needs help from the community to fix the incompatibilities that exist so we can cut the full Liberty release.
  • Issues that exist:
    • OpenStack client not able to create an image.
      • Fix is merged [3].

Semver and dependency changes

  • Robert Collins says currently we don’t provide guidance on what happens when the only changes in a project are dependency changes and a release is made.
    • Today the release team treats dependency changes as a “feature” rather than a bug fix (e.g. if the previous release was 1.2.3 and a requirements sync happens, the next version is 1.3.0).
    • The reasons behind this are complex; some guidance is needed to answer these questions:
      • Is this requirements change an API break?
      • Is this requirements change feature work?
      • Is this requirements change a bug fix?
    • All of these questions can be true. Some examples:
      • Library X exposes library Y as part of its API, and X’s dependency on Y changes from Y>=1 to Y>=2 because X needs a feature from Y==2.
      • Library Y is not exposed in library X’s API; however, a change in X’s dependency on Y will impact users who independently use Y (ignoring intricacies surrounding pip here).
    • Proposal:
      • nothing -> a requirement -> major version change
      • 1.x.y -> 2.0.0 -> major version change
      • 1.2.y -> 1.3.0 -> minor version change
      • 1.2.3 -> 1.2.4 -> patch version change
    • Thierry Carrez is OK with the last two proposals; defaulting to a major version bump sounds a bit like overkill to him.
    • Doug Hellmann reminds that we can’t assume the dependency is using semver itself. We would need something other than the version number to determine from the outside whether the API is in fact breaking.
    • Because this problem is so complicated, Doug would rather over-simplify the analysis of requirements updates until we’re better at identifying our own API-breaking changes and differentiating between features and bug fixes. This will allow us to be consistent, if not 100% correct.
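The proposed mapping above can be sketched as a small helper (illustrative only, and note Doug’s caveat that the dependency’s own version numbers may not follow semver):

```python
def required_bump(old, new):
    """Given a dependency's old pin (None if the requirement is newly
    added) and its new pin, both as (major, minor, patch) tuples, return
    which component of our own version the proposal would bump."""
    if old is None:
        return "major"   # nothing -> a requirement
    if new[0] != old[0]:
        return "major"   # dependency went 1.x.y -> 2.0.0
    if new[1] != old[1]:
        return "minor"   # dependency went 1.2.y -> 1.3.0
    return "patch"       # dependency went 1.2.3 -> 1.2.4
```

For example, a requirements sync that moves a dependency from 1.2.3 to 1.3.0 would call for a minor bump of the consuming library, not the feature-level bump applied today.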

Criteria for applying vulnerability:managed tag

  • The vulnerability management processes were brought to the big tent a couple of months ago [4].
  • Initially we listed what repos the Vulnerability Management Team (VMT) tracks for vulnerabilities.
    • TC decided to change this from repos to deliverables as per-repo tags were decided against.
  • Jeremy Stanley provides transparency for how deliverables can qualify for this tag:
    • All repos in a given deliverable must qualify. If one repo in a deliverable doesn’t qualify, none of them do.
    • Points of contact:
      • Deliverable must have a dedicated point of contact.
        • The VMT will engage with this contact to triage reports.
      • A group of core reviewers should be part of the <project>-coresec team and will:
        • Confirm whether a bug is accurate/applicable.
        • Provide pre-approval of patches attached to reports.
    • The PTLs for the deliverable should agree to act as (or delegate) a vulnerability management liaison to escalate for the VMT.
    • The repos within a deliverable should have a bug tracker configured to initially restrict access to privately reported vulnerabilities to the VMT.
      • The VMT will determine if the vulnerability is reported against the correct deliverable and redirect when possible.
    • The deliverable repos should undergo a third-party review/audit looking for obvious signs of insecure design or risky implementation.
      • This aims to keep the VMT’s workload down.
      • It has not been identified who will perform this review. Maybe the OpenStack Security project team?
  • Review of this proposal is posted [5].

Consistent support for SSL termination proxies across all APIs

  • While a bug [6] was being debugged, an issue was identified where an API sitting behind a proxy performing SSL termination would not generate the right redirection (http instead of https).
    • A review [7] proposes a config option ‘secure_proxy_ssl_header’ which allows the API service to detect SSL termination based on the X-Forwarded-Proto header.
  • Another bug with the same issue was opened back in 2014 [8].
    • Several projects applied patches to fix this issue, but the approaches are inconsistent:
      • Glance added public_endpoint config
      • Cinder added public_endpoint config
      • Heat added secure_proxy_ssl_header config (through heat.api.openstack:sslmiddleware_filter)
      • Nova added secure_proxy_ssl_header config
      • Manila added secure_proxy_ssl_header config (through oslo_middleware.ssl:SSLMiddleware.factory)
      • Ironic added public_endpoint config
      • Keystone added secure_proxy_ssl_header config
  • Ben Nemec comments that solving this at the service level is the wrong place, since it requires changes in a bunch of different API services; instead, it should be fixed in the proxy that’s converting the traffic to http.
    • Sean Dague notes that this should be done in the service catalog. Service discovery is a base thing that all services should use in talking to each other. There’s an OpenStack spec [9] in an attempt to get a handle on this.
    • Mathieu Gagné notes that this won’t work. There is a “split view” in the service catalog where internal management nodes have a specific catalog and public nodes (for users) have a different one.
      • Suggestion to use oslo middleware SSL for supporting the ‘secure_proxy_ssl_header’ config to fix the problem with little code.
      • Sean agrees the split view needs to be considered, however, another layer of work shouldn’t decide if the service catalog is a good way to keep track of what our service urls are. We shouldn’t push a model where Keystone is optional.
      • Sean notes that while the ‘secure_proxy_ssl_header’ config solution supports the case of one HA proxy doing SSL termination in front of one API service, it may not work with one API service behind N HA proxies while ensuring that:
        • Clients understand the “Location:” headers correctly.
        • Libraries like requests/phantomjs can follow the links provided in REST documents, and they’re correct.
        • The minority of services that “operate without keystone” as an option are able to function.
      • ZZelle mentions this solution does not work in the cases when the service itself acts as a proxy (e.g. nova image-list).
      • Would this solution work in the HA Proxy case where there is one terminating address for multiple backend servers?
        • Yes, by honoring the X-Forwarded-Host and X-Forwarded-Port headers set by HTTP proxies, making WSGI applications unaware that there is a proxy in front of them.
  • Jamie Lennox says this same topic came up as a block in a Devstack patch to get TLS testing in the gate with HA Proxy.
    • A longer-term solution: transition services to use relative links.
      • This is a pretty serious change. We’ve been returning absolute URLs forever, so assuming that all client code out there would work with relative links is a big assumption. That’s a major version bump for sure.
  • Sean agrees that we have enough pieces to get something better with proxy headers for Mitaka. We can address the remaining edge cases when we clean up the service catalog use.
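The ‘secure_proxy_ssl_header’ approach boils down to trusting a proxy-set header when building URLs. A simplified WSGI middleware sketch of that idea (not the actual oslo.middleware code; names are illustrative):

```python
class SSLTerminationMiddleware:
    """Rewrites wsgi.url_scheme from the header set by the terminating
    proxy, so redirects and "Location:" headers use the right scheme."""

    def __init__(self, app, secure_proxy_ssl_header="HTTP_X_FORWARDED_PROTO"):
        self.app = app
        self.header = secure_proxy_ssl_header

    def __call__(self, environ, start_response):
        forwarded = environ.get(self.header)
        if forwarded in ("http", "https"):
            environ["wsgi.url_scheme"] = forwarded
        # A fuller version would also honor X-Forwarded-Host and
        # X-Forwarded-Port, per the discussion above.
        return self.app(environ, start_response)
```

This only helps if the operator’s proxy actually sets the header, which is exactly why the thread keeps coming back to the service catalog as the more general answer.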

[1] – http://governance.openstack.org/resolutions/20141128-elections-process-for-leaderless-programs.html

[2] – https://review.openstack.org/#/c/224743/

[3] – https://review.openstack.org/#/c/225443/

[4] – http://governance.openstack.org/reference/tags/vulnerability_managed.html

[5] – https://review.openstack.org/#/c/226869/

[6] – https://bugs.launchpad.net/python-novaclient/+bug/1491579

[7] – https://review.openstack.org/#/c/206479/

[8] – https://bugs.launchpad.net/glance/+bug/1384379

[9] – https://review.openstack.org/#/c/181393/

Technical Committee Highlights September 25, 2015

Tagging efforts for diversity and deprecation policies

Since tagging projects for a percentage of diverse affiliations, we are also discussing the idea of an inverse tag to indicate non-diversity. Some of us on the TC are unsure that a lack of diversity is an indicator that the project isn’t useful or successful, especially in the early days of a project’s maturation. Others would like to indicate that a lack of diversity could mean support would be easy to pull.

We’ve passed a “follows-standard-deprecation” tag that projects can apply for in order to indicate their deprecation policies follow the standard for all OpenStack projects. No projects are asserting it yet, but we want to make sure the community knows we’ve written the policies for configuration option and potential feature deprecation.

Code of Conduct (CoC)

Cindy Pallares reached out to the Technical Committee with a proposal to improve OpenStack’s current CoC. After reviewing CoCs from other communities and listening to the feedback provided by Cindy and other members, OpenStack’s CoC will be updated and improved to organize it by context, such as online or at events. Please stay in touch and follow this discussion closely, as it will have an impact on the whole community. We do have Codes of Conduct in place for both contexts, but we actively review these as the community grows, diversifies, and matures to ensure they meet the needs of all members.

Considering additional programming languages

There’s a new resolution defining how projects written in other programming languages should be evaluated. The resolution talks about how we mostly plan for Python projects, with JavaScript (Dashboard) and bash (DevStack) also enabled over time. The discussion started a few meetings ago, where things like the big tent, community impact, infrastructure impact and technology impact were highlighted and discussed at a high level. Since this topic impacts the whole community, we appreciated the input we got and welcome all to read and understand the resolution. We came to the conclusion that we do consider additional languages, but need to ensure common process and tooling for infrastructure, testing, and documentation as part of the larger picture, especially for OpenStack services.

Handling project teams with no candidate PTLs

We got “our garumphs out” over time zone confusion with the recent candidacy round and approved PTLs for these projects:

  • Security: Robert Clark
  • Key Manager (barbican): Douglas Mendizabal
  • Application Catalog (murano): Serg Melikyan

For the Containers (magnum) project, the two candidates Hongbin Lu and Adrian Otto agreed to an election to resolve a timing problem with the candidate submissions. The election officials agreed they could run another PTL election just for magnum, so look for that ballot in your inbox if you worked on the magnum codebase in the last six months.

MagnetoDB didn’t receive any candidacies. Unfortunately, this project hasn’t received contributions in a while and it’s being considered for removal from the Big Tent. Read more about the current discussion on the review itself.

As a reminder, our charter currently states, “Voters for a given project’s PTL election are the active project contributors (“APC”), which are a subset of the Foundation Individual Members. Individual Members who committed a change to a repository of a project over the last two 6-month release cycles are considered APC for that project team.” The names of repositories of projects are kept in the projects.yaml file in the openstack/governance repository.

Applications incoming and welcoming

As always we are busy reviewing incoming applications to OpenStack governance.

The Monasca project has been asked to keep working on their open processes and keep their application alive in the queue. Three items of feedback for their consideration are: 1) Integration tests should be running as a gate job with OpenStack CI tools, using devstack as a bootstrap. 2) Host the source in gerrit (review.openstack.org) so that all components and tests are well-understood. 3) Better integration with the rest of the community, using more patterns of communication and doing cross-project liaison work.

We discussed the Kosmos project application, a very new project, formed initially from members of Designate and Neutron LBaaS, to provide global server load balancing for OpenStack clouds. A few of the TC members would prefer to see more evidence of their work, others think that the new definition of working like OpenStack should enable them to apply and be accepted.

We are thinking about the CloudKitty application and the Juju Charms for Ubuntu application to OpenStack governance and will consider them at the next TC meeting. As guidance for timing, we add motions presented before Friday 0800 UTC to the next Tuesday meeting agenda for discussion.

Cross-project Track

At the upcoming Mitaka summit, the community will have a dedicated track for cross-project discussion. The period for proposals is now open and will run until October 9th. Sessions can be proposed on ODSREG. More info can be found on this thread.

OpenStack Community Weekly Newsletter (Sept. 12 – 18)

Running OpenStack? You have the power to influence the roadmap

Complete the User Survey by September 25

Call for Outreachy Mentors 

If you are a full-time contributor, please consider sharing your time, knowledge and experience to make our community more diverse; you’ll also have the opportunity to meet new talent. Ask for further directions in #OpenStack-opw on Freenode.

A starter guide to DefCore, OpenStack’s interoperability project

Rob Hirschfeld, co-chair of the DefCore committee, shares more on DefCore, which defines capabilities, code and must-pass tests, creating the minimum standards for products labeled OpenStack.

The Road to Tokyo 

Reports from Previous Events 

  • None this week

Deadlines and Contributors Notifications 

Security Advisories and Notices 

  • None this week

Tips ‘n Tricks 

Upcoming Events 

What you need to know from the developer’s list

PTL Nominations Are Over, Let’s Start Elections!

  • Five projects don’t have candidates. According to OpenStack governance, the TC will appoint the new PTL [1].
    • Barbican
    • MagnetoDB
    • Magnum
    • Murano
    • Security
  • Seven projects will have an election:
    • Cinder
    • Glance
    • Ironic
    • Keystone
    • Mistral
    • Neutron
    • Oslo
  • There was confusion over UTC deadlines and how to submit nominations through Gerrit, but the TC will work with those candidates in Magnum, Barbican, Murano and Security.
  • Doug Hellmann says MagnetoDB will be discussed for removal due to inactivity [1].

Proposed Priorities For Glance

  • From conversations at the Ops Midcycle meetup and email threads regarding Glance issues, Doug Hellmann put together a list of proposed priorities for the Glance team:
    • Focus attention on DefCore:
      • DefCore goals: Ensure all OpenStack deployments are interoperable at REST level (users can write software for one OpenStack cloud and move to another without changes to the code).
      • Provide a well documented API with arguments that don’t change based on deployment choices.
      • Integration tests in Tempest that test Glance’s API directly, in addition to the current tests that proxy through Nova and Cinder.
      • Once incorporated into DefCore, the APIs need to remain stable for an extended period of time, and follow deprecation timelines defined by complete V2 adoption in Nova and Cinder.
    • In Nova, some specs didn’t land in Liberty. Both teams need to work together.
    • In Cinder, the work is more complete, but it needs review to confirm the API is used correctly.
    • Security audits and bug fixes
      • 5 out of the 18 recent security reports were related to Glance [2].
  • Two ways to upload images to Glance V2:
    • POST image bits to Glance API server.
      • Not widely deployed. Potential DOS vector.
    • Task API, to have Glance download it asynchronously.
      • Not widely deployed.
      • Assumes you know which task “types” each cloud supports and the expected arguments (an opaque JSON blob); e.g. the Glance docs give a URL as the source, while Rackspace expects a Swift location.
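To make the difference between the two upload paths concrete, here is a rough sketch of them as request payloads. The endpoint paths follow the Glance v2 API; the image names, the example URL, and the Rackspace-style Swift source are illustrative assumptions, not verified formats.

```python
import json

# Path 1: create an image record, then upload the raw bits to it.
create_image = {
    "method": "POST", "path": "/v2/images",
    "body": {"name": "cirros", "disk_format": "qcow2",
             "container_format": "bare"},
}
upload_bits = {
    "method": "PUT", "path": "/v2/images/{image_id}/file",
    "headers": {"Content-Type": "application/octet-stream"},
    # body: the raw image bytes -- this is the potential DoS vector,
    # since the API server itself receives the whole upload.
}

# Path 2: the task API -- Glance fetches the image asynchronously.
# The "input" blob is cloud-specific, which is the interop problem:
import_from_url = {            # shape along the lines of the Glance docs
    "type": "import",
    "input": {"import_from": "http://example.com/cirros.qcow2",
              "import_from_format": "qcow2",
              "image_properties": {"name": "cirros"}},
}
import_from_swift = {          # hypothetical Swift-location variant
    "type": "import",
    "input": {"import_from": "swift://account/container/cirros.qcow2",
              "import_from_format": "qcow2",
              "image_properties": {"name": "cirros"}},
}

# Same task "type", different expectations for the input blob.
print(json.dumps(import_from_url["input"]["import_from"]))
```

The point of the sketch: both import payloads claim the same task type, yet the caller must know per-cloud what `import_from` may contain, which is exactly the interoperability gap DefCore is worried about.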

New Proposed ‘default’ network model

  • Monty Taylor hates floating IPs.
    • He observed that 5 public clouds require you to use a floating IP to get an outbound address, while others attach you directly to the public network.
    • Some allow you to create a private network, attach virtual machines to it, and create a router with a gateway.
  • Monty wants an easier way to put a virtual machine on the external-facing network of a cloud. Users shouldn’t have to learn how to make that work with floating IPs, and the behavior should be consistent across public clouds. There is an effort set for Mitaka to work on Monty’s request [3]. This will be done for ‘nova boot’ and work with multiple networks.
    •  If you have a more complicated network setup, this spec isn’t for you.
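The kind of per-cloud decision the proposal wants to hide from users can be sketched as follows. This is purely illustrative and not the actual spec implementation; the function name is made up, and the network dicts only mimic Neutron’s `router:external` and `shared` attributes.

```python
# Illustrative sketch of the default-network choice "get-me-a-network"
# aims to automate during 'nova boot'.

def pick_default_network(networks):
    """Pick a network a VM can boot on without floating-IP gymnastics."""
    # Prefer a shared external network the cloud lets us attach to directly.
    for net in networks:
        if net.get("router:external") and net.get("shared"):
            return net["name"], "direct attach"
    # Otherwise signal that the private-net + router + floating-IP
    # dance would be needed -- the part users shouldn't have to learn.
    return None, "needs private network, router and floating IP"

cloud_a = [{"name": "public", "router:external": True, "shared": True}]
cloud_b = [{"name": "ext-net", "router:external": True, "shared": False}]

print(pick_default_network(cloud_a))  # -> ('public', 'direct attach')
print(pick_default_network(cloud_b))  # -> (None, 'needs private network, router and floating IP')
```

Today each user reimplements some version of this logic (or cloud-specific workarounds) by hand; the spec moves the decision server-side so the same `nova boot` works everywhere.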

Base Feature Deprecation Policy

  • Thierry Carrez proposes a standard way to communicate and perform removal of user-visible behaviors and capabilities.
    • We sort of have an unwritten rule today: “to remove a feature, you mark it deprecated for n releases, then remove it”.
    • Tag proposed [4].
  • We need to survey existing projects to see what their deprecation policy is.
  • Proposed options for deprecation period:
    • n+2 for features and capabilities, n+1 for config options
    • n+1 for everything
    • n+2 for everything
  • Ben Swartzlander thinks this discussion also needs to cover long term support (LTS).
    • Fungi thinks this is premature; the Icehouse stable branch made it to 14 months before being dropped because not enough effort went into keeping it working.
  • It was agreed “config options and features will have to be marked deprecated for a minimum of one stable release branch and a minimum of 3 months”.

team:danger-not-diverse tag

  • Josh Harlow is concerned that most projects start off small and not diverse, and this tag [5] would create negative connotations for those projects.
  • Thierry counters that it’s important to consider the tag’s intent rather than its name.
    • The tag system is there to help our ecosystem navigate the big tent by providing bits of information.
      • Example of information: how risky is it to invest in a given project?
      • Some projects depend on a single company and could disappear in one day by a CEO’s decision.
  • For this reason, Thierry supports describing project teams that are *extremely* fragile.
  • As a result, the big tent is more inclusive; on the flip side, we need to inform our ecosystem that some projects are less mature. Otherwise, we’re hiding this information.

[1] – http://lists.openstack.org/pipermail/openstack-dev/2015-September/074837.html 

[2] – https://security.openstack.org/search.html?q=glance&check_keywords=yes&area=default

[3] – https://blueprints.launchpad.net/neutron/+spec/get-me-a-network

[4] – https://review.openstack.org/#/c/207467/

[5] – https://review.openstack.org/#/c/218725/

Other News 

OpenStack Reactions

 

When people say they have a full, Active:Active, HA OpenStack deployed

 

Using logstash.openstack.org and unit test logs to hunt down race conditions that are blocking the gate

OpenStack Community Weekly Newsletter (Sept. 5 – 11)

Why you should take TryStack for a spin now

The free OpenStack testing sandbox is back — and it’s bigger, badder and better than ever.

Liberty cycle retrospective in Puppet OpenStack

Things move very fast in OpenStack; it’s useful to take a short break and write a little retrospective to see what happened in the Puppet OpenStack project during the last months.

The Road to Tokyo 

Reports from Previous Events 

  • None this week

Deadlines and Contributors Notifications 

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events 

Other News 

OpenStack Reactions

When the first patch of a big new feature finally merges after months of work!


The weekly newsletter is a way for the community to learn about all the various activities in the OpenStack world.

OpenStack Community Weekly Newsletter (Aug. 29 – Sept. 4)

The Road to Tokyo 

Reports from Previous Events 

Deadlines and Contributors Notifications 

Security Advisories and Notices 

Tips ‘n Tricks 

Upcoming Events 

Other News 

OpenStack Reactions

Ceilometer looking to consume messages from the queue

