OpenStack Governance Elections Spring 2012

The time is once again upon us for our OpenStack Governance Elections. The OpenStack community is called to elect the Project Technical Leads and two seats on the Project Policy Board. The election committee is made up of Stefano Maffulli, Lloyd Dewolf and Dave Nielsen.

  • February 16 – 26 11:59 PST: Nominations open.
  • February 28 – March 3 11:59 PST: Online voting open.
  • March 3 11:59 PST: Voting closed.

Final results will be posted immediately upon election close.

What seats are up for election

  • NOVA Project Team Lead (1 Position)
  • SWIFT Project Team Lead (1 Position)
  • GLANCE Project Team Lead (1 Position)
  • HORIZON Project Team Lead (1 Position)
  • KEYSTONE Project Team Lead (1 Position)
  • Project Policy Board (2 Open Positions)

How to nominate yourself or others as Project Technical Lead

Only OpenStack community members who have code in the respective OpenStack subproject are eligible to be elected as that subproject’s Project Team Lead. Please nominate someone from the developer community or yourself at http://etherpad.openstack.org/Spring2012-Nominees under the Nominees heading. Please provide the name and email address of the nominee. The election committee will then confirm with the nominee that they are willing to run for the position. The list of Approved Candidates will be announced with a new blog post on openstack.org/blog when online voting opens (Feb 28).

How to nominate yourself or others as member of the Project Policy Board

Any registered member of the OpenStack Launchpad group is eligible to run or be nominated for a position on the Project Policy Board. If you want to vote and/or run for a seat you need to register on Launchpad and add yourself to the public OpenStack group on https://launchpad.net/~openstack. Please nominate someone from the community or yourself at http://etherpad.openstack.org/Spring2012-Nominees under the Nominees heading. Please give the name and email address of the nominee. The election committee will then confirm with the nominee that they are willing to run for the position. The list of Approved Candidates will be announced with a new blog post on openstack.org/blog right before the election starts.

How to register to vote for PTL

Only OpenStack community members who have code in the respective OpenStack subproject are eligible to vote for that subproject’s Project Team Lead.  The authoritative list of eligible voters and nominees is the Authors file in each repository. For example, the list of Nova authors is https://github.com/openstack/nova/blob/master/Authors.
Make sure your name and correct email address are there or you won’t be able to vote.

How to register to vote for Project Policy Board

Any registered member of the OpenStack Launchpad group is eligible to vote for the Project Policy Board. If you want to vote you need to register on Launchpad and add yourself to the public OpenStack group on https://launchpad.net/~openstack before registering as a voter using the form at http://ppbelectionsregistration.openstack.org/. Company affiliation is only collected as an interesting statistic; it has no effect on the outcome of the election.

Voting process

Like previous OpenStack Governance Elections, we will use the Condorcet Internet Voting Service from Cornell University, http://www.cs.cornell.edu/andru/civs.html. This tool uses the Condorcet method of voting, which involves ranking the nominees instead of just selecting one choice. More information on this methodology is at http://www.cs.cornell.edu/w8/~andru/civs/rp.html.

All registered voters will receive an email with a unique link allowing them to privately vote.

Please note that the voting system is run using private polls with restricted access to ensure voter authenticity; however, all results will be made public once the election ends. Voter anonymity is guaranteed. The results will be ranked using the Schulze (also known as Beatpath or CSSD) completion rule.
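For the curious, here is a minimal sketch of how a Schulze (beatpath) winner can be computed from ranked ballots. It is purely illustrative (it is not the CIVS implementation), and the candidate names in the example are invented.

    from itertools import permutations

    def schulze_winners(candidates, ballots):
        """Illustrative Schulze (beatpath) count; not the CIVS implementation.

        Each ballot is a list of candidates from most to least preferred;
        candidates left off a ballot are treated as tied for last place.
        """
        # Pairwise preferences: d[a][b] = number of voters preferring a over b.
        d = {a: {b: 0 for b in candidates} for a in candidates}
        for ballot in ballots:
            rank = {c: (ballot.index(c) if c in ballot else len(ballot))
                    for c in candidates}
            for a, b in permutations(candidates, 2):
                if rank[a] < rank[b]:
                    d[a][b] += 1

        # Strongest beatpaths via a widest-path (Floyd-Warshall style) pass.
        p = {a: {b: (d[a][b] if d[a][b] > d[b][a] else 0) for b in candidates}
             for a in candidates}
        for i in candidates:
            for j in candidates:
                if j == i:
                    continue
                for k in candidates:
                    if k != i and k != j:
                        p[j][k] = max(p[j][k], min(p[j][i], p[i][k]))

        # Winners beat (or tie) every other candidate by beatpath strength.
        return [a for a in candidates
                if all(p[a][b] >= p[b][a] for b in candidates if b != a)]

    # Hypothetical example: three voters ranking three invented nominees.
    print(schulze_winners(["alice", "bob", "carol"],
                          [["alice", "bob", "carol"],
                           ["alice", "carol", "bob"],
                           ["bob", "alice", "carol"]]))   # -> ['alice']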

Thanks for participating in this essential process. Please remind your friends and colleagues to get involved, register and vote!

Community Weekly Review (Feb 3-10)

OpenStack Community Newsletter –February 10, 2012

HIGHLIGHTS

EVENTS

OTHER NEWS

COMMUNITY STATISTICS

  • The charts below represent the work done on bugs during Bug Squashing Day Feb 2 2012.
  • Bug Squashing Day Feb 2 2012 - Results for project: Horizon
  • Bug Squashing Day Feb 2 2012 - Results for project: Keystone

This weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

Recap: “Ceph Lords” OpenStack SF Meetup Feb 02

On February 2nd, a league of extraordinary gentlemen gathered inside a crowded chamber for the “Ceph Lords” OpenStack SF Meetup. About 75 Stackers were in attendance, making this event a smashing success!

If you missed this Meetup, then you should watch the recorded presentations in order to experience the lively discourse.  Also, check out the photos that were taken by the Piston Cloud crew.  They did a good job capturing the fun, relaxed ambiance of the gathering.

The Meetup was facilitated by Piston Cloud and hosted by DreamHost. Scrumptious tacos were catered by Tacolicious, and Magnolia Brewery provided a keg of ice-cold California Kölsch beer.

Ceph was in the spotlight throughout the Meetup.  For those few who are still unfamiliar with Ceph, it’s a massively scalable, open source, distributed storage system.  The presenters focused primarily on how Ceph works with the cloud software stack, and how it’s currently being implemented in production.

The DreamHost team kicked off the event.  Ben Cherian provided a brief overview on the reasons why DreamHost chose to use Ceph as the storage foundation for their current hosting products and upcoming cloud services.

Tommi Virtanen dived into the Ceph platform from a technical perspective.  He described the major components of Ceph, its storage architecture, and how it distributes data.

Carl Perry talked about how DreamHost is currently deploying Ceph.  He touched on specifics such as the hardware and tools involved, the level of automation, and things learned along the way.

Christopher MacGown, Co-Founder and CTO of Piston Cloud, was the final presenter.  He opened with how storage should work in a cloud environment and why Piston Cloud chose Ceph as the backend storage solution.  He finished by describing how his company is using Ceph with the Piston Enterprise OS™ software.

Written by: Brent Scotten

Automating OpenStack Testing on Ubuntu

(Original Post)

During the Ubuntu precise development cycle the Canonical Platform Server Team have been working on automating testing of Openstack on Ubuntu.

The scope of this work was:

  1. Per-commit testing of OpenStack trunk to evaluate the current state of the upstream codebase in conjunction with the current packaging in Ubuntu precise and the current Juju charms to deploy OpenStack.
  2. SRU testing for OpenStack Diablo on Ubuntu 11.10.

OpenStack does a lot of pre-commit testing through the use of Gerrit with Jenkins; we wanted to supplement this with Ubuntu-focused testing to provide another dimension to the testing already completed upstream.

So grab a coffee and make yourself comfortable; this is not a short read….

Lab Setup

The Ubuntu OpenStack QA lab consists of 12 servers; the primary server in the solution is an Ubuntu 11.10 install providing the following functions:

  1. Juju – used to deploy OpenStack charms in the Lab
  2. Cobbler to support server provisioning (using the Ubuntu Orchestra packages in Oneiric)
  3. Jenkins CI – provides triggering based on upstream commits to github repositories and general job control and reporting.
  4. Schroots for Oneiric and Precise for building packages locally
  5. A reprepro managed local archive for Oneiric and Precise
  6. Squid based archive caching to reduce installation times in the lab

This server also acts as the gateway into and out of the Lab (it is set up as a NAT router).

The other 11 servers are registered in Cobbler; all servers are connected to a Sentry CDU (Cabinet Distribution Unit), which allows full power control from Cobbler – thanks go to Andres Rodriguez for developing the required fence component for Cobbler to support this type of CDU.
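As a rough illustration of how the lab can drive Cobbler from a script, the sketch below registers a node and power-cycles it through the CDU. The node names, MAC addresses and profile name are invented, and the exact Cobbler options may differ from what the lab actually uses.

    import subprocess

    # Invented values for illustration only.
    NODES = {"qa-node01": "52:54:00:aa:bb:01",
             "qa-node02": "52:54:00:aa:bb:02"}
    PROFILE = "precise-x86_64"   # assumed Cobbler profile name

    def register(name, mac):
        # Register a system against an existing Cobbler profile, netboot enabled.
        subprocess.check_call(["cobbler", "system", "add",
                               "--name", name, "--profile", PROFILE,
                               "--mac", mac, "--netboot-enabled", "true"])

    def power_cycle(name):
        # Cobbler talks to the Sentry CDU via its fence agent, so a reboot
        # here drops the node straight into a fresh netboot installation.
        subprocess.check_call(["cobbler", "system", "reboot", "--name", name])

    if __name__ == "__main__":
        for name, mac in NODES.items():
            register(name, mac)
            power_cycle(name)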

Preseeded LVM Snapshot Installs

Initiating a new integration test run requires all machines to be powered down and re-provisioned from scratch.  It is essential that our deployment and test runs can cope with the frequency of upstream commits, particularly as that frequency increases as OpenStack approaches milestones and releases.  After getting the initial lab setup in place, we were able to tear down all machines, re-provision them and deploy OpenStack in ~30 minutes.

It was important to minimize the time taken to complete the testing cycle.  To do so, we employed LVM snapshotting and restoration of the root partition during the netboot installation.  The process is as follows (a rough sketch of the snapshot-or-restore step appears after the list):

  1. Test run begins
  2. Juju deploys a service (i.e. nova-compute)
  3. A machine is netbooted and a preseeded LVM-based Ubuntu installation takes place onto /dev/qalab/root
  4. At the end of the installation, the root filesystem is moved to /dev/qalab/pristine-[release]-root and a snapshot created at /dev/qalab/root
  5. The machine reboots, runs Juju and deploys nova-compute as part of the rest of the OpenStack deployment. This deployment is smoke tested.
  6. The next test run begins.  All machines are terminated. Juju redeploys nova-compute, a machine is netbooted and Ubuntu installation kicks off.
  7. The installation checks for the existence of a logical volume at /dev/qalab/pristine-[release]-root.  If it exists, it creates a new snapshot at /dev/qalab/root and reboots. If it does not, the installation continues as normal and proceeds to step 4.
  8. System reboots, Juju installs and redeploys nova-compute to a fresh Ubuntu installation.
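The sketch below shows roughly what the snapshot-or-restore decision in step 7 could look like if written as a standalone script rather than a d-i snippet. The volume group name comes from the device paths above; the snapshot size and release name are assumptions.

    import subprocess

    VG = "qalab"                    # volume group, per the device paths above
    RELEASE = "precise"             # assumed; "oneiric" for the SRU testing runs
    PRISTINE = "pristine-%s-root" % RELEASE
    SNAP_SIZE = "20G"               # snapshot size is an assumption

    def lv_exists(name):
        # lvs exits non-zero when the logical volume does not exist.
        return subprocess.call(["lvs", "%s/%s" % (VG, name)],
                               stdout=subprocess.DEVNULL,
                               stderr=subprocess.DEVNULL) == 0

    def snapshot_or_install():
        if lv_exists(PRISTINE):
            # Pristine root already present: drop any stale snapshot, take a
            # fresh one at /dev/qalab/root and reboot, skipping the full install.
            if lv_exists("root"):
                subprocess.check_call(["lvremove", "-f", "%s/root" % VG])
            subprocess.check_call(["lvcreate", "--snapshot", "--name", "root",
                                   "--size", SNAP_SIZE,
                                   "/dev/%s/%s" % (VG, PRISTINE)])
            subprocess.check_call(["reboot"])
        # Otherwise fall through: d-i installs onto /dev/qalab/root, and a late
        # command renames it to the pristine LV and snapshots it (step 4).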

This process takes place on all nodes in parallel.  With it in place, we were able to cut the time it takes to tear down and re-provision a node from ~30 minutes to 10-15 minutes, depending on the service being deployed.

By taking this approach we also minimize the chance of any node hitting an archive inconsistency during installation; this is a known issue when deploying the development release, and it halts installation on any node that hits it, failing the entire deployment.

All of this is embedded in debian-installer preseeds via Cobbler snippets.  The snippets and kickstarts are available at lp:~openstack-ubuntu-testing/+junk/cobbler-lvm-snapshot.

In the future, we’ll be investigating the use of kexec as an alternative to reboot after snapshot restoration to reduce the time spent waiting on servers to boot.  This should minimize the test cycle even more. Credit to James Blair for the idea (see http://amo-probos.org/post/11).

Management of Jenkins

All of the projects in Jenkins are managed using Jinja2 XML templates in conjunction with python-jenkins; this makes it really easy to set up new jobs in the lab and reconfigure existing ones as required (as well as providing a great backup!).

Templates and management scripts can be found in lp:~openstack-ubuntu-testing/+junk/jenkins-qa-lab
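A minimal sketch of the template-driven approach is below; the Jenkins URL, credentials, job names and template file are placeholders, and the real templates live in the branch above.

    import jenkins                          # python-jenkins
    from jinja2 import Environment, FileSystemLoader

    env = Environment(loader=FileSystemLoader("templates"))
    server = jenkins.Jenkins("http://localhost:8080",
                             username="qa", password="secret")

    def ensure_job(name, template, **params):
        # Render the Jinja2 XML template and create or reconfigure the job.
        config_xml = env.get_template(template).render(**params)
        if server.job_exists(name):
            server.reconfig_job(name, config_xml)
        else:
            server.create_job(name, config_xml)

    for component in ("nova", "glance", "keystone", "swift", "horizon"):
        ensure_job("precise-%s-trunk-build" % component, "build-job.xml.j2",
                   component=component, series="precise")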

Testing OpenStack Essex on Ubuntu Precise

This testing was the first to be setup in the lab.  Jenkins (using the git plugin) monitors the upstream github.com repositories for commits on the master branch.  When a change is detected the following process is triggered:

Build

Objective: Validate that upstream trunk still builds OK with current packaging for Ubuntu.

  1. A new snapshot upstream tarball is generated based on the latest commit to the upstream component.
  2. The latest archive packaging for the component is pulled in from lp:~ubuntu-server-dev/<COMPONENT>/essex
  3. Any changes in the testing packaging for the component are merged from lp:~openstack-ubuntu-testing/<COMPONENT>/essex
  4. New changelog entries are automatically created for the new upstream commits.
  5. The source package is generated and built in a clean schroot using sbuild locally.

On the assumption that the package built OK locally:

  1. The source package is uploaded to the Testing PPA (ppa:openstack-ubuntu-testing/testing)
  2. The testing packaging branch is pushed back to lp:~openstack-ubuntu-testing/<COMPONENT>/essex.
  3. The binary packages from the sbuild are installed into the local reprepro managed archive.

This process is managed by a single script (tarball.sh); credit to Chuck Short for pulling together this part of the process based on work from OpenStack upstream.
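The real logic lives in tarball.sh in the jenkins-scripts branch; as a rough Python-flavoured illustration of the flow above, with invented version strings and paths:

    import subprocess

    COMPONENT = "nova"                     # the same flow applies per component
    SERIES = "precise"
    PPA = "ppa:openstack-ubuntu-testing/testing"
    UPSTREAM = "2012.1~e4~git1"            # invented snapshot version
    VERSION = UPSTREAM + "-0ubuntu1"

    def run(*cmd, **kwargs):
        print("+ " + " ".join(cmd))
        subprocess.check_call(cmd, **kwargs)

    # 1. Snapshot tarball from the latest upstream commit.
    run("git", "archive", "--format=tar.gz", "--prefix=%s/" % COMPONENT,
        "-o", "../%s_%s.orig.tar.gz" % (COMPONENT, UPSTREAM), "HEAD",
        cwd=COMPONENT)

    # 2./3. Pull the archive packaging, then merge the testing packaging on top.
    run("bzr", "branch", "lp:~ubuntu-server-dev/%s/essex" % COMPONENT, "packaging")
    run("bzr", "merge", "lp:~openstack-ubuntu-testing/%s/essex" % COMPONENT,
        cwd="packaging")

    # 4. Record the new upstream snapshot in the changelog.
    run("dch", "-v", VERSION, "New upstream snapshot.", cwd="packaging")

    # 5. Build the source package, then the binaries in a clean schroot.
    run("debuild", "-S", "-sa", cwd="packaging")
    run("sbuild", "-d", SERIES, "%s_%s.dsc" % (COMPONENT, VERSION))

    # On success: upload to the testing PPA and feed the local reprepro archive.
    run("dput", PPA, "%s_%s_source.changes" % (COMPONENT, VERSION))
    run("reprepro", "-b", "/srv/local-archive", "includedeb", SERIES,
        "%s_%s_all.deb" % (COMPONENT, VERSION))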

For changes to the nova project the deploy phase is then executed.

Deploy

Objective: Validate that packages install, can be configured and reach a known good state prior to execution of testing.

This phase of testing uses Juju with Cobbler to deploy OpenStack into the QA lab infrastructure; it utilizes branches of the OpenStack charms to support use of a local archive, along with a deployer wrapper around Juju written by Adam Gandelman which executes the actual deployment and monitors for errors.

The deployer is configured to know where to get the right codebase for the OpenStack charms, which services to deploy and which relations to set up between services. This is non-trivial, but the charms and Juju do most of the hard work.
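Stripped of the error handling and charm-branch configuration, the deployer is essentially driving Juju like this; the service list and relations below are an illustrative subset, not the lab's exact topology.

    import subprocess

    def juju(*args):
        subprocess.check_call(("juju",) + args)

    # Illustrative subset of services and relations; the deployer reads the
    # real charm branches, services and relations from its configuration.
    SERVICES = ["mysql", "rabbitmq-server", "keystone", "glance",
                "nova-cloud-controller", "nova-compute"]
    RELATIONS = [("keystone", "mysql"),
                 ("glance", "mysql"),
                 ("glance", "keystone"),
                 ("nova-cloud-controller", "mysql"),
                 ("nova-cloud-controller", "rabbitmq-server"),
                 ("nova-cloud-controller", "keystone"),
                 ("nova-cloud-controller", "glance"),
                 ("nova-compute", "nova-cloud-controller")]

    juju("bootstrap")
    for service in SERVICES:
        juju("deploy", service)
    for a, b in RELATIONS:
        juju("add-relation", a, b)
    # The deployer then polls "juju status" until every unit reports a good
    # state (or errors out), which is the known good state checked above.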

Once OpenStack is deployed successfully, the test phase is executed.

Test

Objective: Validate that the OpenStack deployment in the lab actually works!

At this point, we can run any integration tests we wish against the newly deployed cloud.  This testing is able to help us achieve multiple goals:

  • Early detection of upstream bugs that break OpenStack functionality on Ubuntu.
  • Verification that packaging branches in the development version of Ubuntu are compatible with upstream trunk.
  • Using these packages, verification that our Juju charms are deploying a functional OpenStack cloud and are up-to-date with any deployment-related configuration changes upstream.

At the moment this phase looks like this:

  1. Configure the OpenStack deployment (Adam’s deployer script provides some utility functions for locating specific services in the environment)
    • Creates the network configuration in Nova for the private instance network as well as a pool of public floating IPs.
    • Uploads an image into the Glance server for use during testing.
    • Creates EC2 credentials in the Keystone server for use during testing.
  2. Run the devstack exercise test scripts which ensure basic functionality of the deployment. Currently, this includes:
    • Basic EC2 API operations via euca2ools for starting and stopping instances
    • EC2 AMI bundle uploads
    • Floating IP allocation, association and connectivity to instance
    • Volume creation and attachment to instance

Note: These are the same sets of tests that are currently run against proposed commits to gerrit upstream.
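To give a feel for what those exercises cover, a minimal EC2-style smoke test against the freshly deployed cloud might look like the sketch below; the endpoint, credentials and image ID are placeholders for values produced by the configure step.

    import time

    import boto
    from boto.ec2.regioninfo import RegionInfo

    # Placeholders: in the lab these come from the Keystone EC2 credentials
    # and the image uploaded to Glance during the configure step.
    conn = boto.connect_ec2(
        aws_access_key_id="EC2_ACCESS_KEY",
        aws_secret_access_key="EC2_SECRET_KEY",
        is_secure=False, port=8773, path="/services/Cloud",
        region=RegionInfo(name="nova", endpoint="nova-api.qa-lab"))

    # Boot an instance from the test image and wait for it to become active.
    reservation = conn.run_instances("ami-00000001", instance_type="m1.tiny")
    instance = reservation.instances[0]
    while instance.update() != "running":
        time.sleep(5)

    # Exercise floating IPs, then clean up.
    address = conn.allocate_address()
    conn.associate_address(instance.id, address.public_ip)
    conn.disassociate_address(address.public_ip)
    conn.release_address(address.public_ip)
    conn.terminate_instances([instance.id])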

Longer term we aim to use the OpenStack Tempest test suite in the lab; Adam is currently working on getting this up and running.

Reporting

The Jenkins instance in the QA lab is not publicly accessible; however all jobs run in the lab are published out (using the Jenkins build-publisher plugin) to http://jenkins.qa.ubuntu.com so that people can see the current state of the testing packaging in Ubuntu precise.

We are also working on setting up email notifications.

Success so far

The Juju charms now deploy OpenStack components in a configuration that is compatible with upstream trunk before packaging updates land in Ubuntu.  Previously, packages were updated in the archive first while Juju charm updates lagged behind, as incompatibilities were uncovered after the fact.

We enabled automated testing 2 days prior to the 3rd Essex milestone release.  We were able to uncover and help fix a handful of bugs upstream before the release, including critical bugs like 921784.  In the past, these bugs were typically uncovered after the release (both upstream and in Ubuntu).

Since E3, there have been even more critical bugs uncovered by this testing and fixed upstream, some of which are only applicable to Ubuntu-specific configurations (not tested upstream) and would otherwise have been uncovered by users only after the code hit the Ubuntu archive (see 922232).

Further Plans for the Lab

Pre-commit testing of changes to stable branches: the Ubuntu Server team are working upstream on maintaining the stable branches of released versions of OpenStack. This work will validate patches proposed to stable branches on review.openstack.org against the current version of the packaging in released versions of Ubuntu. Initially this will target Diablo on Ubuntu 11.10, but it will also support Essex on Ubuntu 12.04 once released. Ideally the testing process will provide feedback on review.openstack.org to help the stable release team review proposed patches.

References

Jenkins job configurations: lp:~openstack-ubuntu-testing/+junk/jenkins-qa-lab

Scripts supporting the lab: lp:~openstack-ubuntu-testing/+junk/jenkins-scripts

LVM snapshot preseeds and Cobbler snippets: lp:~openstack-ubuntu-testing/+junk/cobbler-lvm-snapshot

All other relevant scripts, charm branches, etc: https://code.launchpad.net/~openstack-ubuntu-testing/

Credits

Overall management of delivery and general whip cracking: Dave Walker

Lab installation and base configuration: Pete Graner, Tim Gardner, Brad Figg, James Page

Fence agent for network power control of servers: Andres Rodriguez

Source package creation and build process: Chuck Short and James Page

Deployment testing using Juju: Adam Gandelman

Testing of Openstack: Adam Gandelman

Jenkins packaging, configuration and management: James Page

Gerrit Plugin for pre-commit testing and generally great ideas: Monty Taylor and James Blair

Writing and reviewing this post: Adam Gandelman, Chuck Short and Dave Walker.

OpenStack Party @ CloudConnect 2012

For those attending CloudConnect 2012 in Santa Clara: join Stackers from all over the world at the OpenStack CloudConnect 2012 party at the Fahrenheit Lounge, hosted by Mirantis, Rackspace and Cloudscaling.

Open bar, hors d’oeuvres and music all night long. This is the place to be at CloudConnect on a Wednesday night.

We’ll have shuttle buses available every 30 minutes, traveling between the Santa Clara Convention Center parking lot and the Fahrenheit Lounge, starting at 8pm, immediately after the Cloudscaling cocktail reception.

Registration is first come, first served, and space is limited. Visit openstackparty.eventbrite.com to register.

OpenStack Talk hosted by the Computer Society of India Pune Chapter

This is a guest post from Devdatta Kulkarni. Thanks Dev for sharing!

The Computer Society of India (CSI) Pune chapter organized an OpenStack talk with me, Racker Devdatta Kulkarni, on Saturday January 21, 2012 from 5.00 pm – 6.30 pm.

Sunset in Pune, photo by Flickr user yogendra174

Approximately 35 people attended. The audience primarily consisted of people with a technical background: technology professionals were the most represented category, followed by college students, then researchers.

I divided my talk into two parts. In the first part, I touched upon the need for OpenStack, the project’s history and mission, and the current projects. In the second part I delved deeper into design and architectures of Nova, Swift, Glance, and Keystone, and concluded with information about how to participate in the community.

At the end of the talk I did a quick show of hands to find out how many attendees knew about OpenStack prior to the talk. Given that I saw only three hands in response, I think the talk certainly helped in raising the awareness of OpenStack within the technical community in Pune.

Here are some of the questions that came up at the talk. Anne Gentle wrote the answers, and I want to share them with the attendees as well as OpenStack blog readers.
Question 1) Performance benchmarks of OpenStack deployments. They have experimented with deployment of about 200 VMs and were seeing average VM creation time of about 20 minutes. They wanted to know if this was something expected. Also, they were wondering if there are any OpenStack performance benchmark results that can be shared with the community.
Anne: A 20 minute wait sounds like a long time to me for a single VM but a short time for 200 VMs. We haven’t found a good way to share performance benchmarks yet, but a post to the mailing list would probably elicit responses. I’ve also seen John Dickinson talk to folks on IRC about their Object Storage benchmarks.

Question 2) Guidelines on topology. They wanted to know if there are any published guidelines regarding the optimal topology, such as number of glance servers, number of compute, volume, and network nodes in Nova deployments?
Anne: I’d recommend they take a look at http://referencearchitecture.org for both physical and logical architecture diagrams that show the number of servers and how to scale out a deployment.

Question 3) Active Directory support in Keystone. Is this being discussed within the Keystone working group?
Anne: It’s often discussed but no one has stepped up to write an AD plugin for Keystone yet that I know of.

Question 4) Is there a QEMU-based development environment for OpenStack?
Anne: Try out http://devstack.org and if you run it in a VM, it’ll use QEMU.

Question 5) Can you give pointers to learning material?
Anne: Each of the projects has a development docs site (nova.openstack.org, glance.openstack.org, swift.openstack.org, and so on). You’ll find API and admin docs at docs.openstack.org.

OpenStack Design Summit & Conference Updates

We’re making progress on the next OpenStack Design Summit (April 16-18) & Conference (April 19-20) at the Hyatt Regency San Francisco — one week, two events.

Hotel Rooms
We have a discounted hotel room block at the Hyatt under OpenStack, which is now available to book. Please make sure to indicate that you are with the OpenStack Design Summit & Conference when booking.

Sponsorship Prospectus
Sponsorships are going fast, and the prospectus is available to download at the Conference website. There are a limited number of opportunities at the top levels, which are first come, first served, with a signed agreement. If you have any questions about the prospectus, please contact [email protected].

Speakers & OpenStack Demo Session
We’ve also opened the OpenStack Conference call for speakers. We need your help to build out an informative and compelling agenda, including user stories, technical advancements, best practices and visions on the future of OpenStack. New to this Conference, we are also planning an OpenStack demo session, a chance for companies building products around OpenStack to present in front of the community and a panel of judges. The deadline to submit speaking sessions is February 15, and more details and deadlines for the OpenStack demo session will be announced shortly.

As a reminder, the OpenStack Design Summit is made up of working sessions for developers contributing to OpenStack. The OpenStack Conference reaches a broader audience, including users and the business ecosystem, in addition to the OpenStack technical community. Because the events are co-located, the sponsorship prospectus and hotel room block cover both events, but the call for papers is strictly for the OpenStack Conference. The Design Summit sessions and schedule will be determined by blueprint submissions, the Project Technical Leads and Release Manager.

We encourage you to make travel arrangements for the April events, and registration will open shortly. Look forward to seeing everyone in San Francisco!

Community Weekly Review (Jan 20-27)

OpenStack Community Newsletter –January 27, 2012

HIGHLIGHTS

EVENTS

OTHER NEWS

COMMUNITY STATISTICS

  • We’re working to improve the community stats. We hope to be back next week.

This weekly newsletter is a way for the community to learn about all the various activities occurring on a weekly basis. If you would like to add content to a weekly update or have an idea about this newsletter, please leave a comment.

OpenStack Jenkins dashboard available for testing Ubuntu snapshots

The keener-eyed of you may have noticed:

https://jenkins.qa.ubuntu.com/view/Precise%20OpenStack%20Testing/

James Page has set up the jobs in the Ubuntu OpenStack QA Lab to start publishing to the public Jenkins QA instance this morning. We now have automated build testing of all core OpenStack components, triggered from upstream trunk commits. This is followed by automated deployment (-deploy) of OpenStack in the lab, with a serving of testing (-test) once it’s all up and running.

Credit to Adam Gandelman for the Juju charm work, deployment framework and test execution and to Chuck Short for the hugely misnamed tarball.sh script which completes the git/bzr/packaging fu to build and deploy OpenStack packages!

The plan is to get the upstream Tempest test suite running in the lab; at the moment we are running a more limited test script just to ensure that you can spin up an instance and see it on the network.

(Crossposted from cloud.ubuntu.com)

OpenStack Melbourne Australia Meetup Jan 17

On Tuesday January 17, at the Exchange Hotel in Melbourne, the Australian OpenStack Users Group held its second meetup, following up on last month's Sydney event. It took the same format: a casual, informal get-together for some drinks and conversation focused on OpenStack. We kicked off around 6pm and had an attendance of around 45 OzStackers. Many, many thanks to everyone that came along!

Once again we had our attending vendors present a short overview of their company’s involvement in the project. The speakers were Mark Randall, Rackspace Country Manager for AU/NZ; Daniel Pendlebury, Citrix Lead Systems Engineer for Datacenter and Cloud; Gavin Coulthard, Manager – Field Systems Engineering A/NZ at F5; Peter Jung, Cloud Solutions Architect at Dell; and Andrew White, Data Centre Architect from Cisco. Following the vendors, an awesome contribution to the evening came from Dr Steven Manos, ITS Research Director at the University of Melbourne, who presented an overview of the NeCTAR project. Rounding out the talks again was Phil Rogers from Aptira.

Again, as in Sydney, there was a great sense of community: lots of smiles and laughter, much conversation, and plenty of enthusiasm to share information and experiences. As social events go, both this and the Sydney event have been very successful. The next round of meetups, scheduled for early March, will see us presenting a more structured schedule with a technical focus, including demos and the like.

Head to our Australian Meetup group to get involved, or join the AU Google group.

 
