The OpenStack Blog

Category: Partner

OSCON Schedule for OpenStack

For OSCON attendees looking to learn more about the OpenStack community or technology, here are the recommended sessions:

Speaking Sessions, Wednesday, July 27

Introduction to OpenStack, Eric Day
Wednesday, 1:40 pm

Using OpenStack APIs, Present and Future, Mike Mayo
Wednesday, 4:10 pm

OpenStack Fundamentals Training Part 1, Swift, John Dickinson
Wednesday, 4:10 pm

OpenStack Fundamentals Training Part 2, Nova, Jason Cannavale
Wednesday, 5:00 pm

OpenStack One-Year Anniversary Party, Spirit of 77
Wednesday, 7-9 pm

Speaking Sessions, Thursday, July 28

Prying Open the Cloud with Dell Crowbar and OpenStack, Joseph George, Rob Hirschfeld
Thursday, 10:40 am

OpenStack + Ceph, Ben Cherian, Jonathan Bryce
Thursday, 1:40 pm

Achieving Hybrid Cloud Mobility with OpenStack and XCP, Paul Voccio, Ewan Mellor
Thursday, 2:30 pm

Building and Maintaining Image Templates for the Cloud

As IaaS platforms like OpenStack gain traction in delivering compute and storage resources on demand, we’re seeing telco and enterprise IT customers increasingly focus on “software on demand”. Typically, existing software delivery processes are too lengthy to take full advantage of the “instant-on” nature of the cloud. End users need to be able to choose and instantly provision software and applications, then decommission them when no longer required. This raises many questions: How do I get apps onto an IaaS platform? How do I ensure software governance if somebody else is providing the apps as part of an app store? How do I build and maintain software images over time, making cloud deployments predictable and consistent?

Today, software images are often built manually, making them difficult to update and maintain over time. Cloud users are now realizing the need to work with transparent image templates which enable them to trace individual software components, versions and licenses. They are combining these templates with automated software delivery processes, using APIs to industrialize image creation and maintenance. This enables them to easily track components, add or update software automatically, and generate images for one or many clouds. The image remains consistent and predictable, whether it’s used only within a private cloud or as part of a hybrid model for bursting into Amazon, for example.

Customers can take different approaches to building and deploying these images. Firstly, the devops model uses a base image that contains only the basic packages needed to boot the OS. Once the OS is installed, a phone-home feature enables the devops platform to install all the packages that a particular service needs. This is a great model for many customers. However, others are realizing that it takes a long time to stand up the service if you’re installing, say, 400 VMs from an outside repository. In a cloud model, you’re also paying for bandwidth and other resources over that time.

Secondly, customers can use more complete images that include not only the OS packages but also middleware and applications. They can then combine these with a devops platform for configuration, which is a very flexible way to push configuration information without some of the disadvantages we discussed earlier.

Finally, there are “fully baked”, self-contained images that include all the software components, as well as configuration logic. These images can be used to turn a specific solution on within a private or public cloud without a great deal of expertise and are often used by ISVs for quickly ramping up POCs, for example, at a customer site.

Whichever approach you take, it’s essential to remember that all three require you to control and maintain the base image over time. You must be able to track the packages and components you’re using, even in a devops base image; otherwise you’ll quickly end up with scores of unmanageable images.
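As a rough illustration of that kind of tracking, here is a minimal sketch of a package manifest for an image template. The package list is stubbed out for illustration; on a real RPM-based image it would come from querying the image itself with rpm -qa:

```shell
#!/bin/sh
# Minimal sketch: keep a sorted manifest of the packages baked into an image
# template, so successive builds can be diffed for traceability.
# list_packages is a stand-in for querying the real image (e.g. 'rpm -qa'
# on an RPM-based guest); the package names below are made up.
list_packages() {
    printf 'openssl-1.0.0\nhttpd-2.2.17\nbash-4.2\n'
}

list_packages | sort > image-manifest.txt

# A later build writes its own manifest; 'diff' against the previous one then
# shows exactly which components changed between template versions.
cat image-manifest.txt
```

Checking the manifest into version control alongside the build recipe gives a complete history of what went into each template revision.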

Easy traceability and maintenance will also help you transition to the next phase of cloud software deployments: moving from a monolithic image for small deployments or test purposes to multi-tier images for pre-production and production deployments on a larger scale. You’ll be able to more easily piece together multiple VMs, provision, maintain and decommission complete solutions over multiple cloud nodes.

James Weir

CTO, UShareSoft

OpenStack Celebrates a Successful First Year

A year into the life of OpenStack, it seems like its success should have been more obvious. The market lacked an open platform designed specifically for building and managing a cloud. We knew that fact at Rackspace because we had been forced to build our own solution. For five years we looked for off-the-shelf technologies that could power our public cloud but never found an acceptable solution. So we kept building our own proprietary technology. But that wasn’t the right answer. As a company, we had always relied on standardized technologies to power our offerings, technologies that our customers were also running in their own data centers. But in cloud, such standards did not exist and were nowhere in sight. Certainly, the ones that were emerging were not completely open. And by building our own solution, one not available to anyone else, we weren’t actually helping to solve the problem. So we decided to open source our technology and make it available for use by our competitors and customers alike. What we didn’t know was whether anyone else saw the world as we did.

A year later, it’s obvious we weren’t alone.  Consider these stats:

  • We grew from 2 organizations to 89
  • We grew from a couple dozen developers to nearly 250 unique contributors in the Cactus release and over 1,200 in the development community
  • Over 35,000 downloads from Launchpad and thousands more from our ecosystem
  • The scope of the project has truly evolved into a cloud operating system, tackling a diverse range of cloud infrastructure needs such as networking, load balancers and databases.
  • Our initial conference and design summit had over 100 people, while the last in April hosted over 450
  • We have delivered 3 major releases and are halfway to the fourth
  • 17 countries have active participants and user groups now exist on 5 continents

One of the key reasons OpenStack has been successful is that it has such an audacious mission: to build an operating system to power both public and private clouds. We believe that while public and private clouds do have different requirements, much of the core need is shared: things such as basic management, self-service and scalability. OpenStack started with the large-scale cloud expertise of Rackspace and NASA and has since added a wealth of knowledge from a who’s who list of contributors with broad-ranging enterprise and service provider expertise. All of these participants recognize that in order for the promise of cloud to be realized, for workloads to seamlessly migrate from one environment to another, a common platform is required inside the enterprise DC as well as the public cloud. The technology should also be purpose-built for cloud, rather than a bolt-on to existing server virtualization technologies. And that solution should be open and controlled by a vast community rather than a single vendor.

The shared community desire for an open cloud operating system powering both public and private clouds has resulted in a flurry of activity around OpenStack.  Consider the following:

  • Major enterprise software companies such as Citrix and Canonical, as well as startups such as StackOps, have announced commercial distributions of OpenStack.  This is a very key development for enterprise adoption.
  • Reference hardware architectures from the likes of Dell, Cisco, Intel and AMD for OpenStack.
  • Contributions from service providers and announcements of public clouds powered by OpenStack, including Rackspace, Internap, Dreamhost, Dell, Korea Telecom, Memset and Nephoscale, among others.
  • Support for OpenStack deployments by the likes of Cloudscaling, Cybera and Rackspace Cloud Builders.
  • Deployment support from Puppet Labs and Opscode.
  • A host of tools and software integration from scores of companies including Scalr, Rightscale, FathomDB, enStratus, and many others.
  • Venture funding and M&A activity have picked up in the community, including the recent funding of Piston and an acquisition by Citrix (both OpenStack community members).

Most importantly, enterprises are really beginning to deploy OpenStack.  It wasn’t until the Cactus release in April that OpenStack truly became ready for production deployments.  But in the 3 months since that release, the number of companies deploying the technology has been truly remarkable.  Expect to see many of these stories coming to light in the next few months.

Thank you to everyone who has made OpenStack happen over the last year!  It has been an incredibly rewarding experience to be part of such an engaged and diverse community committed to the goal of an open cloud operating system.  Happy first birthday to all!

OpenStack Boston 2011 Events Sponsorship Packages Available

The sponsorship packages are now available for OpenStack ecosystem partners looking to take part in supporting the community at the upcoming OpenStack Design Summit and OpenStack Conference from Oct 3 – 7, 2011.

Sponsorship Package Webinar (30-minute presentation on the packages)

Sponsorship Package Slides from Webinar

Sponsorship Package Prospectus

If you have any questions on these packages please contact me so we can create a package that best meets your needs.


Clustered LVM on DRBD resource in Fedora Linux

(Crossposted from Mirantis Official Blog)

As Florian Haas pointed out in a comment on my previous post, our shared storage configuration requires special precautions to avoid data corruption when the two hosts connected via DRBD try to manage LVM volumes simultaneously. Generally, these precautions concern locking LVM metadata operations while running DRBD in ‘dual-primary’ mode.

Let’s examine it in detail. The LVM locking mechanism is configured in the global section of /etc/lvm/lvm.conf. The ‘locking_type’ parameter is the most important here: it defines which locking mechanism LVM uses while changing metadata. It can be set to:

  • ‘0’: disables locking completely; dangerous to use;
  • ‘1’: the default, local file-based locking. It knows nothing about the cluster or possible conflicting metadata changes;
  • ‘2’: uses an external shared library, defined by the ‘locking_library’ parameter;
  • ‘3’: uses built-in LVM clustered locking;
  • ‘4’: read-only locking, which forbids any changes of metadata.

The simplest way is to use local locking on one of the DRBD peers and to disable metadata operations on the other. This has a serious drawback though: Volume Groups and Logical Volumes won’t be activated automatically upon creation on the other, ‘passive’ peer. That is not acceptable for a production environment and cannot be automated easily.
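For example, under this simple scheme the ‘passive’ peer could be pinned to read-only locking in its /etc/lvm/lvm.conf, so that any metadata change attempted there is refused. A sketch, based on the locking types listed above:

```
# /etc/lvm/lvm.conf on the 'passive' DRBD peer (sketch)
global {
    # 4 = read-only locking: forbids any metadata changes on this host
    locking_type = 4
}
```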

But there is another, more sophisticated way. We can use Linux-HA (Heartbeat) coupled with the LVM Resource Agent. It automates activation of newly created LVM resources on the shared storage, but still provides no locking mechanism suitable for ‘dual-primary’ DRBD operation.

It should be noted that full support of clustered locking for LVM can be achieved with the lvm2-cluster Fedora RPM package from the repository. It contains the clvmd service, which runs on all hosts in the cluster and controls LVM locking on shared storage. In our case, the cluster consists of just the 2 DRBD peers.

clvmd requires a cluster engine in order to function properly. It’s provided by the cman service, which is installed as a dependency of lvm2-cluster (other dependencies may vary from installation to installation):

(drbd-node1)# yum install lvm2-cluster
Dependencies Resolved

Package              Arch      Version           Repository   Size
Installing:
 lvm2-cluster        x86_64    2.02.84-1.fc15    fedora       331 k
Installing for dependencies:
 clusterlib          x86_64    3.1.1-1.fc15      fedora        70 k
 cman                x86_64    3.1.1-1.fc15      fedora       364 k
 fence-agents        x86_64    3.1.4-1.fc15      updates      182 k
 fence-virt          x86_64    0.2.1-4.fc15      fedora        33 k
 ipmitool            x86_64    1.8.11-6.fc15     fedora       273 k
 lm_sensors-libs     x86_64    3.3.0-2.fc15      fedora        36 k
 modcluster          x86_64    0.18.7-1.fc15     fedora       187 k
 net-snmp-libs       x86_64    1:5.6.1-7.fc15    fedora       1.6 M
 net-snmp-utils      x86_64    1:5.6.1-7.fc15    fedora       180 k
 oddjob              x86_64    0.31-2.fc15       fedora        61 k
 openais             x86_64    1.1.4-2.fc15      fedora       190 k
 openaislib          x86_64    1.1.4-2.fc15      fedora        88 k
 perl-Net-Telnet     noarch    3.03-12.fc15      fedora        55 k
 pexpect             noarch    2.3-6.fc15        fedora       141 k
 python-suds         noarch    0.3.9-3.fc15      fedora       195 k
 ricci               x86_64    0.18.7-1.fc15     fedora       584 k
 sg3_utils           x86_64    1.29-3.fc15       fedora       465 k
 sg3_utils-libs      x86_64    1.29-3.fc15       fedora        54 k


Transaction Summary
Install 19 Package(s)

The only thing we need the cluster for is the use of clvmd; the configuration of the cluster itself is pretty basic. Since we don’t need advanced features like automated fencing yet, we specify manual handling. As we have only 2 nodes in the cluster, we can tell cman about it. The configuration for cman resides in the /etc/cluster/cluster.conf file:

<?xml version="1.0"?>
<cluster name="cluster" config_version="1">
  <!-- post_join_delay: number of seconds the daemon will wait before
        fencing any victims after a node joins the domain
       post_fail_delay: number of seconds the daemon will wait before
        fencing any victims after a domain member fails
       clean_start    : prevent any startup fencing the daemon might do.
        It indicates that the daemon should assume all nodes
        are in a clean state to start. -->
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="drbd-node1" votes="1" nodeid="1">
      <!-- Handle fencing manually -->
      <fence>
        <method name="human">
          <device name="human" nodename="drbd-node1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="drbd-node2" votes="1" nodeid="2">
      <!-- Handle fencing manually -->
      <fence>
        <method name="human">
          <device name="human" nodename="drbd-node2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <!-- cman two nodes specification -->
  <cman expected_votes="1" two_node="1"/>
  <!-- Define manual fencing -->
  <fencedevices>
    <fencedevice name="human" agent="fence_manual"/>
  </fencedevices>
</cluster>

The clusternode name should be a fully qualified domain name, resolvable via DNS or present in /etc/hosts. The number of votes is used to determine the quorum of the cluster. In this case, we have two nodes with one vote per node, and expect one vote to be enough for the cluster to run (to have a quorum), as configured by the cman expected_votes attribute.
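If the nodes are connected back-to-back without DNS, name resolution can be handled with /etc/hosts entries on both peers; a sketch (the addresses are made up for illustration):

```
# /etc/hosts on both peers (example addresses for the direct link)
10.0.0.1    drbd-node1
10.0.0.2    drbd-node2
```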

The second thing we need to configure is the cluster engine (corosync). Its configuration goes to /etc/corosync/corosync.conf:

compatibility: whitetank

totem {
  version: 2
  secauth: off
  threads: 0
  interface {
    ringnumber: 0
    # network address of the back-to-back eth1 link (example value)
    bindnetaddr: 10.0.0.0
    mcastport: 5405
  }
}

logging {
  fileline: off
  to_stderr: no
  to_logfile: yes
  to_syslog: yes
  # the pathname of the log file
  logfile: /var/log/cluster/corosync.log
  debug: off
  timestamp: on
  logger_subsys {
    subsys: AMF
    debug: off
  }
}

amf {
  mode: disabled
}

The bindnetaddr parameter must contain a network address. We configure corosync to work on the eth1 interfaces, connecting our nodes back-to-back over a 1Gbps network. Also, we should configure iptables to accept multicast traffic on both hosts.
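A sketch of the corresponding rule for /etc/sysconfig/iptables on both hosts (the interface and port range are assumptions matching the setup above; corosync uses mcastport and mcastport minus one, hence 5404:5405):

```
# /etc/sysconfig/iptables fragment (sketch): accept corosync UDP traffic
# on the back-to-back eth1 link
-A INPUT -i eth1 -p udp -m udp --dport 5404:5405 -j ACCEPT
```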

It’s noteworthy that these configurations should be identical on both cluster nodes.

After the cluster has been prepared, we can change the LVM locking type in /etc/lvm/lvm.conf on both drbd-connected nodes:

global {
  locking_type = 3
}

Start the cman and clvmd services on the DRBD peers to get the cluster ready for action:

(drbd-node1)# service cman start
Starting cluster:
Checking if cluster has been disabled at boot... [ OK ]
Checking Network Manager... [ OK ]
Global setup... [ OK ]
Loading kernel modules... [ OK ]
Mounting configfs... [ OK ]
Starting cman... [ OK ]
Waiting for quorum... [ OK ]
Starting fenced... [ OK ]
Starting dlm_controld... [ OK ]
Unfencing self... [ OK ]
Joining fence domain... [ OK ]
(drbd-node1)# service clvmd start
Starting clvmd:
Activating VG(s): 2 logical volume(s) in volume group "vg-sys" now active
2 logical volume(s) in volume group "vg_shared" now active
[ OK ]

Now, as we already have a Volume Group on the shared storage, we can easily make it cluster-aware:

(drbd-node1)# vgchange -c y vg_shared

Now we see the ‘c’ flag in VG Attributes:

(drbd-node1)# vgs
VG        #PV #LV #SN Attr    VSize   VFree
vg_shared   1   3   0 wz--nc  1.29t   1.04t
vg_sys      1   2   0 wz--n-  19.97g  5.97g

As a result, Logical Volumes created in the vg_shared volume group will be active on both nodes, and clustered locking is enabled for operations with volumes in this group. LVM commands can be issued on both hosts and clvmd takes care of possible concurrent metadata changes.

OpenStack Nova: basic disaster recovery

We have published a new blog post about handling basic issues and virtual machine recovery methods. From the blog:

Today, I want to take a look at some possible issues that may be encountered while using OpenStack. The purpose of this post is to share our experience dealing with the hardware or software failures that will definitely be faced by anyone who attempts to run OpenStack in production.

Read the complete post on the original blog.

VMblog interviews Citrix and Dell on OpenStack

Direct from the VMblog site comes a great video with Sameer Dholakia from Citrix and Joseph George from Dell on OpenStack and Project Olympus.

Citrix Synergy 2011 – Interview with Sameer Dholakia and Joseph George about OpenStack from VMblog on Vimeo.

OpenStack’s Big Day at Citrix Synergy

Today at Citrix Synergy, the OpenStack project received another big boost with the announcement of Citrix Project Olympus. From the project’s website:

Leveraging OpenStack, Project Olympus delivers the next generation in cloud computing – a scalable, flexible, open-by-design cloud solution that enables service providers and enterprises alike to build their own cloud services.

The early access program for Project Olympus gives users:

  • Citrix tested, certified and supported version of OpenStack
  • A cloud-optimized version of XenServer
  • Access to hardware
  • Personalized design, engineering, training and support services

The press release on this announcement from Citrix is available here. Feedback from the broader community on this announcement has been incredibly positive and I wanted to share a few blog posts:

Mirantis: OpenStack Deployment on Fedora using Kickstart

The team at Mirantis published a new blog post today on deploying OpenStack on Fedora. From the blog:

In this article, we discuss our approach to performing an OpenStack installation on Fedora using our RPM repository and Kickstart. When we first started working with OpenStack, we found that the most popular platform for deploying OpenStack was Ubuntu, which seemed like a viable option for us, as there are packages for it available, as well as plenty of documentation. However, our internal infrastructure is running on Fedora, so instead of migrating the full infrastructure to Ubuntu, we decided to make OpenStack Fedora-friendly. The challenge in using Fedora, however, is that there aren’t any packages, nor is there much documentation available. Details of how we worked around these limitations are discussed below.

The complete blog post, with detailed step-by-step directions, is available on the Mirantis blog.

SMEStorage supports OpenStack Object Storage in Open Cloud Platform

SMEStorage, a new company participating in the OpenStack ecosystem, announced on Monday support for OpenStack Object Storage in their Open Cloud SaaS Platform and Cloud Appliance. Step-by-step directions with screenshots of this integration are available on their blog. A follow-up blog post details how to leverage an S3 API abstraction to access OpenStack Object Storage. Be sure to check out their solution and support our ecosystem solutions.
