Community Manager – Where Did He Go?

From June 10 – 19, I will be out of the country, traveling to remote regions of the Caribbean with no access to the Internet or my mobile phone. During this period I will not be releasing the weekly community newsletter, but a fellow community member will assist with the weekly data tracking to maintain our statistics. If you have any questions about becoming a participating company, weekly data, or other community-related matters, please contact Summer Fouche, the Community Manager Intern this summer.

Hold the Date – OpenStack Conference

The OpenStack Community requests that you mark your calendars for October 5-7, 2011, for the OpenStack Conference in Boston, MA. With an opening reception on the evening of October 5 and two full days of all things OpenStack, this conference is the must-attend event for open source enthusiasts, cloud computing technologists, and OpenStack participating company ecosystem partners.

More details on this event will come from OpenStack soon.

OpenStack Social Media Survey Results

Over the past week, the OpenStack Community Management team conducted an online Social Media Engagement survey to better understand the needs and wants of various community member types in relation to the information available. The survey results are available in SurveySummary_06082011; no details on individual respondents are included.

Based on these results, we are taking the following three actions to meet the needs of the community:

1. Result – OpenStack.org and the OpenStack presence on various social networks (especially Twitter and LinkedIn) are the primary information sources for most respondents, with 60% looking to receive even more information on a regular basis.

Action – A social network plan will be drafted to better link the various OpenStack information repositories with community member information access points. This plan will drive content from OpenStack Slideshare, Vimeo, Flickr, OpenStack.org, and other repositories to Twitter, LinkedIn, Facebook, etc. Be on the lookout for this plan, as we intend to ask the broader community for feedback on the various options it presents.

2. Result – Two community member types stood out in the survey: participating company prospects and active contributor prospects.

Action – A “Getting Started with OpenStack” document is planned for publication, with all the information on becoming a participating company, active participant, developer, etc. This document will be visible on many social networking sites, as well as on the OpenStack home page, to provide critical information for these community prospects.

3. Result – The OpenStack Forum is showing strength for people looking for information and answers to common OpenStack user questions.

Action – New marketing and promotion initiatives will be started to significantly raise awareness of the OpenStack Forum and help drive more participation.

Thanks again to everyone who participated. If you have more thoughts on this topic, please contact Summer Fouche or Stephen Spector.

Clustered LVM on DRBD resource in Fedora Linux

(Crossposted from the Mirantis Official Blog)

As Florian Haas pointed out in a comment on my previous post, our shared storage configuration requires special precautions to avoid data corruption when the two hosts connected via DRBD try to manage LVM volumes simultaneously. Generally, these precautions concern locking LVM metadata operations while running DRBD in ‘dual-primary’ mode.

Let’s examine this in detail. The LVM locking mechanism is configured in the global section of /etc/lvm/lvm.conf. The ‘locking_type’ parameter is the most important one here: it defines which locking mechanism LVM uses while changing metadata. It can be set to:

• ‘0’: disables locking completely – dangerous to use;
• ‘1’: the default, local file-based locking. It knows nothing about the cluster and possible conflicting metadata changes;
• ‘2’: uses an external shared library, defined by the ‘locking_library’ parameter;
• ‘3’: uses built-in LVM clustered locking;
• ‘4’: read-only locking, which forbids any metadata changes.
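
A quick way to confirm which locking type a node is currently using (a minimal sketch; the grep pattern assumes the stock Fedora lvm.conf layout, where the setting sits uncommented on its own line):

# Show the effective locking_type on this node
(drbd-node1)# grep -E '^[[:space:]]*locking_type' /etc/lvm/lvm.conf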

The simplest way is to use local locking on one of the drbd peers and to disable metadata operations on the other. This has a serious drawback, though: Volume Groups and Logical Volumes created on one peer will not be activated automatically on the other, ‘passive’ peer. That is not acceptable for a production environment and cannot be automated easily.
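
One way to read this asymmetric setup, per the locking types listed above (a sketch only, not a recommendation from the LVM documentation):

# /etc/lvm/lvm.conf on the ‘active’ peer: default local file-based locking
global {
  locking_type = 1
}

# /etc/lvm/lvm.conf on the ‘passive’ peer: read-only locking that
# forbids any metadata changes on this node
global {
  locking_type = 4
}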

But there is another, more sophisticated way: Linux-HA (Heartbeat) coupled with the LVM Resource Agent. It automates the activation of newly created LVM resources on the shared storage, but still provides no locking mechanism suitable for ‘dual-primary’ DRBD operation.
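
For illustration, a Pacemaker/crm-shell definition of such a resource might look roughly like this (a hypothetical sketch; the resource name p_vg_shared is made up, and only the volgrpname parameter of ocf:heartbeat:LVM is assumed here):

# Hypothetical crm shell snippet: activate vg_shared through the LVM RA
primitive p_vg_shared ocf:heartbeat:LVM \
  params volgrpname="vg_shared" \
  op monitor interval="30s"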

It should be noted that full support of clustered locking for LVM can be achieved with the lvm2-cluster Fedora RPM package from the standard repository. It contains the clvmd service, which runs on all hosts in the cluster and controls LVM locking on the shared storage. In our case, the cluster consists of just the two drbd peers.

clvmd requires a cluster engine in order to function properly. It is provided by the cman service, which is installed as a dependency of lvm2-cluster (other dependencies may vary from installation to installation):

(drbd-node1)# yum install clvmd
...
Dependencies Resolved

================================================================================
 Package          Arch    Version         Repository  Size
================================================================================
Installing:
 lvm2-cluster     x86_64  2.02.84-1.fc15  fedora      331 k
Installing for dependencies:
 clusterlib       x86_64  3.1.1-1.fc15    fedora       70 k
 cman             x86_64  3.1.1-1.fc15    fedora      364 k
 fence-agents     x86_64  3.1.4-1.fc15    updates     182 k
 fence-virt       x86_64  0.2.1-4.fc15    fedora       33 k
 ipmitool         x86_64  1.8.11-6.fc15   fedora      273 k
 lm_sensors-libs  x86_64  3.3.0-2.fc15    fedora       36 k
 modcluster       x86_64  0.18.7-1.fc15   fedora      187 k
 net-snmp-libs    x86_64  1:5.6.1-7.fc15  fedora      1.6 M
 net-snmp-utils   x86_64  1:5.6.1-7.fc15  fedora      180 k
 oddjob           x86_64  0.31-2.fc15     fedora       61 k
 openais          x86_64  1.1.4-2.fc15    fedora      190 k
 openaislib       x86_64  1.1.4-2.fc15    fedora       88 k
 perl-Net-Telnet  noarch  3.03-12.fc15    fedora       55 k
 pexpect          noarch  2.3-6.fc15      fedora      141 k
 python-suds      noarch  0.3.9-3.fc15    fedora      195 k
 ricci            x86_64  0.18.7-1.fc15   fedora      584 k
 sg3_utils        x86_64  1.29-3.fc15     fedora      465 k
 sg3_utils-libs   x86_64  1.29-3.fc15     fedora       54 k

Transaction Summary
================================================================================
Install      19 Package(s)

The only thing we need the cluster for is running clvmd, so the configuration of the cluster itself is pretty basic. Since we don’t yet need advanced features like automated fencing, we specify manual fence handling. As we have only 2 nodes in the cluster, we tell cman about that as well. The configuration for cman resides in the /etc/cluster/cluster.conf file:

<?xml version="1.0"?>
<cluster name="cluster" config_version="1">
  <!-- post_join_delay: number of seconds the daemon will wait before
        fencing any victims after a node joins the domain
  post_fail_delay: number of seconds the daemon will wait before
        fencing any victims after a domain member fails
  clean_start    : prevent any startup fencing the daemon might do.
        It indicates that the daemon should assume all nodes
        are in a clean state to start. -->
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
   <clusternode name="drbd-node1" votes="1" nodeid="1">
    <fence>
    <!-- Handle fencing manually -->
     <method name="human">
      <device name="human" nodename="drbd-node1"/>
     </method>
    </fence>
   </clusternode>
   <clusternode name="drbd-node2" votes="1" nodeid="2">
    <fence>
    <!-- Handle fencing manually -->
     <method name="human">
      <device name="human" nodename="drbd-node2"/>
     </method>
    </fence>
   </clusternode>
  </clusternodes>
  <!-- cman two nodes specification -->
  <cman expected_votes="1" two_node="1"/>
  <fencedevices>
  <!-- Define manual fencing -->
   <fencedevice name="human" agent="fence_manual"/>
  </fencedevices>
</cluster>

The clusternode name should be a fully qualified domain name, resolvable via DNS or present in /etc/hosts. The number of votes is used to determine the quorum of the cluster. In this case we have two nodes with one vote each, and a single vote is enough to make the cluster run (to have a quorum), as configured by the cman expected_votes attribute.
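
For a back-to-back setup without DNS, /etc/hosts on both nodes might carry entries like these (the 10.0.0.x addresses are illustrative, chosen to match the 10.0.0.0 network used for corosync below):

# /etc/hosts (identical on both nodes)
10.0.0.1   drbd-node1
10.0.0.2   drbd-node2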

The second thing we need to configure is the cluster engine (corosync). Its configuration goes into /etc/corosync/corosync.conf:

compatibility: whitetank

totem {
  version: 2
  secauth: off
  threads: 0
  interface {
    ringnumber: 0
    bindnetaddr: 10.0.0.0
    mcastaddr: 226.94.1.1
    mcastport: 5405
  }
}

logging {
  fileline: off
  to_stderr: no
  to_logfile: yes
  to_syslog: yes
  # the pathname of the log file
  logfile: /var/log/cluster/corosync.log
  debug: off
  timestamp: on
  logger_subsys {
    subsys: AMF
    debug: off
  }
}

amf {
  mode: disabled
}

The bindnetaddr parameter must contain a network address. We configure corosync to work on the eth1 interfaces, which connect our nodes back-to-back over a 1 Gbps network. We should also configure iptables to accept multicast traffic on both hosts.
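
A minimal iptables sketch for that (assuming the eth1 back-to-back link above; corosync conventionally uses UDP ports 5404-5405, matching the mcastport in the config):

# Accept corosync multicast and cluster traffic on eth1 (run on both nodes)
iptables -A INPUT -i eth1 -d 226.94.1.1 -j ACCEPT
iptables -A INPUT -i eth1 -p udp --dport 5404:5405 -j ACCEPT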

It’s noteworthy that these configurations should be identical on both cluster nodes.

After the cluster has been prepared, we can change the LVM locking type in /etc/lvm/lvm.conf on both drbd-connected nodes:

global {
  ...
  locking_type = 3
  ...
}

Now we start the cman and clvmd services on both drbd peers to get our cluster ready for action:

(drbd-node1)# service cman start
Starting cluster:
   Checking if cluster has been disabled at boot...        [  OK  ]
   Checking Network Manager...                             [  OK  ]
   Global setup...                                         [  OK  ]
   Loading kernel modules...                               [  OK  ]
   Mounting configfs...                                    [  OK  ]
   Starting cman...                                        [  OK  ]
   Waiting for quorum...                                   [  OK  ]
   Starting fenced...                                      [  OK  ]
   Starting dlm_controld...                                [  OK  ]
   Unfencing self...                                       [  OK  ]
   Joining fence domain...                                 [  OK  ]
(drbd-node1)# service clvmd start
Starting clvmd:                                            [  OK  ]
Activating VG(s):   2 logical volume(s) in volume group "vg_sys" now active
  2 logical volume(s) in volume group "vg_shared" now active
                                                           [  OK  ]
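
To have both services come up at boot as well (assuming the SysV init scripts these packages ship on Fedora 15), they can be enabled with chkconfig:

(drbd-node1)# chkconfig cman on
(drbd-node1)# chkconfig clvmd on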

Now, as we already have a Volume Group on the shared storage, we can easily make it cluster-aware:

(drbd-node1)# vgchange -c y vg_shared

Now we see the ‘c’ flag in the VG attributes:

(drbd-node1)# vgs
  VG        #PV #LV #SN Attr   VSize  VFree
  vg_shared   1   3   0 wz--nc  1.29t  1.04t
  vg_sys      1   2   0 wz--n- 19.97g  5.97g

As a result, Logical Volumes created in the vg_shared volume group will be active on both nodes, and clustered locking is enabled for operations on volumes in this group. LVM commands can now be issued on both hosts, and clvmd takes care of possible concurrent metadata changes.
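
As a quick check (the volume name lv_test is hypothetical), a Logical Volume created on one node should immediately be visible and active on the other:

(drbd-node1)# lvcreate -L 10G -n lv_test vg_shared
(drbd-node2)# lvs vg_shared   # lv_test is listed here as well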

OpenStack Nova: basic disaster recovery

We have published a new blog post about handling basic issues and virtual machine recovery methods. From the blog:

Today, I want to take a look at some possible issues that may be encountered while using OpenStack. The purpose of this topic is to share our experience dealing with the hardware or software failures that will certainly be faced by anyone who attempts to run OpenStack in production.

Read the complete post at http://mirantis.blogspot.com/2011/06/openstack-nova-basic-disaster-recovery.html.

Summer of OpenStack

Summer 2011 (or Winter 2011 for those south of the equator) is upon us, and the OpenStack community is very active with appearances at a wide variety of global events. To keep you in the loop, here is a brief overview of the events currently on the calendar for OpenStack. Some of these events have not yet published agendas, but rest assured that OpenStack is busy getting speakers signed up. If you are aware of an event not on this list, please let me know so I can promote it. You can also follow our events at http://www.openstack.org/community/events/.

Highlights

• CloudCampNY featuring an OpenStack breakout session and after party, Tuesday, June 7, 5:30 – 11:30 pm (http://www.cloudcamp.org/ny)
• OpenStack planning evening event at Structure in San Francisco, CA. Want to get involved? Email [email protected]

June 2011

July 2011

August 2011

• Xen Summit North America (August 2-3) in Santa Clara, CA;
• HostingCon (August 8-10) in San Diego, CA;
• NASA IT Summit (August 15-17) in San Francisco, CA;
• LinuxCon North America (August 25-29) in Vancouver; OpenStack is a Bronze Sponsor

Community Weekly Newsletter (May 27 – June 3)

OpenStack Community Newsletter – June 3, 2011

This weekly newsletter is a way for the community to learn about the various activities occurring each week. If you would like to add content to a weekly update or have an idea about this newsletter, please email [email protected].

Video: Paul Pettigrew, Mach Technology (from OpenStack on Vimeo).

HIGHLIGHTS

EVENTS

DEVELOPER COMMUNITY

GENERAL COMMUNITY

COMMUNITY STATISTICS (5/27 – 6/2)

• Data Tracking Graphs – http://wiki.openstack.org/WeeklyNewsletter
• OpenStack Compute (NOVA) Data
  • 12 Active Reviews
  • 256 Active Branches – owned by 72 people & 15 teams
  • 1344 commits by 65 people in the last month
• OpenStack Object Storage (SWIFT) Data
  • 1 Active Review
  • 68 Active Branches – owned by 22 people & 6 teams
  • 137 commits by 15 people in the last month
• OpenStack Image Registry (GLANCE) Data
  • 2 Active Reviews
  • 21 Active Branches – owned by 6 people & 5 teams
  • 84 commits by 8 people in the last month
• Twitter Stats for Week: #openstack 214 total tweets; OpenStack 1064 total tweets (does not include RTs)
• Bugs Stats for Week: 385 Tracked Bugs; 71 New Bugs; 38 In-process Bugs; 0 Critical Bugs; 21 High Importance Bugs; 255 Bugs (Fix Committed)
• Blueprints Stats for Week: 205 Blueprints; 13 Essential, 16 High, 20 Medium, 26 Low, 130 Undefined
• OpenStack Website Stats for Week: 12,896 Visits, 35,054 Pageviews, 50.28% New Visits
  • Top 5 Pages: Home 39.26%; /projects 11.55%; /projects/compute 17.08%; /projects/storage 11.06%; /community 7.18%

OPENSTACK IN THE NEWS

OpenStack Glance Webinar

Jay Pipes, Glance PTL, and James Weir, CTO of UShareSoft.com, are hosting “Future Direction and Discussion on Glance”; the webinar is scheduled for June 21, 2011, at noon EST. To register, please sign up at https://cc.readytalk.com/cc/schedule/display.do?udc=i5l6gkl36wsy.

This webinar will provide an introduction to the project, followed by an open community discussion of the Glance roadmap.

Welcome Summer to the OpenStack Community

Hello OpenStack community! My name is Summer and I am the community manager intern this summer. I’m excited to be working with Stephen, the OpenStack team, and all of you on this dynamic and rapidly growing project. I am a graduate student in the Library and Information Science program at the University of Texas at Austin. My library science background and the fact that I have never worked in the tech sector before make this a novel experience for me. But while I’m scrambling to understand the ins and outs of cloud computing and this unique open-source community, several similarities have presented themselves.

One of the things that attracted me to the field of library/information science is its egalitarian and community-serving aspect. Connecting information or data to the people who need it is a basic tenet of the information profession. I love the idea that when I do not know the answer to a question (which is very, very often), someone out there does, and with a little searching that answer can be at my fingertips. Even this simple model of information sharing is analogous to the world of cloud computing. Rather than having an innate and static answer, I am able to access the most recent knowledge from a variety of sources.

Of course, building a network of resources to connect the question and the answer is essential. Even the best librarian won’t be able to find the right answer if they are not connected to a variety of resources, journals, articles, and, most importantly, people. This is where the OpenStack community seems to be excelling and growing so rapidly. It’s amazing to see how many local communities are springing up all around the world and working to increase the shared knowledge exponentially.

In the next couple of months, I will be working with Stephen to help make the various OpenStack communities more connected and up to date on recent developments through the various social networks and forums available. Stephen had me read Eric Raymond’s classic text “The Cathedral and the Bazaar” as an introduction to the open source ethos. This essay reminded me of a similar paradigm shift in the library world. Traditionally, information professionals have been seen as a kind of gatekeeper (cathedral keeper?) of knowledge; now, however, they are recognizing the importance of opening those gates, letting in the bazaar, and participating in the conversation. So, hello! I’m excited to be part of this community and look forward to contributing to the OpenStack conversation.

I can be reached at [email protected].
