Havana: An Enterprise IT Perspective

Intel IT has been working with OpenStack in our labs since Cactus. Our first production deployments were on Diablo, which we quickly moved forward to Essex, and we have been running production apps on that release since the summer of 2012.  Since then we have been getting deeper and deeper into OpenStack: we recently had eight Intel IT engineers approved to start contributing to the codebase, and we made our first contributions in the last few months.  That may sound simple, but it is complex for a large enterprise shop like ours – and we are super jazzed to contribute more and more.  We look forward to working as part of the community with greater breadth and depth now; thanks for having us 🙂

While we are in the midst of our Grizzly rollout, we are also getting involved in the specifics of Havana, and I wanted to share some of the key features we are excited about and the use cases they help us with.

Our overall goal is to turn our entire data center into a software-exposed environment: virtual machines, physical machines, networking, storage, and all of the components that make our apps, complex and simple, work at scale.  A number of the Havana features are going to help us take some big steps toward that level of capability:

At the foundation level, since we run traditional enterprise workloads alongside new cloud-aware apps, it is very important that resilient infrastructure capabilities continue to improve – for instance, live migration and restart-on-failure, both of which are enabled by booting from volume-backed block storage, along with improvements in the orchestration code.  We also intend to use OpenStack to control physical nodes, so the bare-metal-as-a-service work is increasingly important to us; our intent is to control our VMMs and physical nodes with one scheduler and one set of APIs/CLIs.
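To make that concrete, here is a minimal sketch of what boot-from-volume plus live migration can look like through python-novaclient; the credentials, volume ID, flavor, and host names below are placeholders, and your client version and auth setup may differ.

```python
# Rough sketch using python-novaclient (Havana-era v1.1 API);
# all credentials, IDs, and host names below are placeholders.
from novaclient.v1_1 import client

nova = client.Client("admin", "secret", "it-cloud",
                     "http://keystone.example.com:5000/v2.0")

# Boot an instance whose root disk lives on a Cinder volume, so the
# guest can be restarted or migrated independently of local disk.
server = nova.servers.create(
    name="app-node-01",
    image=None,                                       # no ephemeral root image
    flavor=nova.flavors.find(name="m1.medium"),
    block_device_mapping={"vda": "<volume-id>:::0"},  # volume-backed root
)

# Live-migrate the running instance to another hypervisor, e.g. for
# planned maintenance on the source host.
server.live_migrate(host="compute-02", block_migration=False,
                    disk_over_commit=False)
```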

A lot of our work is happening at the higher levels of OpenStack.  As we enable more complex applications and cloud-aware apps, we need strong orchestration and automation for all of the resources OpenStack controls – for instance auto-scaling, the ability to deploy many machines in complex collections, and control of networking aspects from IPs to firewalls, all as part of the rollout of a single application.  Advances in Heat that get us to production level for our more advanced use cases are vital, as are the improvements happening with LBaaS and OpenStack Networking.  We are also excited about the introduction of the service-chaining concept, which will let us declare our application flow and have OpenStack implement it from the Internet all the way to the back-end databases.
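As an illustration, here is a minimal sketch of driving that kind of orchestration from Python with python-heatclient, launching a tiny HOT template; the endpoint, token, image, and network names are placeholders, and a real application template would declare many more resources (scaling groups, load balancers, security groups).

```python
# Rough sketch using python-heatclient; the endpoint, token, and the
# template's image/network names are placeholders.
from heatclient.client import Client

# A tiny HOT template: one server attached to an existing network.
template = """
heat_template_version: 2013-05-23
resources:
  web_server:
    type: OS::Nova::Server
    properties:
      image: fedora-cloud          # placeholder image name
      flavor: m1.small
      networks:
        - network: app-net         # placeholder Neutron network
"""

heat = Client("1",
              endpoint="http://heat.example.com:8004/v1/<tenant-id>",
              token="<keystone-token>")

# Create the stack; Heat provisions every declared resource and tracks
# the whole application as a single unit.
heat.stacks.create(stack_name="web-tier", template=template)
```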

As a large IT shop we also have to run our capacity like a supply chain, so we need quotas and showback of resources for all of the services we expose, to encourage the right usage behaviors while still allowing for rapid elasticity.  Ceilometer metering across all resource types, with granularity down to users and projects, gets us where we need to be for managing our capacity in real time and keeping proper buffer capacity before new physical hardware lands – in turn allowing us to appear infinite in resources to our end users.
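For example, here is a rough sketch of the kind of per-project controls and metering we have in mind, using python-novaclient quotas and python-ceilometerclient statistics; the credentials, tenant ID, quota values, and meter choice are placeholders, and the meters available depend on your deployment.

```python
# Rough sketch: per-project quotas via python-novaclient and usage
# statistics via python-ceilometerclient. Credentials, tenant IDs,
# and the meter name are placeholders.
from novaclient.v1_1 import client as nova_client
from ceilometerclient import client as ceilo_client

nova = nova_client.Client("admin", "secret", "admin",
                          "http://keystone.example.com:5000/v2.0")

# Cap a project's footprint so elasticity stays inside planned capacity.
nova.quotas.update("<tenant-id>", cores=200, ram=512000, instances=100)

ceilometer = ceilo_client.get_client(
    "2",
    os_username="admin", os_password="secret", os_tenant_name="admin",
    os_auth_url="http://keystone.example.com:5000/v2.0")

# Showback: average CPU utilization for one project over hourly periods.
stats = ceilometer.statistics.list(
    meter_name="cpu_util",
    q=[{"field": "project_id", "op": "eq", "value": "<tenant-id>"}],
    period=3600)
for s in stats:
    print(s.period_start, s.avg)
```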

Speaking for the entire Intel IT Open Cloud team: the fast pace of the OpenStack community is why we decided to run this solution and contribute to it, and we look forward to even more after Havana, which will take our open cloud platform further out onto the cutting edge.  Don’t rest yet – we are on a marathon.  Hope to see you in November.
