Austin
April 25-29, 2016

Event Details

Please note: All times listed below are in Central Time Zone


Big Data Rapid Prototyping by Using Magnum with Cinder and Manila

Developers face unique challenges when trying to test against large datasets. Traditionally, the cost and time associated with copying and storing large amounts (multiple terabytes) of data have been prohibitive, so test and development have been done with synthetic data at best. Within the DevOps model for building applications, having full sets of real application data to test against is crucial at all stages of an application’s lifecycle. Imagine being able to run thousands of tests against replicas of entire multi-TB production datasets, enabling previously unattainable levels of reliability and allowing for an accelerated cadence of application development.

We will discuss techniques for presenting multiple full datasets to developers within a containerized environment for both ephemeral and persistent use cases, leveraging Manila within an OpenStack environment.
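As a rough illustration of the kind of workflow the session covers, the sketch below uses the Manila CLI to create an NFS share holding a dataset replica and bind-mount it into a container for the persistent use case. All names, sizes, networks, and addresses are hypothetical placeholders, not commands from the session itself.

```shell
# Create a 2 TB NFS share to hold a replica of the production dataset
# (share name and share network are illustrative)
manila create NFS 2048 --name prod-dataset-replica --share-network demo-share-net

# Allow NFS access from the container host's subnet
manila access-allow prod-dataset-replica ip 10.0.0.0/24

# Look up the export location for the share
manila share-export-location-list prod-dataset-replica

# On the container host: mount the share, then expose it to a container
# read-only for ephemeral test runs (paths and image name are placeholders)
mount -t nfs 10.0.0.5:/exports/prod-dataset-replica /mnt/prod-dataset
docker run -v /mnt/prod-dataset:/data:ro my-test-suite
```

Because each test container mounts the same replica read-only, many tests can run in parallel against the full dataset without additional copies.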


What can I expect to learn?

How a large dataset (multi-TB) can be presented to a containerized application and rapidly replicated for many different use cases.

Thursday, April 28, 11:00am - 11:40am (4:00pm - 4:40pm UTC)
Difficulty Level: Advanced
Technical Marketing Engineer
Andrew has worked in the information technology industry for over 10 years, with a rich history in database development, DevOps, and virtualization. He is currently focused on storage and virtualization automation, and on driving simplicity into everyday workflows.