OpenStack 2016

When I went to my first OpenStack Summit in Paris in 2014, it was a huge event. The vision of an open-source private cloud was out of the bottle and the hype exploded. But in the two years since, in the market where I work, I haven't seen much adoption. So this year I went to the summit in Barcelona with low to moderate expectations. Luckily, I was wrong, because OpenStack is still a huge thing. Here are just a few reasons why.

First of all – numerous improvements that make OpenStack more robust and easier to maintain and upgrade. Auto-remediation, for example, will automatically add more hypervisors or evacuate VMs in case of hardware failure, resolve RabbitMQ problems, clean up log files, etc. With the Newton release, you will be able to upgrade the cloud without taking it down. Another interesting feature is that you can now create pools of external IP addresses, or create a compute node without an IP address and assign one later.

Next, a whole set of new projects. For example, Murano, which facilitates application deployment: developers can package and publish their applications in a catalog, and users can deploy them at the push of a button. Or Sahara, which automates the deployment of Hadoop clusters for big data analytics.

But the most important development is in the domain of containers. Application containers are lightweight, isolated environments that share the host operating system's kernel, which reduces resource overhead and simplifies application deployment and migration. Here, we have three complementary options:

  1. Magnum – Container-as-a-Service. Using the Magnum APIs, you create a “bay” on VMs or bare-metal machines for a container orchestration engine such as Kubernetes or Docker Swarm. Once the bay is up, you use the native Kubernetes or Docker commands to manage nodes and deploy containers. The advantage is obvious – containers become available and managed in an OpenStack cloud the same way as VMs or bare-metal servers.
  2. Kuryr – links containers to the OpenStack networking layer (Neutron).
  3. Kolla – containerizes the OpenStack services themselves, enabling OpenStack-on-OpenStack deployments.
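To make the Magnum workflow concrete, here is a minimal sketch using the Magnum CLI of that era, followed by plain Kubernetes commands. All names (the model, keypair, image, network and flavor) are placeholders I chose for illustration, not values from the summit talks:

```shell
# Define a bay model (a template): which orchestration engine,
# which guest image, and which flavor to build the bay from.
magnum baymodel-create --name k8s-model \
  --coe kubernetes \
  --image-id fedora-atomic-latest \
  --keypair-id mykey \
  --external-network-id public \
  --flavor-id m1.small

# Create the bay itself: Magnum provisions the nodes and installs Kubernetes.
magnum bay-create --name k8s-bay --baymodel k8s-model --node-count 2

# From here on it is ordinary Kubernetes: fetch credentials for kubectl,
# then deploy containers with the native tooling.
magnum bay-config k8s-bay
kubectl run nginx --image=nginx --replicas=2
```

This is exactly the point of Magnum: OpenStack handles provisioning and multi-tenancy, while day-to-day container management stays with the native Kubernetes or Docker tools.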
[Image: OpenStack logical architecture diagram, Kilo release (2015)]

Apart from the new features, OpenStack adoption worldwide and across industries is much higher than before, with more serious use cases not only in telecom but also in industry, government and financial services. Adoption is particularly high in China – for example, the UnionPay credit card company runs its production system on OpenStack. An interesting fact: 40% of all contributions to OpenStack now come from Chinese developers. In Switzerland, CERN is expanding its capacity and remains the largest OpenStack installation in the world.

There were some interesting case studies of running artificial intelligence (neural networks and deep learning) and HPC workloads on OpenStack. For GPU cards, the recommendation is to implement full GPU virtualization so that a GPU can be shared across VMs. For TensorFlow workloads (a deep learning framework), Sahara is a viable option, but Magnum with Kubernetes is easier to implement, since Kubernetes has better support for TensorFlow (both originated at Google) and can automatically scale container nodes.
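As a small illustration of why Kubernetes is attractive for this kind of workload: once a Magnum bay is running, a TensorFlow job can be launched and scaled with a couple of kubectl commands. The deployment name and resource limits below are placeholders of my own, not from the case studies; `tensorflow/tensorflow` is the official public Docker image:

```shell
# Launch a TensorFlow container as a Kubernetes deployment.
kubectl run tf-train --image=tensorflow/tensorflow:latest

# Scale out manually...
kubectl scale deployment tf-train --replicas=4

# ...or let Kubernetes scale it automatically based on CPU load.
kubectl autoscale deployment tf-train --min=1 --max=8 --cpu-percent=80
```

This elasticity – scaling compute for a training workload with one command or automatically – is hard to replicate with plain VM-based deployments.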

Conclusion – OpenStack is very relevant. It is a massive platform for innovation and global collaboration, a standard for hybrid cloud implementations, and an accelerator for business. My recommendation for companies: don't take it lightly, ask for advice from those who have done it successfully, and take a bold step into the “OpenStack” future.

Finally, the organization in Barcelona was very smooth – no small feat when you have to accommodate and provide space, Wi-Fi, audio/video infrastructure and food for more than 6,000 visitors.


See more about the event on: https://www.openstack.org/summit/barcelona-2016/

Sasha Lazarevic, October 2016
