Red Hat OpenStack Platform 11

Problems it solves

Decentralized IT systems

Values

Reduce Costs

Improve Customer Service

Red Hat OpenStack Platform 11

Red Hat OpenStack Platform 11 is based on the upstream OpenStack release, Ocata, the 15th release of OpenStack. It brings a plethora of features, enhancements, bugfixes, documentation improvements and security updates. Red Hat OpenStack Platform 11 contains the additional usability, hardening and support that all Red Hat releases are known for. And with key enhancements to Red Hat OpenStack Platform’s deployment tool, Red Hat OpenStack Director, deploying and upgrading enterprise, production-ready private clouds has never been easier.

Description

Composable Upgrades

By far the most exciting addition in Red Hat OpenStack Platform 11 is the extension of composable roles to include composable upgrades.

Composable roles

As a refresher, a composable role is a collection of services grouped together to deploy the Overcloud's main components. There are five default roles (Controller, Compute, BlockStorage, ObjectStorage, and CephStorage), allowing most common architectural scenarios to be achieved out of the box. Each service in a composable role is defined by an individual Heat template that follows a standardised approach, ensuring every service implements a basic set of input parameters and output values. With this approach, service templates can easily be moved around, or composed, into a custom role, creating greater flexibility around service placement and management.

Improvements for NFV

Co-location of Ceph on Compute now supported in production (GA)

Co-locating Ceph with Nova is done by placing the Ceph Object Storage Daemons (OSDs) directly on the compute nodes. Co-location lowers many cost and complexity barriers for workloads with minimal and/or predictable storage I/O requirements by reducing the total number of nodes an OpenStack deployment requires. Hardware previously dedicated to storage-specific requirements can now be used by the compute footprint for increased scale. With version 11, co-located storage is also fully supported for deployment by director as a composable role. Operators can more easily perform detailed and targeted deployments of co-located storage, including technologies such as SR-IOV, all from a custom role. The process is fully supported, with comprehensive documentation and a newly released reference architecture. For telcos, support for co-locating storage can help optimize workloads and deployment architectures across a varied range of hardware and networking technologies within a single OpenStack deployment.

VLAN-Aware VMs now supported in production (GA)

A VLAN-aware VM, or more specifically a "Neutron trunk port," is how an OpenStack instance can carry VLAN-tagged frames across a single vNIC. This allows an operator to use fewer vNICs to access many separate networks, significantly reducing complexity by removing the need for one vNIC per network. Neutron does this by allowing subports off the original parent port, effectively turning the parent port into a virtual trunk. Each subport can have its own segmentation ID assigned directly to it, allowing the operator to give each port its own VLAN.
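
To make the trunk-port workflow more concrete, here is a minimal sketch using the openstacksdk Python library: it creates a parent port, attaches a VLAN-tagged subport, and boots an instance on the parent. The cloud name, network names, image, flavor and segmentation ID are illustrative assumptions rather than values from this release, so adapt them to your environment.

    import openstack

    # Connect using a placeholder cloud entry from clouds.yaml.
    conn = openstack.connect(cloud='mycloud')

    # Parent port on the instance's primary network, plus a port on the
    # network to be reached over VLAN 100 (all names are illustrative).
    mgmt_net = conn.network.find_network('mgmt-net')
    tenant_net = conn.network.find_network('tenant-net')
    parent = conn.network.create_port(network_id=mgmt_net.id, name='parent0')
    child = conn.network.create_port(network_id=tenant_net.id, name='sub0')

    # Turn the parent port into a trunk and attach the subport with its
    # own segmentation ID, as described above.
    trunk = conn.network.create_trunk(
        name='trunk0',
        port_id=parent.id,
        sub_ports=[{
            'port_id': child.id,
            'segmentation_type': 'vlan',
            'segmentation_id': 100,
        }],
    )

    # Boot the VM on the parent port; frames the guest tags with VLAN 100
    # are delivered to the subport's network.
    image = conn.image.find_image('rhel7-guest')
    flavor = conn.compute.find_flavor('m1.small')
    server = conn.compute.create_server(
        name='vlan-aware-vm',
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{'port': parent.id}],
    )

Inside the guest, the operator then configures a VLAN sub-interface (VLAN 100 in this sketch) on the single vNIC to reach the subport's network.
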
Version bumps for key virtual networking technologies

DPDK now version 16.11

DPDK 16.11 brings non-uniform memory access (NUMA) awareness to openvswitch-dpdk deployments. Virtual host devices comprise multiple different types of memory, which should all be allocated on the same physical NUMA node, and 16.11 uses NUMA awareness to achieve this in several ways. It removes the requirement for a single device-tracking node, which often created performance issues by splitting memory allocations when VMs were not on that node. NUMA IDs can now be dynamically derived, and DPDK uses that information to place all memory types on the same node. DPDK also sends the NUMA node information for a guest directly to Open vSwitch (OVS), allowing OVS to allocate memory on the correct node more easily. Finally, 16.11 removes the requirement for poll mode driver (PMD) threads to run on cores of the same NUMA node; PMDs can now be on the same node as a device's memory allocations.

Open vSwitch now version 2.6

OVS 2.6 lays the groundwork for the future performance and virtual networking requirements of NFV deployments, specifically in the ovs-dpdk deployment space. Immediate benefits come from feature currency and initial, basic OVN support. See the upstream release notes for full details.

CloudForms Integration

Red Hat OpenStack Platform 11 remains tightly integrated with CloudForms. It has been fully tested and supports features such as tenant mapping (all OpenStack tenants are found and listed as CloudForms tenants and kept in sync, so creating, updating or deleting a CloudForms tenant is reflected in OpenStack and vice versa), multisite support (one OpenStack region is represented as one cloud provider in CloudForms), and multiple-domain support (one domain is represented as one cloud provider in CloudForms). Cinder volume snapshot management can be done at the volume or instance level; a snapshot is a whole new volume, and you can instantiate a new instance from it, all from CloudForms.
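
The snapshot workflow that CloudForms drives can also be exercised directly against the OpenStack APIs. The sketch below, again using openstacksdk with illustrative names, snapshots a volume, creates a new volume from that snapshot, and boots an instance from the clone; the block device mapping follows the Nova "v2" format, and every name and size here is an assumption to adapt, not a prescribed procedure.

    import openstack

    conn = openstack.connect(cloud='mycloud')   # placeholder cloud name

    # Snapshot an existing volume; force=True permits snapshotting a
    # volume that is attached and in use.
    vol = conn.block_storage.find_volume('app-data')
    snap = conn.block_storage.create_snapshot(
        volume_id=vol.id, name='app-data-snap', force=True)
    conn.block_storage.wait_for_status(snap, status='available')

    # A snapshot is the basis for a whole new volume.
    clone = conn.block_storage.create_volume(
        size=vol.size, snapshot_id=snap.id, name='app-data-clone')
    conn.block_storage.wait_for_status(clone, status='available')

    # Boot a new instance from the cloned volume.
    flavor = conn.compute.find_flavor('m1.small')
    net = conn.network.find_network('mgmt-net')
    server = conn.compute.create_server(
        name='restored-vm',
        flavor_id=flavor.id,
        networks=[{'uuid': net.id}],
        block_device_mapping=[{
            'uuid': clone.id,
            'source_type': 'volume',
            'destination_type': 'volume',
            'boot_index': 0,
            'delete_on_termination': False,
        }],
    )
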
With OSP 10 we introduced the concept of the Long Life release. Long Life releases allow customers who are happy with their current release, and have no pressing need for specific feature updates, to remain supported for up to five years. We have designated every third release as Long Life; for instance, versions 10, 13, and 16 are Long Life, while versions 11, 12, 14 and 15 are sequential. Long Life releases allow upgrades to subsequent Long Life releases (for example, 10 to 13 without stepping through 11 and 12). They generally follow an 18-month cadence (three upstream cycles) and do require additional hardware for the upgrade process. Also, while procedures and tooling will be provided for this type of upgrade, it is important to note that some outages will occur.

Red Hat OpenStack Platform 11 is the first "sequential" release (i.e. N+1). It is supported for one year and is released immediately into the "Production Phase 2" release classification. All upgrades for this type of release must be done sequentially (i.e. N+1). Sequential releases feature tighter integration with upstream projects and allow customers to quickly test new features and to deploy using their own knowledge of continuous integration and agile principles. Upgrades are generally done without major workload interruption, and customers on this track typically have multiple datacenters and/or highly demanding performance requirements. For more details see Red Hat OpenStack Platform Lifecycle (detailed FAQ as PDF) and Red Hat OpenStack Platform Director Life Cycle.

Additional notable new features of version 11

A new Ironic inspector plugin can process Link Layer Discovery Protocol (LLDP) packets received from network switches during deployment. This can significantly help deployers understand the existing network topology and reduces trial and error by helping to validate the actual physical network setup presented to a deployment. All data is collected automatically and stored in an accessible format in the Undercloud's Swift installation.

There is now full support for deploying collectd agents to the Overcloud from director using composable roles. Performance monitoring is now easier, as collectd joins the other fully supported OpsTools services for availability monitoring (sensu) and log management (fluentd) present since version 10. And please remember, these are agents, not the full server-side implementations. Check out how to implement the server components easily with Ansible at the CentOS OpsTools Special Interest Group.

Additional features landing as Tech Preview

Octavia brings a robust and mature LBaaS v2 API driver to OpenStack and will eventually replace the legacy HAProxy namespace driver currently found in Newton. It will become not only a load balancing driver but also the load balancing API, hosting all the other drivers. Octavia is now a top-level project outside of Neutron; for more details see the excellent update talk from the recent OpenStack Summit in Boston. Octavia implements load balancing via a group of virtual machines (or containers or bare metal servers), known as amphorae, which are managed by a controller. The controller handles, among other things, the images used for the balancing engine; in Ocata, image support is introduced for Red Hat Enterprise Linux, CentOS and Fedora. Amphora images use HAProxy to implement load balancing. For full details of the design, consult the Component Design document. To let Red Hat OpenStack Platform users try out this new implementation in a non-production environment, operators can deploy it as a Technology Preview with director starting with version 11. Octavia's director-based implementation is currently scheduled for a z-stream release of Red Hat OpenStack Platform 11, meaning that while it won't be available on the day of the release, it will be added shortly afterwards. However, please track the relevant Bugzilla, as things may change at the last moment and affect this timing.

OpenDaylight

Red Hat OpenStack Platform 11 extends the OpenDaylight (ODL) support introduced in version 10 by adding deployment of the OpenDaylight Boron SR2 release through director using a composable role.

Ceph block storage replication

The Cinder RADOS block device (RBD) driver was updated to support RBD mirroring (promote/demote location), allowing customers to cover essential disaster recovery scenarios by more easily managing and replicating their data through RBD mirroring via the Cinder API.

Cinder Service HA

Until now, the cinder-volume service could run only in an Active/Passive HA configuration. In version 11, the Cinder service received numerous internal fixes around locks, job distribution, cleanup, and data corruption protection to allow for an Active/Active implementation. A highly available Cinder implementation can be useful for uptime reliability and throughput requirements.
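
Returning to the Octavia Technology Preview described above, the sketch below builds a simple HTTP load balancer through the LBaaS v2 API with openstacksdk. The subnet name, backend addresses and attribute spellings are illustrative assumptions (they follow the Octavia v2 API as exposed by the SDK's load_balancer proxy), so treat this as a sketch to adapt in a non-production environment, not a supported procedure.

    import openstack

    conn = openstack.connect(cloud='mycloud')   # placeholder cloud name

    vip_subnet = conn.network.find_subnet('web-subnet')   # illustrative name

    # Load balancer with a VIP on the chosen subnet.
    lb = conn.load_balancer.create_load_balancer(
        name='web-lb', vip_subnet_id=vip_subnet.id)

    # In practice, wait for the load balancer's provisioning_status to
    # become ACTIVE before adding the listener.

    # HTTP listener on port 80, attached to the load balancer.
    listener = conn.load_balancer.create_listener(
        name='web-listener', protocol='HTTP', protocol_port=80,
        load_balancer_id=lb.id)

    # Round-robin pool behind the listener, plus two backend members
    # (example documentation addresses).
    pool = conn.load_balancer.create_pool(
        name='web-pool', protocol='HTTP', lb_algorithm='ROUND_ROBIN',
        listener_id=listener.id)
    for address in ('192.0.2.10', '192.0.2.11'):
        conn.load_balancer.create_member(
            pool, address=address, protocol_port=80,
            subnet_id=vip_subnet.id)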

Scheme of work
