What You Need To Know About OpenStack
- by 7wData
OpenStack has evolved to a point where it is producing real benefits for IT organizations and service providers, but it is also surrounded by myths.
OpenStack is becoming a strategic choice for many organizations and service providers alike. Since its inception in 2010, OpenStack has experienced impressive growth as it marches toward becoming the de facto standard for new cloud deployments.
Essentially, OpenStack is an open source cloud platform that orchestrates compute, storage, and networking resources in a virtualized data center. It is built on commodity hardware and managed through web dashboards and APIs. The OpenStack community is growing constantly: today, more than 200 well-known vendors contribute to the code, including Cisco, Dell, HP, IBM, Intel, Oracle, Rackspace, Red Hat, and VMware.
451 Research estimates that the OpenStack ecosystem will grow nearly five-fold in revenue, from a market size of US$1.27 billion in 2015 to US$5.75 billion by 2020.
Why do organizations opt for OpenStack?
In a recent survey, 97% of respondents cited standardizing on a common open platform across multiple clouds among their top considerations. Avoiding vendor lock-in was another important factor, cited by 92%. Additional reasons include OpenStack compatibility requirements from customers, cloud-native app deployment, vendor partnerships, research, data governance, DevOps-friendliness, and self-service and open source qualities.
In its early days, OpenStack was primarily used for non-critical internal workloads such as test and development. Since then, companies have increasingly moved it into production environments, especially for cloud-native apps.
While various combinations of deployment models exist, for simplicity, some of the most common ones are outlined below.
On-Premises Distribution: On-premises is still the most frequently used deployment model. It can be implemented in a do-it-yourself (DIY) fashion, using either a home-brewed build or one of the vendor distributions. In this scenario, the entire OpenStack environment runs on premises. Internal IT is usually in charge of deployment, configuration, patch and release management, and troubleshooting. With ample OpenStack-experienced engineering resources in place, this deployment model can be cost effective. However, when resources are scarce or time-to-market matters a great deal, it might not be the model of choice.
Private Cloud in a Box: Some vendors offer appliances, which usually run on premises. These come with an embedded, vendor-supported OpenStack distribution specifically tailored to that setup. While appliances require less engineering effort than an on-premises distribution, they tend to be relatively costly. Since they are built on proprietary hardware and customized code, appliances also contradict the goal of avoiding vendor lock-in, which is what most customers are actually striving for.
Hosted Private Cloud: Unlike the DIY approach, this model leverages the third-party data center of a service provider, who hosts a private cloud. In this case, the service provider owns the infrastructure and operates the environment, governed by a service level agreement (SLA). Customers benefit from an existing IT landscape and the service provider's OpenStack expertise, without having to make CAPEX investments or build up in-depth expertise themselves. On the flip side, the service provider has design authority, which leads to vendor lock-in. Furthermore, a WAN connection is needed, and the customer's existing environment becomes fragmented and might be underutilized.
OpenStack-as-a-Service: Like a hosted private cloud, OpenStack-as-a-Service leverages a third-party provider who takes care of everything.
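Whichever deployment model is chosen, clients typically reach the cloud through the same OpenStack APIs, authenticating against the Keystone identity service. As a rough illustration (the endpoint URL, credentials, region, and cloud name below are placeholders, not values from any particular provider), a minimal clouds.yaml entry that tools such as the OpenStack CLI or openstacksdk read might look like this:

```yaml
# clouds.yaml — client-side connection profile for an OpenStack cloud.
# All values here are illustrative placeholders.
clouds:
  mycloud:
    auth:
      auth_url: https://keystone.example.com:5000/v3   # Keystone identity endpoint
      username: demo
      password: secret
      project_name: demo
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
```

With such a profile in place, the same client tooling works unchanged whether the cloud behind the endpoint is on-premises, an appliance, hosted, or consumed as a service — which is one practical payoff of standardizing on a common open platform.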