As containers have become more important to businesses across the globe, it became necessary to create a system that would allow containers to scale out to meet the needs of enterprise-level deployments. That’s where Kubernetes comes into play.
Unlike Docker, Kubernetes is a very robust ecosystem. Instead of deploying a single container, Kubernetes enables you to deploy multiple containers to multiple hosts, making it ideal for larger deployments and load balancing.
This smart person’s guide is an easy way to get up to speed on Kubernetes. We’ll update this guide periodically when news about Kubernetes is released.
- What is Kubernetes? Kubernetes is an open source project that enables the management of large-scale container deployment.
- Why does Kubernetes matter? Containerized applications are one of the hottest technologies on the market today. If you’re looking to roll out large-scale, highly-available, load-balanced clusters of containers, Kubernetes might be the right tool for you.
- Who does Kubernetes affect? Kubernetes affects companies that want to roll out massive containerized applications, as well as clients, customers, consumers…anyone that would benefit from highly-available services. Kubernetes also helps developers build distributed applications and makes it much easier for IT operators to manage scalable infrastructure for applications.
- When is Kubernetes happening? Kubernetes was first announced in mid-2014 and was first released on July 21, 2015.
- How do I start using Kubernetes? Install the system on a supported platform such as Red Hat, SUSE, CentOS, Fedora Server, Ubuntu Server, etc., and you’ll have access to kubernetes, kubernetes-client, kubernetes-master, kubernetes-node, and more.
What is Kubernetes?
Kubernetes is an open source system that allows you to run Docker and other containers across multiple hosts, effectively offering co-location of containers, service discovery, and replication control. Often abbreviated K8s (the 8 stands for the eight letters between the K and the s), it was designed by Google and donated to the Cloud Native Computing Foundation.
The primary functions of Kubernetes are:
- schedule, start, manage, and scale containers across multiple hosts; and
- provide a higher-level API that defines how containers are logically grouped into pools and how load is balanced across them.
Its features include the following.
- Deploy containers and manage rollout control: With this complex system you can describe your containers and define how many you want in a single deployment. Kubernetes will not only manage the running of those containers (even across multiple hosts), but it will also handle deploying changes (e.g., updating images, changing variables, etc.) to your containers.
- Resource bin packing: Declare minimum and maximum compute resources (CPU and memory) for each container; the scheduler uses these declarations to pack containers efficiently onto hosts.
- Built-in service discovery: Automatic exposure of containers to the internet or other containers in the Kubernetes cluster.
- Autoscaling: Kubernetes can automatically scale the number of running containers up or down based on metrics such as CPU utilization, while load balancing traffic across matching containers.
- Heterogeneous clusters: Kubernetes allows you to build a cluster from a mixture of virtual machines and bare-metal servers, whether in the cloud or in your company data center.
- Persistent storage: Kubernetes supports persistent volumes backed by Amazon Web Services EBS, Google Cloud Platform persistent disks, and more; vendors including Red Hat, Dell EMC, and NetApp also provide persistent storage for Kubernetes.
- High availability: Features such as multi-master support and cluster federation allow clusters to be linked together for load balancing.
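As a sketch of the resource bin packing feature above, a pod spec can declare per-container minimums (requests) and maximums (limits); the scheduler uses the requests when deciding which host a container lands on. The pod name and image here are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web            # hypothetical pod name
spec:
  containers:
  - name: web
    image: nginx:1.13  # hypothetical image
    resources:
      requests:        # minimum guaranteed resources, used for scheduling/bin packing
        cpu: "250m"
        memory: "64Mi"
      limits:          # maximum the container is allowed to consume
        cpu: "500m"
        memory: "128Mi"
```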
Kubernetes allows you to easily:
- deploy containerized applications quickly and predictably;
- scale containerized applications on the fly;
- seamlessly roll out new features to your containerized applications; and
- optimize your hardware specifically for your containerized applications.
Kubernetes, at its minimum, can schedule and run containerized applications on clusters of physical machines, virtual machines, or a combination of the two; this allows developers to leave behind the traditional method of working against individual physical and virtual machines. Although this can be achieved with the simpler Docker Swarm, Kubernetes allows the deployment of much larger clusters, which can include Docker containers. In other words, create your Docker containers and then deploy them over a massive, load-balanced cluster with Kubernetes.
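That workflow can be made concrete with a minimal Deployment manifest, a sketch in which the names and image are hypothetical; it asks Kubernetes to keep three replicas of a container running across the cluster and to roll out changes whenever the spec is updated:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app          # hypothetical application name
spec:
  replicas: 3           # Kubernetes keeps three copies running across the cluster
  selector:
    matchLabels:
      app: my-app       # must match the pod template labels below
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:1.0   # hypothetical image; changing this tag triggers a rolling update
```

Applying an edited image tag with `kubectl apply` performs a rolling update, and `kubectl scale` changes the replica count without touching the file.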
Kubernetes consists of the following components.
- Pods: Containers are placed into pods to be managed by Kubernetes.
- Labels and selectors: Key-value pairs used to identify and group resources within Kubernetes.
- Controllers: A reconciliation loop that drives actual cluster state toward the desired cluster state.
- Services: A way to identify elements used by applications (name-resolution, caching, etc.).
- Kubernetes control plane: Main controlling unit of the cluster that manages workload and directs communication across the system.
- etcd: Persistent, lightweight, distributed key-value data store.
- API server: Serves the Kubernetes API using JSON over HTTP.
- Scheduler: Pluggable component that selects which node a pod should run on based on resource availability.
- Controller manager: The process that runs core Kubernetes controllers, such as the ReplicationController and the DaemonSet controller.
- Kubelet: Responsible for the running state of each node (starting, stopping, and maintaining application containers).
- Kube-proxy: The implementation of a network proxy and load balancer that supports the service abstraction.
- cAdvisor: An agent that monitors and gathers resource usage.
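As a sketch of how pods, labels, selectors, and services fit together (the names here are hypothetical), the Service below uses a label selector to discover matching pods and balance traffic across them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web          # label the Service selector matches on
spec:
  containers:
  - name: web
    image: nginx      # hypothetical image
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # selects every pod carrying this label
  ports:
  - port: 80          # traffic to the Service is spread across matching pods
```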
Why does Kubernetes matter?
Containers are a powerful and flexible way to safely and reliably deploy applications and microservices to extend and expand your company’s services. When the need grows beyond a standard Docker deployment or you need to deploy/manage multiple containerized applications from different systems (such as Docker), you need a way to deploy and control such systems.
With the help of Docker Swarm, you can deploy containerized applications over a cluster, but you’re limited to Docker containers and to the architecture, security model, and registry of Docker, Inc. With Kubernetes, those containers can come from a number of sources (Docker, Windows Server Containers, etc.), making Kubernetes incredibly flexible, and significantly more complex.
Who does Kubernetes affect?
Kubernetes affects any company that needs to deploy massive rollouts of containerized applications and services; this means anyone involved with the deployment should be familiar with the Kubernetes tools and Docker. And considering Kubernetes is a rather complex system, administrators will need to do a fair amount of homework in order to successfully implement the technology.
The effect of Kubernetes goes well beyond those that administer the system—customers, clients, staff, and consumers…no one is immune to the effect of containerized applications. When we’re talking about business and enterprise-level deployments, Kubernetes takes center stage.
Developers are also affected by Kubernetes. As of April 7, 2017, Kubernetes had 1,137 contributors from across varying industries, with 31 branches and 46,332 commits on GitHub. Kubernetes also has more developers working on it than Docker Swarm, Mesos, and Cloud Foundry Diego combined.
When is Kubernetes happening?
Kubernetes was created by Joe Beda, Brendan Burns, and Craig McLuckie, who were soon joined by other Google engineers, and was first announced by Google in mid-2014. The original name for Kubernetes was Seven of Nine (after the Borg character from Star Trek: Voyager). Once the Google lawyers swayed the original developers away from that name, they agreed upon Kubernetes.
Kubernetes v1.0 was released July 21, 2015 and very quickly wound up in the top 0.01% in stars and number 1 in terms of activity on GitHub. That translates to significant development on the project.
How do I start using Kubernetes?
Kubernetes can be deployed on numerous platforms.
For a full list of vendors/platforms supporting Kubernetes deployment, check out this spreadsheet.
You will need to set up:
- Kubernetes Master: This is where you direct API calls to services that control the activities of the pods, replication controllers, services, nodes, and other components of the cluster.
- Kubernetes Node(s): This system provides the run-time environments for the containers.
The Master and Node can be on the same system, but traditionally they will be separated.
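Clients reach the Master through its API server; a kubeconfig file tells kubectl where that is. The fragment below is a sketch, with a hypothetical address and the insecure HTTP port used by early Kubernetes releases:

```yaml
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: http://master.example.com:8080   # hypothetical Master API server address
contexts:
- name: default
  context:
    cluster: local
current-context: default                      # kubectl commands use this context by default
```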
You will also need your containers. The most widely used containers deployed by Kubernetes come from Docker, which makes sense considering Docker containers are the most widely used on the planet. Kubernetes must be installed on the Master and all Nodes. If you’re working with Red Hat, you can install Kubernetes with the command:
yum install docker kubernetes-client kubernetes-master kubernetes-node etcd
Once Kubernetes is installed, follow these steps.
- Configure the Kubernetes service on the Master and each Node.
- Configure the kubelet and start the kubelet and proxy.
- Configure flannel to overlay the Docker network in /etc/sysconfig/flanneld.
- Start the appropriate services on the node(s).
- Configure kubectl.
- Check to make sure the cluster can see the node.
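For step 3 above, /etc/sysconfig/flanneld holds the settings the flanneld service reads at startup. The fragment below is a sketch for a single-host test setup; the etcd endpoint is assumed to be local, and the exact variable names can vary between flannel versions:

```shell
# /etc/sysconfig/flanneld -- read by the flanneld service at startup.
# etcd endpoint that stores the overlay network configuration (assumed local here).
FLANNEL_ETCD_ENDPOINTS="http://127.0.0.1:2379"
# etcd key prefix under which flannel looks up its network configuration.
FLANNEL_ETCD_PREFIX="/atomic.io/network"
```

After editing this file, restart flanneld so the Docker network is overlaid before the node services come up.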