SIGNIFICANCE OF KUBERNETES TO IT WORLD
What is Kubernetes?
The name “Kubernetes” comes from the Greek word for helmsman or pilot. The abbreviation K8s is derived by replacing the eight letters of “ubernete” with the digit 8.
To begin to understand the usefulness of Kubernetes, we have to first understand two concepts: immutable infrastructure and containers.
- Immutable infrastructure is a practice where servers, once deployed, are never modified. If something needs to change, you never do so directly on the server. Instead, you build a new server from a base image that has all the needed changes baked in, and simply replace the old server with the new one, with no additional modification.
- Containers offer a way to package code, runtime, system tools, system libraries, and configs together into a lightweight, standalone executable unit. This way, your application behaves the same every time, no matter where it runs (e.g., Ubuntu, Windows, etc.). Containerization is not a new concept, but it has gained immense popularity with the rise of microservices and Docker.
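To make the container idea concrete, here is a minimal Dockerfile sketch that packages a small application with its runtime and dependencies (the application and file names are hypothetical):

```dockerfile
# Hypothetical example: bundle a small Python app, its runtime, and its deps
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]
```

Building this produces an image that runs identically on any host with a container runtime, which is exactly the property immutable infrastructure relies on.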
Armed with those concepts, we can now define Kubernetes as a container and microservice platform that orchestrates computing, networking, and storage infrastructure workloads. Because it doesn’t limit the types of apps you can deploy (any language works), Kubernetes extends how we scale containerized applications so that we can enjoy all the benefits of a truly immutable infrastructure. The general rule of thumb for K8s: if your app fits in a container, Kubernetes will deploy it.
Kubernetes has become the standard for running containerized applications in the cloud, with the main cloud providers (AWS, Azure, GCE, IBM and Oracle) now offering managed Kubernetes services.
The Kubernetes project was open-sourced by Google in 2014, after Google had run containerized production workloads at scale for more than a decade. Kubernetes provides the ability to run dynamically scaling, containerized applications, with an API for management. It is a vendor-agnostic container management tool, minimizing cloud computing costs while simplifying the running of resilient and scalable applications.
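As a sketch of what this looks like in practice, a containerized application is described declaratively and handed to the API; Kubernetes then keeps the desired number of replicas running. The image and names below are hypothetical:

```yaml
# Minimal Deployment sketch; image, names, and port are hypothetical
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                     # Kubernetes keeps three Pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0
        ports:
        - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` is all it takes; if a Pod or node fails, Kubernetes replaces the Pod to restore the declared state.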
Features Of Kubernetes
Automated rollouts and rollbacks
Kubernetes progressively rolls out changes to your application or its configuration, while monitoring application health to ensure it doesn’t kill all your instances at the same time. If something goes wrong, Kubernetes will roll back the change for you. Take advantage of a growing ecosystem of deployment solutions.
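The pace of a rollout can be tuned on the Deployment itself. A sketch of the relevant fragment (values are illustrative, not recommendations):

```yaml
# Sketch: rolling-update settings on a Deployment spec
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod down during the rollout
      maxSurge: 1         # at most one extra Pod above the desired count
```

If a rollout misbehaves, `kubectl rollout undo deployment/<name>` reverts to the previous revision.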
Service discovery and load balancing
No need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
Kubernetes can also route service traffic based upon cluster topology.
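A Service is how this looks in practice: it gives a set of Pods a stable virtual IP and DNS name and load-balances across them. Names and ports below are hypothetical:

```yaml
# Sketch: a Service fronting the Pods labeled app=web
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web          # traffic is load-balanced across Pods with this label
  ports:
  - port: 80          # port clients connect to
    targetPort: 8080  # port the containers listen on
```

Inside the cluster, clients simply connect to the DNS name `web` in the same namespace; Pods can come and go without clients noticing.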
Storage orchestration
Automatically mount the storage system of your choice, whether from local storage, a public cloud provider such as GCP or AWS, or a network storage system such as NFS, iSCSI, Gluster, Ceph, Cinder, or Flocker.
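Applications request storage abstractly through a PersistentVolumeClaim, without knowing which backing system fulfills it. A minimal sketch (name and size are illustrative):

```yaml
# Sketch: claim 10Gi of storage; the cluster binds it to whatever backend it has
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]   # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi
```

A Pod then mounts the claim by name; swapping the storage backend does not require changing the application.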
Secret and configuration management
Deploy and update secrets and application configuration without rebuilding your image and without exposing secrets in your stack configuration.
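A sketch of how configuration and secrets live outside the image (all names and values here are hypothetical placeholders):

```yaml
# Sketch: configuration kept separate from the container image
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
stringData:
  DB_PASSWORD: change-me   # placeholder; real values come from a secure source
```

Containers consume these as environment variables or mounted files, so updating configuration or rotating a secret does not require rebuilding the image.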
Automatic bin packing
Automatically places containers based on their resource requirements and other constraints, while not sacrificing availability. Mix critical and best-effort workloads in order to drive up utilization and save even more resources.
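Bin packing is driven by the resource requests declared on each container; the scheduler packs Pods onto nodes using these numbers. A container-spec fragment as a sketch (values are illustrative):

```yaml
# Sketch: requests guide scheduling; limits cap what the container may use
containers:
- name: web
  image: example.com/web:1.0   # hypothetical image
  resources:
    requests:
      cpu: 250m        # the scheduler reserves this much when placing the Pod
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi
```

Pods with no requests are treated as best-effort and can be packed alongside critical workloads to raise node utilization.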
Batch execution
In addition to services, Kubernetes can manage your batch and CI workloads, replacing containers that fail, if desired.
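Batch work is expressed as a Job, which runs Pods to completion and retries on failure. A minimal sketch (image and names hypothetical):

```yaml
# Sketch: a Job runs its Pod to completion, retrying failed runs
apiVersion: batch/v1
kind: Job
metadata:
  name: report
spec:
  backoffLimit: 3          # retry a failed Pod up to three times
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: report
        image: example.com/report:1.0
```

CronJobs build on the same idea for scheduled, recurring runs.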
IPv4/IPv6 dual-stack
Allocation of IPv4 and IPv6 addresses to Pods and Services.
How does Kubernetes work with Docker?
Kubernetes supports several container runtimes, and Docker is just one of them. The two technologies work well together: Docker containers are an efficient way to distribute packaged applications, and Kubernetes is designed to coordinate and schedule those applications.
Kubernetes’ increasing adoption is showcased by a number of influential companies that have integrated the technology into their services.
1. BOSE
“At Bose we’re building an IoT platform that has enabled our physical products. If it weren’t for Kubernetes and the rest of the CNCF projects being free open source software with such a strong community, we would never have achieved scale, or even gotten to launch on schedule.” — JOSH WEST, LEAD CLOUD ENGINEER, BOSE
A household name in high-quality audio equipment, Bose has offered connected products for more than five years, and as that demand grew, the infrastructure had to change to support it. “We needed to provide a mechanism for developers to rapidly prototype and deploy services all the way to production pretty fast,” says Lead Cloud Engineer Josh West. In 2016, the company decided to start building a platform from scratch. The primary goal: “To be one to two steps ahead of the different product groups so that we are never scrambling to catch up with their scale,” says Cloud Architecture Manager Dylan O’Mahony.
“Everybody on the team thinks in terms of automation, leaning out the processes, getting things done as quickly as possible. When you step back and look at what it means for a 50-plus-year-old speaker company to have that sort of culture, it really is quite incredible, and I think the tools that we use and the foundation that we’ve built with them is a huge piece of that.” — DYLAN O’MAHONY, CLOUD ARCHITECTURE MANAGER, BOSE
From the beginning, the team knew it wanted a microservices architecture. After evaluating and prototyping a couple of orchestration solutions, the team decided to adopt Kubernetes for its scaled IoT Platform-as-a-Service running on AWS. The platform, which also incorporated Prometheus monitoring, launched in production in 2017, serving over 3 million connected products from the get-go. Bose has since adopted a number of other CNCF technologies, including Fluentd, CoreDNS, Jaeger, and OpenTracing.
With about 100 engineers onboarded, the platform is now enabling 30,000 non-production deployments across dozens of microservices per year. In 2018, there were 1250+ production deployments. Just one production cluster holds 1,800 namespaces and 340 worker nodes. “We had a brand new service taken from concept through coding and deployment all the way to production, including hardening, security testing and so forth, in less than two and a half weeks,” says O’Mahony.
2. THE NEW YORK TIMES
“I think once you get over the initial hump, things get a lot easier and actually a lot faster.” — DEEP KAPADIA, EXECUTIVE DIRECTOR, ENGINEERING AT THE NEW YORK TIMES
When the company decided a few years ago to move out of its data centers, its first deployments on the public cloud were smaller, less critical applications managed on virtual machines. “We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center,” says Deep Kapadia, Executive Director, Engineering at The New York Times. Kapadia was tapped to lead a Delivery Engineering Team that would “design for the abstractions that cloud providers offer us.”
The team decided to use Google Cloud Platform and its Kubernetes-as-a-service offering, GKE.
Speed of delivery increased. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was “just a few seconds to a couple of minutes,” says Engineering Manager Brian Balser. Adds Tony Li, a Site Reliability Engineer: “Teams that used to deploy on weekly schedules or had to coordinate schedules with the infrastructure team now deploy their updates independently, and can do it daily when necessary.” Adopting Cloud Native Computing Foundation technologies allows for a more unified approach to deployment across the engineering staff, and portability for the company.
Founded in 1851 and known as the newspaper of record, The New York Times is a digital pioneer: its first website launched in 1996, before Google even existed. After deciding a few years ago to move out of its private data centers, including one located in the pricey real estate of Manhattan, the company recently took another step into the future by going cloud native.
At first, the infrastructure team “managed the virtual machines in the Amazon cloud, and they deployed more critical applications in our data centers and the less critical ones on AWS as an experiment,” says Deep Kapadia, Executive Director, Engineering at The New York Times. “We started building more and more tools, and at some point we realized that we were doing a disservice by treating Amazon as another data center.”
To get the most out of the cloud, Kapadia was tapped to lead a new Delivery Engineering Team that would “design for the abstractions that cloud providers offer us.” In mid-2016, they began looking at the Google Cloud Platform and its Kubernetes-as-a-service offering, GKE.
At the time, says team member Tony Li, a Site Reliability Engineer, “We had some internal tooling that attempted to do what Kubernetes does for containers, but for VMs. We asked why are we building and maintaining these tools ourselves?”
In early 2017, the first production application — the nytimes.com mobile homepage — began running on Kubernetes, serving just 1% of the traffic. Today, almost 100% of the nytimes.com site’s end-user facing applications run on GCP, with the majority on Kubernetes.
The team found that the speed of delivery was immediately impacted. “Deploying Docker images versus spinning up VMs was quite a lot faster,” says Engineering Manager Brian Balser. Some of the legacy VM-based deployments took 45 minutes; with Kubernetes, that time was “just a few seconds to a couple of minutes.”
The plan is to get as much as possible, not just the website, running on Kubernetes, and beyond that, moving toward serverless deployments. For instance, The New York Times crossword app was built on Google App Engine, which has been the main platform for the company’s experimentation with serverless. “The hardest part was getting the engineers over the hurdle of how little they had to do,” Chief Technology Officer Nick Rockwell recently told The CTO Advisor. “Our experience has been very, very good. We have invested a lot of work into deploying apps on container services, and I’m really excited about experimenting with deploying those on App Engine Flex and AWS Fargate and seeing how that feels, because that’s a great migration path.”
There are some exceptions to the move to cloud native, of course. “We have the print publishing business as well,” says Kapadia. “A lot of that is definitely not going down the cloud-native path because they’re using vendor software and even special machinery that prints the physical paper. But even those teams are looking at things like App Engine and Kubernetes if they can.”
Kapadia acknowledges that there was a steep learning curve for some engineers, but “I think once you get over the initial hump, things get a lot easier and actually a lot faster.”