
Containers, Kubernetes, and Istio: Are You Keeping Up with the Latest in Integration?

Blog: The TIBCO Blog

Reading Time: 5 minutes

The rapid evolution of containers over the past two decades has changed the dynamics of modern IT infrastructure, and it started even before Docker went mainstream in 2013, back when server-level virtualization took the form of VMs.

Flash forward to Docker: an open-source, easy-to-use platform for packaging, provisioning, and running containers. Adoption was impressive: within a month of its first test release, Docker was the playground of 10,000 developers. Docker alone, however, struggled with container management at scale. In June 2014, Google introduced Kubernetes to solve a real problem engineers were having: managing the collective life cycles of their increasingly containerized workloads.

According to Docker, “A container is a unit of software that packages code and its dependencies, so the application runs quickly and reliably across computing environments.”

The success of containers and their high levels of adoption in production came with the success of the cloud. Emerging architectural styles like microservices and DevOps practices could benefit from containers, since they were the ideal technology for IT teams to develop, test, and deploy large-scale applications.

With microservices, many applications benefit from being broken down into small, single-purpose services that communicate with each other through APIs so that each microservice can be updated or scaled independently. 

Why Kubernetes?

When Edison invented the light bulb, it needed to be hardwired into the lamp. He brilliantly solved this by inventing the Edison screw, which became a well-known standard that allows almost any bulb to be twisted into any light fixture.

Similarly, how do you get containerized services to work in concert as an application? Since one container may execute one or multiple services, how should engineers coordinate the many instances of containers and ensure that all services are monitored and available? How do you scale services when a higher load is stressing the application? Typical scenarios are promotional sales like Black Friday or seasonal discounts that increase digital traffic and orders, which the applications must be able to process under the added load.

It’s in scenarios like these that an orchestration system for containers is needed.

2022, the Year of Kubernetes

Kubernetes is an open-source container orchestration system used for managing, scaling, and automating computer application deployment. While Kubernetes, commonly referenced as K8s, is a relatively new technology, it has seen rapid adoption among IT organizations around the world. According to Statista, research has highlighted that 61 percent of worldwide respondents have already adopted Kubernetes, and 30 percent are planning to adopt it in the next 12 months; only 9 percent are unsure.

According to Google Trends, Kubernetes is at its highest popularity since 2014. Kubernetes is similar to the Edison screw; it enables containers to be used in large-scale applications across many enterprises. 

There are other container orchestration systems like Docker Swarm, but in the last few years, Kubernetes has seen the highest rate of adoption—becoming the preferred choice of many IT organizations. It’s also worth noting that it has a steep learning curve, and management of the Kubernetes master takes specialized knowledge.

What is Kubernetes? 

Commonly, developers write applications that are packaged as containers. Once a container (or application) is deployed in production, if one container fails, another needs to take its place to ensure business continuity. Kubernetes handles this changeover automatically and efficiently, restarting or replacing containers that fail and killing those that don’t respond to a health check. It also monitors the cluster and decides where to launch containers based on the resources currently being consumed.
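As an illustrative sketch (the names and image below are hypothetical), a liveness probe in a pod spec is how you tell Kubernetes what that health check looks like; if the probe fails repeatedly, Kubernetes kills and restarts the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: example.com/web:1.0     # hypothetical image
      ports:
        - containerPort: 8080
      livenessProbe:                 # failing this check triggers a restart
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
```

Here Kubernetes calls the container’s /healthz endpoint every 10 seconds and restarts the container if it stops responding.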

How Does Kubernetes Work?

To keep it simple, a Kubernetes cluster consists of a master node, worker nodes, and pods. The master node, known as the Control Plane, has a collection of components to control, schedule, and communicate with the worker nodes; it takes care of the containers’ lifecycle.

Keep in mind that the master doesn’t execute any user applications! The worker nodes, often referred to as the Data Plane, constantly exchange information with the master node to check whether there’s new work to do. Then we have pods: a pod works as a wrapper for containers and is hosted on the worker nodes. If a developer needs to scale an application, they add or remove pods. A best practice is having one container per pod, since Kubernetes manages pods rather than managing containers directly.
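These ideas can be sketched in a minimal Deployment manifest (all names are illustrative): the replicas field tells Kubernetes how many identical pods to keep running, each wrapping a single container:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                 # scale by raising or lowering this number
  selector:
    matchLabels:
      app: orders
  template:                   # pod template: one container per pod
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: example.com/orders:1.0   # hypothetical image
```

Scaling up for a Black Friday-style load spike is then a matter of editing replicas (or running kubectl scale deployment orders --replicas=5); the master schedules the extra pods onto worker nodes.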

A Kubernetes cluster has at least one master node and one worker node. In the below image, we have summarized the above concepts in a simplified schema.

A simplified Kubernetes Cluster example

Are containers and Kubernetes enough to create microservice-based applications? Yes, but there are some drawbacks. Kubernetes does not automatically route a request to another pod if the pod it forwarded to is not serving properly. Each pod has a health check mechanism, and when a pod has health problems, Kubernetes simply restarts it instead of trying another one.

In short, Kubernetes doesn’t fully solve every problem on its own.

Completing the Ecosystem with Istio

Organizations are always looking for the best tools available, not only to scale up their applications and runtime environments but also to fully optimize how traffic flows between microservices, reducing manual effort as much as possible.

Istio is a perfect addition to further modernize microservices-based apps and backends by securing, connecting, and monitoring the functions, containers, and other moving parts of the system. Istio builds on Kubernetes container orchestration by injecting a sidecar proxy container into each pod that adds security, management, and monitoring.
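As a sketch of how that injection is commonly enabled (assuming Istio’s default sidecar injector is installed, and with a hypothetical namespace name), labeling a namespace makes every pod created in it automatically receive the sidecar proxy:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: shop                    # hypothetical namespace
  labels:
    istio-injection: enabled    # new pods here get the sidecar injected
```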

The pros of adding Istio can be summarized as follows:

Debugging: Istio shows how errors are related in a waterfall-type diagram, speeding up the debugging process and helping pinpoint the problematic code.

Observability: Istio lets you monitor what’s happening and view information like latency, time in service, errors per traffic, and other system-health metrics in a visual dashboard, functionality Kubernetes does not provide natively.

Balancing: Istio can balance the load across available resources and route traffic according to the fastest route—it’s like the mobile maps app Waze; it creates the best route.

Circuit Breaking: Istio prevents the system from crashing when a service is overloaded or down. It allows services to recover and deal with the pile-up of requests.

Security: It implements authentication, authorization, and audit by default. 
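As a hedged sketch of the balancing and circuit-breaking points above (the host and thresholds are illustrative), an Istio DestinationRule can set the load-balancing policy and temporarily eject unhealthy pods from the pool:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders-circuit-breaker
spec:
  host: orders.shop.svc.cluster.local   # hypothetical service
  trafficPolicy:
    loadBalancer:
      simple: LEAST_REQUEST             # route to the least-loaded instance
    outlierDetection:                   # circuit breaking
      consecutive5xxErrors: 5           # eject after 5 consecutive 5xx errors
      interval: 30s
      baseEjectionTime: 60s
```

With this in place, a pod that keeps returning errors is taken out of rotation for a while, giving it time to recover instead of letting requests pile up against it.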

According to Istio, “Istio brings standard, universal traffic management, telemetry, and security to complex deployments and helps organizations to run distributed, microservices-based apps anywhere.”

Should Your IT Department Implement Containerization?

Forrester predicts that in 2023 “organizations will accelerate investment in K8s as a distributed compute backbone for both current applications and new workloads that can be run more efficiently in K8s’ environments.” Additionally, “K8s will also propel application modernization with DevOps automation, low-code capabilities, and site reliability engineering.” Forrester recommends that cloud leaders should accelerate the shift to containers and Kubernetes.

There are many benefits to moving your application development to the cloud with containers and orchestration systems like Kubernetes with the addition of Istio. Traditionally, applications and the tooling that support them have been closely tied to the underlying infrastructure, so it was costly to use other deployment models despite their potential advantages. This meant that applications became dependent on a particular environment in several respects, including performance issues related to a specific network architecture. 

Kubernetes reduces infrastructure lock-in by providing core capabilities for containers without imposing restrictions. It achieves this by combining features within the Kubernetes platform, including pods and services. Istio adds observability, security, and reliability to distributed applications.

So what are you waiting for? It’s time to adopt these key powerhouses for container-based applications and manage them at scale.

Avoid Vendor Lock-in with TIBCO

TIBCO adopts a cloud-native approach with the freedom to select your cloud provider and orchestrate your application containers with Kubernetes in a rapidly evolving technology world. We help you remain flexible while avoiding the dreaded vendor lock-in. TIBCO Cloud Mesh is a service infrastructure that enables you to compose solutions across different TIBCO Cloud domains.

Want to learn more? Check out how TIBCO Integration solutions can fuel your container journey. And to learn more about containers and cloud-native architectures on TIBCO, connect with your peers on the TIBCO Community.

The post Containers, Kubernetes, and Istio: Are You Keeping Up with the Latest in Integration? first appeared on The TIBCO Blog.
