How to select the best modernization approach on AWS – Part 2
Blog: Capgemini CTO Blog
In the previous blog in this series, I discussed the advantages of cloud-native app development and Capgemini's approach to app migration and modernization. In this blog, I will provide an overview of the different app deployment options available on AWS – serverless, containerized, and hybrid – and present a decision framework to help you pick an option with all aspects considered.
Cloud-native apps provide opportunities for cost optimization. Instead of pre-provisioning compute or data capacity for a fixed performance target, which locks up capital ahead of time, cloud-native apps scale up and down on demand, paying only for the services and infrastructure actually in use. This approach can yield tremendous cost savings. In addition, there is less need to create fixed-size, long-lived test environments in the cloud: cloud-native development enables, and benefits from, ephemeral environments that can be created and destroyed at the push of a button, which has huge cost implications.
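To make the scaling argument concrete, here is a back-of-the-envelope comparison between a fleet sized permanently for peak load and capacity that follows demand. All rates and the demand profile are illustrative assumptions, not AWS pricing:

```python
# Illustrative (not real AWS pricing): compare a fixed-size fleet provisioned
# for peak load 24/7 against capacity that scales with actual demand.
HOURLY_RATE = 0.10      # assumed cost per instance-hour
PEAK_INSTANCES = 20     # fleet sized for peak load
HOURS_PER_MONTH = 730

# Assumed demand profile: peak load 4 hours a day, light load otherwise
demand = {
    "peak":     (4 * 30, 20),                  # (hours, instances needed)
    "off_peak": (HOURS_PER_MONTH - 4 * 30, 4),
}

fixed_cost = PEAK_INSTANCES * HOURS_PER_MONTH * HOURLY_RATE
on_demand_cost = sum(hours * instances * HOURLY_RATE
                     for hours, instances in demand.values())

print(f"fixed:     ${fixed_cost:,.2f}/month")
print(f"on-demand: ${on_demand_cost:,.2f}/month")
```

Under these assumed numbers, scaling with demand costs roughly a third of provisioning for peak; the actual ratio depends entirely on how spiky the real load is.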
Cloud Native on AWS
AWS, being one of the most mature public cloud platforms, offers companies the distinct advantage of a wide range of choices. Apps can be built and deployed as pure AWS Lambda services and/or as containerized microservices on popular orchestration frameworks such as Kubernetes. Additionally, AWS offers mechanisms to mix and match deployment models to optimize for cost, flexibility, maturity, and other constraints. We will provide a selection framework for each of these options.
Serverless with AWS Lambda
The serverless portfolio of AWS services offers multiple advantages: improved agility, lower total cost of ownership of applications, no hardware to procure and maintain, and no runtimes to manage. In addition, microservices deployed as AWS Lambda functions offer the unique advantage of flexible scaling configured at the microservice level, which is typically not possible in monolithic applications. Organizations also benefit from higher productivity and better coding practices due to simpler code templating.
Despite these key advantages, serverless has certain important drawbacks. AWS Lambda sizes the underlying process based on the configured memory allocation; users cannot pick CPU and RAM ratings independently, which can be limiting for certain types of workloads. Because the service must boot the runtime before it can execute the serverless code, some cold-start latency is to be expected. This latency depends on the choice of runtime and the CPU capacity auto-selected from the memory setting, among other parameters outside the customer's control. It is important to verify that Lambda can satisfy the application's latency requirements before selecting it.
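For readers new to Lambda, a function is just a handler that the service invokes; memory (and with it CPU share) is set on the function's configuration, not in code. A minimal sketch, with an illustrative event shape:

```python
# Minimal AWS Lambda handler (Python runtime). The event shape and names
# are illustrative. Memory, and hence CPU share, is configured on the
# function itself (e.g. via the console or CLI), not in the handler code.
import json

def handler(event, context):
    # Code at module scope runs once per cold start, so heavy
    # initialization here amplifies cold-start latency.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

The memory setting that drives CPU allocation is changed with, for example, `aws lambda update-function-configuration --function-name my-fn --memory-size 1024` (function name hypothetical).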
Teams planning to adopt serverless must also be familiar with common anti-patterns such as cyclic Lambda calls, processes that run beyond the allowed Lambda execution duration limit, high-latency external calls, and IP range underestimation. With appropriate understanding, planning, design, and implementation, these problems can be fully circumvented.
It is relatively hard to estimate the running cost of serverless applications accurately, so it is important to build in a larger margin of safety to compensate. Before choosing a model, it is always recommended to compare the total cost of ownership of the application between a purely serverless model and an alternative, using a realistic view of the expected load.
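Such a comparison can start as simple arithmetic: Lambda bills per request and per GB-second of compute, while an always-on instance bills per hour. The rates below are illustrative assumptions (roughly the shape of published pricing, but check current figures for your region), and the instance rate is invented for the example:

```python
# Back-of-the-envelope TCO comparison: Lambda vs. an always-on instance.
# All rates and load figures are illustrative assumptions; substitute
# current AWS pricing and a realistic load profile before deciding.
GB_SECOND_RATE = 0.0000166667    # assumed cost per GB-second of Lambda compute
REQUEST_RATE = 0.20 / 1_000_000  # assumed cost per Lambda invocation
INSTANCE_HOURLY = 0.05           # assumed cost of a comparable instance

def lambda_monthly_cost(requests, avg_ms, memory_mb):
    gb_seconds = requests * (avg_ms / 1000) * (memory_mb / 1024)
    return requests * REQUEST_RATE + gb_seconds * GB_SECOND_RATE

def instance_monthly_cost(count, hours=730):
    return count * hours * INSTANCE_HOURLY

low = lambda_monthly_cost(requests=1_000_000, avg_ms=200, memory_mb=512)
high = lambda_monthly_cost(requests=500_000_000, avg_ms=200, memory_mb=512)
print(f"Lambda @ 1M req/mo:   ${low:,.2f}")
print(f"Lambda @ 500M req/mo: ${high:,.2f}")
print(f"2 instances 24/7:     ${instance_monthly_cost(2):,.2f}")
```

Under these assumptions, Lambda is far cheaper at low volume and far more expensive at sustained high volume, which is exactly why the comparison must be run against realistic load, not a guess.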
Microservices with Kubernetes (K8S) on EC2
K8S is an open-source container-orchestration system for automating application deployment, scaling, and management. It is a popular platform for deploying microservices over Docker, with a plethora of features for managing, scaling, and deploying large-scale microservices applications. K8S can be deployed on EC2 and configured to leverage several AWS-native networking and elasticity features seamlessly. This model offers maximum control to the user, albeit at a higher degree of complexity compared to its serverless or managed variations. A self-managed K8S deployment requires infrastructure to be sized, procured, hardened, managed, and monitored continuously, and optimizing instance and cluster sizing can be challenging as well. This calls for skilled resources who are experienced at tracking platform versions and ensuring that production and other environments stay current with the latest security, performance, and quality updates.
Enterprises with sufficient experience in complex orchestration frameworks are well positioned to leverage K8S for large-scale microservices deployments. Other organizations are better off choosing semi-managed or fully managed container orchestration frameworks that offload the heavy lifting of platform maintenance.
EKS with Fargate
One such semi-managed K8S platform service is EKS – Amazon Elastic Kubernetes Service. EKS runs the Kubernetes management infrastructure across multiple AWS Availability Zones, automatically detects and replaces unhealthy control plane nodes, and provides on-demand, zero-downtime upgrades and patching. It is highly scalable and removes undifferentiated heavy lifting so organizations can focus on business logic and workload infrastructure. Note, however, that EKS provides only the control plane as a managed service; adopting it alone does not free users from planning, procuring, and managing the worker nodes. For that, AWS offers Fargate, a fully managed serverless compute option for running containers that spares customers all aspects of managing the underlying container infrastructure. The combination of EKS with Fargate provides the power of deploying containerized microservice applications on AWS without the complexity of managing the orchestration platform.
ECS with Fargate
ECS is a managed container orchestration service like EKS, but it is AWS's own platform rather than a Kubernetes distribution. For organizations looking to deploy containerized workloads on fully managed AWS services, there are two options: EKS with Fargate and ECS with Fargate. While EKS is fully compliant with the Kubernetes specification, ECS is an AWS-native platform that integrates seamlessly with many AWS services. For organizations that wish to deploy container workloads across different clouds, EKS may be the preferred choice, as most public cloud vendors offer a variation of a managed Kubernetes control plane as a service. For AWS-oriented organizations, ECS offers the advantage of easy integration with several AWS services. ECS can also be used to run containerized workloads on EC2 instances without Fargate; this requires customers to procure, size, and manage the underlying instances, while ECS provides the orchestration as a service.
Serverless is not suitable for all types of services. Once microservices are scoped out and their functional and nonfunctional requirements are identified, teams may mark each one for a serverless Lambda implementation or a containerized microservice implementation. In the latter case, teams can choose from one of five options: K8S on EC2, EKS with workloads on EC2, EKS with Fargate, ECS with workloads on EC2, and ECS with Fargate. Hybrid deployments on AWS typically consist of serverless alongside one of these five options. Provided below is a table that identifies the criteria typically used to select among them.
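The selection logic for the containerized options can be sketched as a small decision helper. The criteria names and the mapping below are illustrative distillations of the trade-offs discussed above, not an official AWS framework:

```python
# Hypothetical decision helper sketching the container-option trade-offs
# discussed above. Criteria and mapping are illustrative assumptions.
def pick_container_option(needs_kubernetes_api: bool,
                          multi_cloud: bool,
                          manage_own_nodes: bool) -> str:
    if multi_cloud or needs_kubernetes_api:
        # Kubernetes-compatible choices (self-managed K8S on EC2 is a
        # further option for teams with deep orchestration experience)
        if manage_own_nodes:
            return "EKS with workloads on EC2"
        return "EKS with Fargate"
    # AWS-native choices
    if manage_own_nodes:
        return "ECS with workloads on EC2"
    return "ECS with Fargate"

print(pick_container_option(needs_kubernetes_api=False,
                            multi_cloud=False,
                            manage_own_nodes=False))
```

In practice the decision involves more dimensions (cost, team skills, compliance, existing tooling), which is what the criteria table captures; the sketch only shows the shape of the reasoning.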
In summary, the AWS platform provides a wide range of options for customers to deploy their microservices applications on. A well-informed customer will be able to select the most appropriate option, or combination of options, to create, deploy, and manage highly scalable, robust, cost-effective cloud-native applications on AWS.