Microservices 101


Following on from last week’s cloud blog on AWS Lambda, this week I want to explore microservices on AWS and explain their purpose and benefits as concisely as possible. Microservices are an alternative to a traditional monolithic architecture. Before we examine microservices, it’s important to break down the principles of monolithic design…

As the name ‘mono’ suggests, this type of architecture is composed of tightly coupled components that run as a single process. This has several drawbacks. Firstly, a dysfunction or error in one component can compromise the whole application. Secondly, updates and improvements are difficult to implement because the all-encompassing design means even a small change requires rebuilding and redeploying the entire application. Thirdly, because the whole application needs to be scaled to meet a spike in demand from just one component, a monolithic system is not always an efficient option in terms of cost and compute power.

So, what are microservices and how can they overcome these problems?

Microservices are individual stateless components, typically run in containers, that communicate with one another over well-defined APIs (Application Programming Interfaces).
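To make that concrete, here is a minimal sketch of what one such stateless service might look like, assuming a hypothetical "inventory" microservice exposing a single API endpoint (the service name, `check_stock` function and hardcoded catalogue are all illustrative, not from any real system):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def check_stock(item_id: str) -> dict:
    # Stateless: every request carries all the context it needs, so any
    # instance of this service can answer it. A real service would query
    # a database; the catalogue is hardcoded purely for the sketch.
    catalogue = {"sku-1": 12, "sku-2": 0}
    return {"item": item_id, "in_stock": catalogue.get(item_id, 0) > 0}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Other services call this over HTTP, e.g. GET /stock/sku-1
        item_id = self.path.rsplit("/", 1)[-1]
        body = json.dumps(check_stock(item_id)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# To serve requests: HTTPServer(("", 8080), InventoryHandler).serve_forever()
```

Because the service holds no state between requests, you can run many identical copies behind a load balancer and scale them up or down independently of the rest of the system.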

Each containerised service runs independently, allowing errors to be isolated and resolved without bringing down the rest of the application. Containers provide increased flexibility to make changes easily, as well as high scalability. They are self-contained, packaging everything a service needs in order to run, such as code, libraries, dependencies and runtime, while sharing the host operating system’s kernel.

Docker is the most widely used platform for building and running containers on Linux or Windows. The Docker engine sits between the containers and the operating system, providing the runtime layer that allows the same container image to run consistently across different environments.
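A container image is defined declaratively in a Dockerfile. The sketch below, assuming the hypothetical Python service and entry point `app.py` mentioned earlier, shows how the code, its library dependencies and the runtime are all bundled into one self-contained image:

```dockerfile
# Base image supplies the runtime (Python) and minimal OS libraries.
FROM python:3.12-slim

WORKDIR /app

# Install the service's library dependencies first (cached between builds).
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the service code itself.
COPY app.py .

# One container, one service: it listens on a single port and does nothing else.
EXPOSE 8080
CMD ["python", "app.py"]
```

The image is then built with `docker build` and started with `docker run`; because everything the service needs travels inside the image, it behaves identically on a laptop, a test server or in the cloud.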

All in all, these qualities make microservices a highly resilient architectural solution for complex systems that are frequently updated and require rapid delivery speed. However, every application has different requirements and demands, so the costs and benefits of migrating to containers should be weighed carefully before overhauling an application.

Where can you run your containers?

Containers can be run in AWS Elastic Container Service (ECS) or Elastic Kubernetes Service (EKS). The main difference is that ECS is AWS’s own orchestrator, designed to integrate tightly with other AWS services, whereas EKS runs Kubernetes, an open-source orchestrator that is also available from other cloud providers. Both provide an efficient service for managing containers, so the preference really comes down to whether you are settled in AWS or like to float between multiple clouds.

What is Fargate?

Fargate is where the phenomenon of serverless and the architectural autonomy of containers collide. Fargate is a serverless compute engine that works with both ECS and EKS, removing the need to manually provision servers. It allocates the right amount of compute power to your containers, so you don’t pay for capacity that exceeds demand at a given time. Significantly, Fargate optimises security by design: each ECS task or EKS pod runs in its own isolated kernel with dedicated CPU, memory and storage that are not shared with other tasks or pods.
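With Fargate you declare how much compute each task needs and AWS provisions it for you. A minimal sketch of an ECS task definition for the Fargate launch type might look like this (the family name, container name and image URI are placeholders, not a real deployment):

```json
{
  "family": "inventory-service",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "inventory",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/inventory-service:latest",
      "portMappings": [{ "containerPort": 8080 }]
    }
  ]
}
```

The `cpu` and `memory` values are the key part: they tell Fargate exactly how much capacity to allocate to this task, so there are no idle servers to manage or pay for.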

So that’s my version of Microservices 101. I hope I have given you a concise overview of containers, Docker, ECS, EKS and their use with Fargate. Keep an eye out for my next blog where I’ll be exploring a different aspect of technical architecture!