Microservice architecture is becoming more popular than ever, especially among agile and DevOps teams. You may have heard plenty of buzzwords like “microservices”, “containers”, “Kubernetes” and many more. There is no doubt that the microservices world is full of complex theory and terminology. Here we introduce a glossary of the 11 most common terms and acronyms to help grow your dictionary and understand how they could benefit your team.
Microservices are an architectural pattern where various loosely coupled services work together to form an application. Each of these services focuses on one single purpose only, encapsulating all related logic and data. Communication between services is conducted over well-defined APIs.
Some may ask – What is the difference between Microservices and Web Services?
While the latter always provides services over HTTP via the World Wide Web, microservices are not restricted to that.
They can run without HTTP, communicate through APIs, be served over file descriptors, or be built on messages or email.
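To make the “well-defined API” idea concrete, here is a minimal sketch of a single-purpose service and a second process consuming it over HTTP, using only the Python standard library. The service name, route and stock data are illustrative, not from any real system.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A hypothetical "inventory" microservice: one purpose only, exposing a
# small, well-defined JSON API over HTTP.
STOCK = {"sku-1": 12, "sku-2": 0}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/")                 # e.g. GET /sku-1
        if sku in STOCK:
            body = json.dumps({"sku": sku, "in_stock": STOCK[sku]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown sku"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), InventoryHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Another service would consume this API as a plain HTTP client:
url = f"http://127.0.0.1:{server.server_port}/sku-1"
reply = json.loads(urllib.request.urlopen(url).read())
server.shutdown()
```

Because the contract is just HTTP and JSON, the consuming service neither knows nor cares what language or framework the inventory service is written in.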
Application under a monolithic architecture
The application is built as a single unit where functions and services are tightly coupled. When an error is thrown and not handled properly, the entire unit is brought down, including services that are unrelated to the error. Also, for any code change or version update on even one service to take effect, the entire monolith (including all other services!) must be restarted.
Application under a microservice architecture
The issues mentioned in the above section won’t arise. Since the service components are decoupled and self-contained, errors thrown in one service are handled right there. Similarly, to release code changes on one service, restarting that specific service suffices. Other services are left unaffected. As a result, the deployment speed of the application as a whole improves.
The scalability of the application improves too, since each service can be scaled independently. Say the traffic on service A has increased drastically over the last 5 minutes and it is now struggling to deal with the increased number of requests due to a lack of computational power. This issue can be tackled by simply scaling up the resources on service A. Once again, other services are left untouched.
- Service-level independent development and deployment cycle
- Shortened deployment time
- Different services can adopt different tech stacks
- Service-level fault isolations
- Improved modifiability
- Improved scalability
Based on the Manifesto for Agile Software Development and the 12 Principles behind it, agile development is an umbrella term for software development methodologies that promote discipline and collaboration between cross-functional teams throughout the app development process.
Agile development emphasizes iterative development where each team and each team member should be self-organized and accountable for the product quality. It promotes a set of software engineering best practices such as simple design, pair programming, continuous integration and test-driven development etc. The ultimate goal of agile development is to deliver a high quality product rapidly through frequent iteration and careful inspection. To achieve this, it is important to keep features small and incremental.
- End users are satisfied with a rapidly delivered product of high quality and reliability
- Clients are satisfied when app vendors can respond to requests and iterate versions quickly
- Vendors are able to shorten the product delivery time
- Clients and vendors can work closely together to ensure every new feature is aligned with business goals and developed rapidly
Server-less Cloud Computing
Server-less cloud computing (or “cloud functions hosting”) is a cloud service that allows building and deploying applications on cloud without involving infrastructure management. With server-less, you can avoid provisioning or managing servers, virtual machines or containers.
Traditionally, we have to carefully provision, manage and monitor bare-metal servers, whether they are located on-site or in the cloud. To provide all-day service, the servers are up and running 24/7 even when there are no incoming events. We also have to apply regular security updates and scale resources according to real-time conditions. All this work consumes a considerable amount of your team’s time and effort, which could be invested elsewhere, e.g. delivering new features, refactoring existing functions or trying out the latest tech trends.
Here comes an alternative. Server-less cloud computing allows users to build and run applications without having to think about servers. You don’t have to mess with the server’s infrastructure. All you have to do is upload the application code and let the service provider take care of the rest.
The term “server-less” can be misleading though – an actual server is still up and running somewhere, you just don’t have to care about it.
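In practice, the code you upload is typically just a handler function: the platform receives an event, spins up a runtime, calls the handler, and tears everything down afterwards. The sketch below follows the AWS Lambda-style (event, context) signature; the event fields and the greeting payload are illustrative.

```python
import json

# A minimal server-less function: no server code at all, just a handler the
# platform invokes once per event. The "name" field is a made-up example.
def handler(event, context=None):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally we can call it directly; in production the platform does this for us.
response = handler({"name": "Oursky"})
```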
Using server-less cloud computing services might help you save some money as well.
You pay for the actual amount of computational resources consumed, unlike renting a dedicated server where you get charged a fixed amount regularly. In other words, you pay only when your code runs.
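A quick back-of-the-envelope comparison makes the billing difference tangible. All figures below are hypothetical (they merely resemble typical per-request and per-GB-second pricing), not a quote from any provider.

```python
# Dedicated server: flat monthly fee, charged even when idle.
flat_monthly_cost = 50.00

# Server-less: pay per invocation plus compute time actually used.
invocations_per_month = 100_000
price_per_invocation = 0.0000002          # hypothetical request fee
gb_seconds_per_invocation = 0.125 * 0.2   # 128 MB held for 200 ms
price_per_gb_second = 0.0000166667        # hypothetical compute fee

serverless_cost = invocations_per_month * (
    price_per_invocation + gb_seconds_per_invocation * price_per_gb_second
)
# With low or bursty traffic, the pay-per-use bill stays far below the flat fee.
```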
Sometimes people mix up the terms server-less and PaaS (Platform-as-a-Service). In fact, PaaS runs anything you deploy onto the platform, whether it is a single function or a huge service. Though both server-less and PaaS support horizontal scaling, server-less applications scale instantly, automatically and on demand, while PaaS needs manual provisioning.
- Time and effort used on server set-up and maintenance can be allocated elsewhere
- On-demand billing
- Increased efficiency for deployment with lower cost
Corporations which adopt Server-less
- Coca-Cola
Popular Server-less platforms
- AWS Lambda
- IBM Cloud Functions
- Microsoft Azure Functions
- Oursky Skygear Cloud Functions
The word DevOps is a combination of Development and Operations. In the traditional development lifecycle, a.k.a. the Waterfall model, a product is released after going through the following phases: Design -> Development -> Test -> Deployment. What if some part of the design changes at any of the phases listed above? Do we need to go through every phase again?
To tackle these issues, we have the Agile approach. The above phases are repeated and iterated through swiftly and frequently. DevOps practices then come in, adopting CI/CD (Continuous Integration & Continuous Delivery) approaches through automation.
As in physics, when things move fast enough they behave differently. That is something DevOps tries to address. By practicing the concept of CI/CD, developers do not always need to care about which servers the code is being deployed to. A developer only needs to push code to the source control server. The uploaded code is run through automated tests and then built, whether the artifact is an .apk, an .ipa, index.js, a .jar, a Docker image or something else.
By practicing IaC (Infrastructure as Code), every instance behaves the same, eliminating as many human errors as possible. The whole system is documented and reproducible. Discussions and reviews can now be conducted on code, instead of having to dive into various servers and type in long procedural commands. Code changes now apply to all server instances, making them more manageable.
What is DevOps, really?
DevOps can be a culture as much as a set of practices. It breaks down the barriers between Development and Operations, accelerating the development cycle by applying the practices of CI/CD and IaC, and tools like Server-less Cloud Computing, Containers and Kubernetes. DevOps is involved in basically every part of the development lifecycle: Testing, Deployment, Monitoring and Scaling (Maintenance).
- Accelerate development and deployment speed
- Enhance product reliability and quality by emphasizing shared responsibilities
- Enhance security by applying automated security protocols and best practices
- Improve scalability by automating and standardizing the process
Continuous Integration / Continuous Delivery (CI/CD)
Continuous Integration (CI) means continuously integrating everyone’s work within a team into one shared mainline (e.g. a development branch on Git) at high frequency, to reduce integration overhead. This is usually done through automated build and test processes.
Continuous Delivery (CD) is an approach where teams produce and release software in short cycles, so testing and monitoring can be carried out more frequently. CD aims to keep the software stable and ready for release at any time.
CI/CD are two of the most widely adopted philosophies among DevOps teams, aiming to build an automated pipeline that delivers the product to end users at a fast pace.
Here are some generally good CI/CD practices:
- Each developer frequently commits the changes made to his/her working branch
- Regular merging to and rebasing from the mainline
- Frequent build and release cycle on the software
- Ensure each team member gets access to the latest build
Key Benefits of CI
- Avoid debugging at the final stage; bugs are found at earlier stages
- The most recent code is available to all members at all times
- Easier to locate issues across versions as releases are done frequently
- Reduced version control overhead due to frequent code check-ins
Key Benefits of CD
- Improve product quality and reliability through continuous testing before formal releases
- Accelerate time to market because of fast builds
- Build the right product by getting feedback and test results back quickly
Continuous deployment is an approach that shortens the time between writing code and releasing it into production environments. Through automation, it ensures a reliable, validated application or service can be deployed to production at any time, in a frequent and sustainable way.
Continuous deployment can be regarded as the step following continuous delivery. While continuous delivery ensures developers’ changes are tested and deployable at any time, which is fundamental to the DevOps philosophy, continuous deployment, which automatically deploys those changes to production, is optional.
- Get feedback quickly from users by enabling frequent release of updates
- Improve product quality and build the right product for the market
- Earlier return on investment in the product
Infrastructure as Code (IaC)
Infrastructure as Code (IaC) forms the foundation of the DevOps methodology. It is a way to manage and provision infrastructure such as servers, virtual machines or load balancers in a codified and automated manner. Each configuration file generates the exact same deployment environment. This prevents inconsistency and reduces manual work (no need to set things up through a portal). It also assists in scaling and automation.
IaC is not just about code or automation. It requires modelling the infrastructure properly with machine-readable definition files (e.g. JSON, XML or YAML) following proven software best practices. With these best practices in place, we can ensure the infrastructure can be redeployed to multiple servers without errors, i.e. a single person can configure multiple machines properly with just one click.
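The declarative idea behind IaC tools can be sketched as follows: infrastructure is described in a machine-readable definition (a plain dict below, standing in for a JSON/YAML file), and a reconciler computes which changes would make the actual environment match it. The resource names and fields are illustrative.

```python
# Desired state, as it would be checked into version control.
desired = {
    "web-server":    {"instances": 3, "image": "app:v2"},
    "load-balancer": {"instances": 1, "image": "lb:v1"},
}
# What is actually running right now.
actual = {
    "web-server":    {"instances": 2, "image": "app:v1"},
}

def plan(desired, actual):
    """Diff desired vs. actual state into a list of required changes."""
    changes = []
    for name, spec in desired.items():
        if name not in actual:
            changes.append(("create", name))
        elif actual[name] != spec:
            changes.append(("update", name))
    for name in actual:
        if name not in desired:
            changes.append(("delete", name))
    return changes

changes = plan(desired, actual)
# Running the plan against an environment that already matches yields nothing
# to do: applying the same definition twice is idempotent.
```

This is why IaC enables review-on-code: the diff above, not a sequence of manual commands, is what gets discussed and applied.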
- Accelerate the production speed with early testing and faster development
- Reduce potential risks from errors or security vulnerabilities
- Prevent inconsistency of deployment environments
- Cost efficient with less manual work involved
- Better scalability
Infrastructure as a Service (IaaS)
Infrastructure as a Service (IaaS) is a cloud computing service that provides computing resources, such as cloud hosting and virtualized hardware, over the internet.
IaaS often charges based on demand, e.g. how much computing power you consume from virtual machines. Since it is hosted by third-party service providers, you are able to scale up and down quickly on a pay-per-use basis. IaaS can be used at production grade, and it is especially effective when your infrastructure demand is temporary or experimental, for instance when developing a new product which doesn’t have a significant workload yet.
- Hassle-free infrastructure management: just enjoy the service and support
- Cost efficiency, as you do not need to set up your own data center
- Start development at any time without spending time to set up infrastructure
- Scalability with unknown demand
- Focus on what truly matters to users (e.g. business value, UI/UX etc.) without spending provisioning effort on maintaining infrastructure
- Enjoy better stability and reliability with best practices of infrastructure in place
- Better security and disaster recovery strategies provided by infrastructure experts
- AWS EC2
- Google Cloud Platform
- Microsoft Azure
People nowadays use the terms Container and Docker interchangeably. But a container actually refers to the capability to run programs consistently across various runtime environments as isolated, self-contained units, while Docker is one implementation of that capability.
A container abstracts the OS and only packs the user space (the executables and files of our app), so it is considered lightweight compared to a Virtual Machine (VM). A VM, on the other hand, abstracts the hardware, packing the code and a guest OS into an image that runs on top of a hypervisor or host OS.
Docker is a higher-level abstraction over LXC / libcontainer. Docker is commonly regarded as the industry standard for containers, and most developers code and deploy containers with Docker. Docker provides a standard that defines containers, delivering a myriad of benefits like portability, easy versioning and sharing of images, which is a good practice of IaC (Infrastructure as Code).
- Can be deployed on different service providers
- Lightweight design with more flexible resource allocation
- Able to run anywhere because of standardization and consistency
- Better scalability
- Google Kubernetes Engine (GKE)
- AWS Elastic Container Service (ECS)
- Amazon Elastic Container Service for Kubernetes (EKS)
- Docker Swarm
- Apache Mesos
By official definition, Kubernetes (short form as “k8s”) is “an open-source system for automating deployment, scaling, and management of containerized applications”.
Imagine your product becomes a worldwide hit overnight and massive numbers of users are rushing in. While traffic growing drastically over a short span of time is a good thing, you’ll have a few problems to solve. How can you keep the system healthy while scaling the resources to deal with the significantly increased traffic? How do you release features gradually to test users’ acceptance of them?
A costly solution to handle the grown traffic would be renting a lot more servers, paying for more tools and expanding your DevOps (or just operations) team to closely monitor your services. But you might not have the big bucks (yet) to support this, and is this approach really manageable and scalable?
Fear not, we have tools like Kubernetes (k8s). Kubernetes orchestrates containers in deployments, scaling resources automatically. You can also set up and implement approaches like High Availability (HA) architecture and resource planning, plus add-ons like alerting and CI/CD on k8s. The benefits of adopting containerization, like portability, independence from service providers and productivity, are all retained.
In summary, k8s provides a whole ecosystem to operate on. Now we can handle the growth of traffic by automatically scaling the k8s cluster workers and the replicas of containers (pods in k8s terms). Rolling updates can be achieved with zero downtime as well. It also allows resources to be utilized in an over-provisioned fashion for efficient use or guaranteed Quality of Service.
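The automatic scaling mentioned above boils down to a simple rule, in the spirit of Kubernetes’ Horizontal Pod Autoscaler: grow or shrink the replica count so that the average load per pod returns to a target utilization. The utilization figures below are made up for illustration.

```python
import math

# HPA-style rule of thumb:
#   desired = ceil(current_replicas * current_utilization / target_utilization)
def desired_replicas(current_replicas, current_utilization, target_utilization):
    return math.ceil(current_replicas * current_utilization / target_utilization)

# Traffic spikes: 4 pods are running at 90% CPU while we target 60%.
scaled = desired_replicas(4, 0.90, 0.60)
```

The same formula scales back down when traffic subsides, which is how a cluster avoids paying for idle replicas.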
- Improve productivity for development and deployment
- Enable integration of CI/CD with zero downtime
- Independent of service/cloud providers (pods or containers)
- Resources utilization efficiency
- Enable observability
- Highly available clusters running on multiple zones
To illustrate the mentioned terms with a real-life example, imagine you are going through a mobile app development process. To optimize the development process, you can:
- Adopt agile development to introduce a corporate culture of disciplines and encourage collaboration between cross-functional teams.
- Adopt DevOps and apply best practices such as CI/CD, continuous improvement and continuous deployment to enable a rapid and reliable deployment.
- Opt for VMs / server-less / microservices (e.g. Skygear) in the architecture to minimize the maintenance costs of physical infrastructure.
- For the main API server, you may want to apply containers and an orchestrator (e.g. Kubernetes) to standardize the environment from development to production.
Of course, a glossary of 11 popular terms is not enough to understand the full picture of the microservices ecosystem, but we hope these terms and acronyms have helped you get a basic understanding of what the ongoing trends in microservices, container technology and server-less cloud computing are about. We will keep updating the glossary, so stay tuned by subscribing to us!