Part 1: How to Build a Microservices API gateway using OpenResty
In this article, we move on to the hands-on OpenResty chapters. Over the next three chapters, I will show you how to implement a microservice API gateway. Along the way, we will not only apply the OpenResty knowledge covered earlier, but I will also show you how to build a new product and open-source project from scratch, from multiple perspectives: industry, product, and technology selection.
What does a Microservices API Gateway do?
Let's first look at the role of the microservice API gateway. The picture below is a brief description:
As we all know, the API gateway is not a new concept; it has existed for more than ten years. Its main role is to serve as the entry point for traffic and to handle business-related requests in a unified way, so that requests can be processed more safely, quickly, and accurately. It has the following traditional functions:
- Reverse proxy and load balancing, which are consistent with the positioning and functions of NGINX;
- Dynamic features at runtime, such as dynamic upstreams, dynamic SSL certificates, and dynamic traffic and rate limiting, which the open-source version of NGINX does not provide;
- Active and passive health checks of upstreams, as well as service circuit breakers;
- Expand based on the API gateway to become a full-lifecycle API management platform.
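The traditional reverse-proxy and load-balancing functions above map directly onto plain NGINX configuration. Here is a minimal sketch; the upstream name, addresses, and ports are made up for illustration:

```nginx
# Two hypothetical application instances behind a round-robin upstream
upstream backend {
    server 10.0.0.1:8080 weight=2;   # receives twice as many requests
    server 10.0.0.2:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;      # reverse proxy to the upstream group
        proxy_set_header Host $host;    # preserve the original Host header
    }
}
```

Note that this configuration is static: changing the upstream nodes or the certificate requires editing the file and reloading NGINX, which is exactly the gap the dynamic features above are meant to close.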
In recent years, business traffic is no longer initiated only by PC clients and browsers; more and more of it comes from mobile phones and IoT devices, and with the rollout of 5G this traffic will keep growing. At the same time, as systems move to microservice architectures, the traffic between services has also begun to grow explosively. In these new business scenarios, more advanced API gateway functions have naturally emerged:
- Cloud-native friendly, the architecture should become lightweight and easy to be containerized.
- Support statistics and monitoring components such as SkyWalking.
- Support gRPC proxying, as well as protocol conversion between HTTP and gRPC, converting users' HTTP requests into gRPC calls to internal services.
- Assume the role of the OpenID Relying Party, connect with the services of identity authentication providers such as Auth0 and Okta, and treat traffic security as a top priority.
- Realize Serverless by dynamically executing user functions at runtime, making the edge nodes of the gateway more flexible.
- No vendor lock-in, and support for hybrid cloud deployment architectures.
- Gateway nodes should be stateless, so they can be scaled out and in at will.
When a microservice API gateway provides the functions above, a user's services only need to care about the business itself. Functions unrelated to the business logic, such as service discovery, circuit breaking, authentication, rate limiting, statistics, and performance analysis, can all be handled at the independent gateway layer.
From this point of view, an API gateway can replace all the functions of NGINX and handle north-south traffic; it can also play the roles of the Istio control plane and the Envoy data plane to handle east-west traffic.
Why reinvent the wheel?
Because the microservice API gateway occupies such an important position, it has always been a battleground, and traditional IT giants have been in this field for a long time. According to the full lifecycle API management report released by Gartner in 2018, Google, CA, IBM, Red Hat, and Salesforce are all leading vendors, while Kong, which developers are more familiar with, sits among the visionaries.
So, the question is, why do we have to reinvent a new wheel?
Simply put, none of the existing microservice API gateways met our needs. Let's first look at closed-source commercial products. Their functionality is complete, covering the full lifecycle management of APIs: design, multi-language SDKs, documentation, testing, and release. They also provide SaaS services, and some are integrated with public clouds, which makes them very convenient to use. But at the same time, they bring two pain points.
The first pain point is the platform lock-in problem. The API gateway is the entrance of business traffic. Unlike the non-business traffic accelerated by CDNs such as pictures and videos, which can be migrated at will, the API gateway will bind a lot of business-related logic. Once you use a closed-source solution, it is difficult to migrate to other platforms smoothly and at a low cost.
The second pain point is that they cannot be extended through secondary development. Large and medium-sized enterprises usually have their own unique needs and require customized development, but with a closed-source product you can only rely on the vendor and cannot do the secondary development yourself.
This is one reason why open-source API gateway solutions have become popular. However, the existing open-source products are not omnipotent, and they also have many shortcomings:
- They rely on relational databases such as PostgreSQL and MySQL. Gateway nodes can then only poll the database for configuration changes, which not only makes configuration slow to take effect but also adds complexity to the code, making it hard to understand. The database also becomes a single point of failure and a performance bottleneck for the system, so overall high availability cannot be guaranteed. In a Kubernetes environment, a relational database is even more cumbersome and hinders rapid scaling.
- Plugins cannot be hot-loaded. Whenever you add a new plugin or modify the code of an existing one, you must reload the service for the change to take effect, just as NGINX must be reloaded after its configuration is modified, and this affects in-flight user requests.
- The code structure is complex and hard to grasp. Some open-source projects add multiple layers of object-oriented encapsulation, which blurs even simple logic. For an API gateway, a straightforward expression is clearer and more efficient, and also more conducive to secondary development.
Therefore, we need a lighter, cloud-native, development-friendly API gateway. Of course, we cannot design it in a vacuum; we need a deep understanding of the characteristics of existing API gateways. Here, the landscape published by the Cloud Native Computing Foundation (CNCF) is a good reference:
API Gateway Core Components and Concepts
Of course, before implementing it, we need to understand the core components of an API gateway. Based on the functions we discussed earlier, it needs at least the following components to run.
The first is the Route. It matches client requests against a set of defined rules, then loads and executes the corresponding plugins according to the match result, and forwards the request to the specified upstream. These routing rules can be composed of the request's URI, host, header, and so on. The familiar location directive in NGINX is one implementation of routing.
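To make the NGINX analogy concrete, here is a sketch of how a location block expresses a routing rule; the paths and upstream names are illustrative:

```nginx
# A static routing rule: requests whose URI starts with /api/user
# are forwarded to one upstream, everything else to a default one.
location /api/user {
    proxy_pass http://user_service;
}

location / {
    proxy_pass http://default_service;
}
```

These rules are compiled into the configuration at startup, which is precisely why a gateway that needs dynamic routing must match routes in code at runtime instead.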
The second is the plugin, the soul of the API gateway. Functions such as authentication, traffic and rate limiting, IP restriction, Prometheus, Zipkin, and so on are all implemented through plugins. Since they are plugins, they need to be plug-and-play; moreover, plugins should not interact with each other. Just like building with Lego bricks, we need uniform rules and an agreed development interface to interact with the lower layer.
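As a sketch of what "uniform rules and an agreed development interface" could look like, here is a hypothetical plugin skeleton in Lua. The module layout, the priority field, and the access-phase handler are all assumptions for illustration, not any particular gateway's real API:

```lua
-- A hypothetical rate-limiting plugin skeleton.
-- The gateway core discovers plugins by name, sorts them by priority,
-- and calls each phase handler (e.g. access) for every matched route.
local _M = {
    name     = "limit-count",
    priority = 1000,   -- higher priority runs earlier
    version  = 0.1,
}

-- Called once per request in the access phase.
-- `conf` is this plugin's validated configuration for the matched route;
-- `ctx` carries per-request state shared between plugins and the core.
function _M.access(conf, ctx)
    local remaining = tonumber(conf.count) - 1
    if remaining < 0 then
        return 503   -- reject the request when the quota is exhausted
    end
end

return _M
```

Because every plugin exposes the same table shape, the core can load, order, and execute them without knowing anything about their internals, which is what makes plug-and-play and hot-loading feasible.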
Next is the schema. Since this is a gateway for processing APIs, it must verify the format of API data, such as data types, allowed field contents, and required fields. This calls for a unified, independent layer of schema definition and validation.
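Such validation can be delegated to a standard like JSON Schema. As an illustration, the configuration of a rate-limiting plugin could be described as follows; the field names here are hypothetical:

```json
{
    "type": "object",
    "properties": {
        "count":       { "type": "integer", "minimum": 1 },
        "time_window": { "type": "integer", "minimum": 1 },
        "key":         { "type": "string", "enum": ["remote_addr", "host"] }
    },
    "required": ["count", "time_window"]
}
```

With the schema defined once in data, every plugin's configuration can be checked by the same generic validator instead of hand-written checking code.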
Lastly is storage. It stores the users' various configurations and is responsible for pushing changes to all gateway nodes. It is a critical low-level component: its selection determines how the upper-layer plugins are written and whether the system can remain highly available and scalable, so the decision must be made carefully.
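For example, if the storage layer were etcd rather than a relational database, gateway nodes would not need to poll at all: they could watch a configuration prefix and have every change pushed to them. The key layout below is an assumption for illustration:

```shell
# Watch all route configurations under a hypothetical /gateway prefix;
# every create, update, or delete is pushed to the watcher immediately.
etcdctl watch --prefix /gateway/routes
```

This push model is what lets configuration take effect in milliseconds across a fleet of stateless gateway nodes.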
In addition, on top of these core components, we also need to abstract out several concepts that are shared across different API gateways.
Let's talk about the Route first. A route contains three parts: matching conditions, bound plugins, and the upstream, as shown in the following figure:
We can put all of the configuration directly in the Route, which is the simplest approach. But when there are many APIs and upstreams, doing so produces a lot of duplicated configuration. That is where the two concepts of Service and Upstream come in, to add a layer of abstraction.
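A fully self-contained Route might look like the following sketch; the JSON layout is illustrative, not any particular gateway's format:

```json
{
    "uri": "/api/user/*",
    "host": "example.com",
    "plugins": {
        "limit-count": { "count": 100, "time_window": 60 }
    },
    "upstream": {
        "type": "roundrobin",
        "nodes": { "10.0.0.1:8080": 1, "10.0.0.2:8080": 1 }
    }
}
```

If dozens of routes all point at the same nodes with the same plugins, every one of them repeats the plugins and upstream blocks, which is the duplication problem described above.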
Let's look at the Service next. It is an abstraction of a certain type of API, and can also be understood as an abstraction of a group of Routes. A Service usually corresponds 1:1 to an upstream service, while the relationship between Routes and Services is usually N:1, as the following picture shows:
Through this layer of Service abstraction, we can factor out the duplicated Plugins and Upstreams. Then, when a Plugin or an Upstream changes, we only need to modify the Service, instead of modifying the data bound to multiple Routes.
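As an illustrative sketch (again in a hypothetical JSON layout), two Routes can share one Service by referencing its id, so the plugin and upstream configuration lives in exactly one place:

```json
{
    "service": {
        "id": "user-service",
        "plugins": { "limit-count": { "count": 100, "time_window": 60 } },
        "upstream": {
            "type": "roundrobin",
            "nodes": { "10.0.0.1:8080": 1 }
        }
    },
    "routes": [
        { "uri": "/api/user/login",  "service_id": "user-service" },
        { "uri": "/api/user/logout", "service_id": "user-service" }
    ]
}
```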
Finally, let's talk about the Upstream. Continuing the example above, if the upstreams in two Routes are the same but the bound plugins differ, we can abstract the upstream out separately, as shown in the following figure:
In this way, when an upstream node changes, the Routes are completely unaware of it; the change is handled entirely inside the Upstream.
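The same idea in a hypothetical JSON sketch: two Routes with different plugins reference one shared Upstream by id, so adding or removing a node touches only the Upstream object:

```json
{
    "upstream": {
        "id": "user-upstream",
        "type": "roundrobin",
        "nodes": { "10.0.0.1:8080": 1, "10.0.0.2:8080": 1 }
    },
    "routes": [
        { "uri": "/api/user/*",   "plugins": { "limit-count": {} },
          "upstream_id": "user-upstream" },
        { "uri": "/admin/user/*", "plugins": { "key-auth": {} },
          "upstream_id": "user-upstream" }
    ]
}
```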
From the derivation of these three main concepts, we can see that the abstractions are based on practical usage scenarios rather than imagination, and that they apply to all API gateways regardless of the specific technical solution.
In this article, we introduced the role, functions, core components, and abstract concepts of the microservice API gateway, which are the basis of the API gateway.
Here is a question for you to think about: can an API gateway handle both traditional north-south traffic and the east-west traffic between microservices? If you are already using an API gateway, you are also welcome to write down your thoughts on technology selection. Feel free to discuss, and to share this article with your colleagues and friends so we can learn and make progress together.