Performance of the Open-Source API Gateways: APISIX 3.0 and Kong 3.0
Zhengsong Tu
November 3, 2022
Background
Apache APISIX is a cloud-native, high-performance, scalable API gateway implemented on top of NGINX and etcd. In addition to the features of traditional API gateways, APISIX offers dynamic routing and plugin hot-reloading, making it especially powerful for API management in cloud-native architectures.
In the fall of 2022, Apache APISIX and Kong released their 3.0 versions at almost the same time. In particular, Apache APISIX 3.0's new features focus on ecosystem, intelligence, and applications. You can check out Apache APISIX 3.0: 11 Highlights of Open Source API Gateway to learn more.
Both are excellent open-source API gateways for microservices. When two products are released simultaneously, many users are interested in how their features and performance differ. In this article, we provide the performance results of tests across four different scenarios.
Testing Method
Request Topology
The following is the topology diagram of the test requests. The stress test tool used was wrk2, and the upstream service used was OpenResty.
[Topology diagram: wrk2 → APISIX → upstream (OpenResty)]
[Topology diagram: wrk2 → Kong → upstream (OpenResty)]
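For reference, a wrk2 run has roughly the following shape. The thread, connection, duration, and rate values below are placeholders rather than the exact values used in this benchmark (see the test scripts referenced in the Deployment section), and the ports assume the default proxy ports of APISIX (9080) and Kong (8000):

```bash
# wrk2 is invoked like wrk, but requires a target request rate via -R.
# Placeholder parameters; the real ones live in the linked test scripts.
wrk -t2 -c100 -d60s -R20000 --latency http://127.0.0.1:9080/hello   # through APISIX
wrk -t2 -c100 -d60s -R20000 --latency http://127.0.0.1:8000/hello   # through Kong
```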
Server Information
This test was performed on a cloud server of size Standard D8s v3 (8 vCPUs, 32 GiB memory). All test-related components were deployed on this server.
Server Environment
| Name | Value |
| --- | --- |
| OS | Debian 10 "buster" |
| ulimit -n | 65535 |
Software Versions
The following are the versions of software used in this test:
| Name | Version |
| --- | --- |
| Docker | 20.10.18, build b40c2f6 |
| APISIX | 3.0.0 |
| Kong | 3.0.0 |
| Upstream | OpenResty 1.21.4.1 |
| Test tool | wrk2 |
Network Setting
When deploying APISIX and Kong in Docker, we used Docker's host network mode to avoid any network overhead that might affect the test results.
Deployment
We chose wrk2 as the benchmark testing tool and OpenResty as the simulated upstream. We deployed APISIX and Kong in Docker with declarative configuration enabled for both.
To make the test results more intuitive, we used only one worker process per gateway. The relationship between load capacity and the number of workers is typically linear, so a single worker is sufficient for the comparison.
Also, we turned off APISIX's proxy-cache and proxy-mirror plugins, as mentioned in the benchmark-related documents of the APISIX project (the proxy-cache and proxy-mirror plugins affect APISIX's performance by about 4%).
Check out the deployment script and test script for reference here.
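To illustrate the setup described above, here is a rough sketch of how the two gateways could be started. The image tags, mount paths, and configuration file names are assumptions for illustration, not excerpts from the linked scripts:

```bash
# Sketch only: image tags, paths, and filenames are assumptions.
# APISIX 3.0 in standalone (declarative) mode on the host network.
# config.yaml is assumed to set deployment.role: data_plane with
# config_provider: yaml, nginx_config.worker_processes: 1, and to leave
# proxy-cache / proxy-mirror out of the enabled plugin list;
# apisix.yaml holds the routes shown in the tests below.
docker run -d --name apisix --network host \
  -v "$(pwd)/apisix_conf/config.yaml:/usr/local/apisix/conf/config.yaml" \
  -v "$(pwd)/apisix_conf/apisix.yaml:/usr/local/apisix/conf/apisix.yaml" \
  apache/apisix:3.0.0-debian

# Kong 3.0 in DB-less (declarative) mode on the host network, one worker.
docker run -d --name kong --network host \
  -e KONG_DATABASE=off \
  -e KONG_DECLARATIVE_CONFIG=/kong/kong.yml \
  -e KONG_NGINX_WORKER_PROCESSES=1 \
  -v "$(pwd)/kong_conf/kong.yml:/kong/kong.yml" \
  kong:3.0.0
```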
Tests
Test #1: 1 Route
This test covers the pure proxy scenario: a single route and no plugins, measuring the baseline proxy performance of APISIX and Kong.
APISIX's configuration:
```yaml
routes:
  -
    uri: /hello
    upstream:
      nodes:
        "127.0.0.1:1980": 1
      type: roundrobin
#END
```
Kong's configuration:
```yaml
_format_version: "3.0"
_transform: true
services:
  - name: hello
    url: http://127.0.0.1:1980
    routes:
      - name: hello
        paths:
          - /hello
```
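Before benchmarking, a quick sanity check confirms that both gateways proxy the route to the upstream. The ports below assume APISIX's default proxy port 9080 and Kong's default proxy port 8000:

```bash
# Expect a 200 response from the OpenResty upstream through each gateway.
curl -i http://127.0.0.1:9080/hello   # via APISIX
curl -i http://127.0.0.1:8000/hello   # via Kong
```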
Performance Comparison
We used QPS (queries per second) as the performance metric and performed a total of 10 rounds of testing.
As we can see from the graph, in the pure proxy scenario the performance of APISIX 3.0 is much higher than that of Kong 3.0. The average QPS of APISIX 3.0 over 10 rounds is 14104, while the average QPS of Kong 3.0 is 9857. The performance of APISIX 3.0 is about 140% of Kong 3.0's.
Test #2: 1 Route + 1 Rate-limiting Plugin
Rate limiting is one of the primary use cases of API gateways. So, in this scenario, we configured each gateway with one route and one rate-limiting plugin.
APISIX's configuration:
```yaml
routes:
  -
    uri: /hello
    upstream:
      nodes:
        "127.0.0.1:1980": 1
      type: roundrobin
    plugins:
      limit-count:
        count: 999999999
        time_window: 60
        rejected_code: 503
        key: remote_addr
#END
```
Kong's configuration:
```yaml
_format_version: "3.0"
_transform: true
services:
  - name: hello
    url: http://127.0.0.1:1980
    routes:
      - name: hello
        paths:
          - /hello
plugins:
  - name: rate-limiting
    config:
      minute: 999999999
      limit_by: ip
      policy: local
```
This test measures the performance of the API gateways in the rate-limiting scenario. We configured the rate-limiting plugins with a very high limit so that the counting logic runs on every request without ever rejecting one.
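One way to confirm the plugins are actually active during the run is to look at the rate-limiting response headers each gateway adds. The header names below are the typical defaults, and the ports again assume 9080 for APISIX and 8000 for Kong:

```bash
# APISIX's limit-count plugin typically adds X-RateLimit-Limit /
# X-RateLimit-Remaining; Kong's rate-limiting plugin adds headers such as
# X-RateLimit-Limit-Minute / X-RateLimit-Remaining-Minute.
curl -si http://127.0.0.1:9080/hello | grep -i ratelimit
curl -si http://127.0.0.1:8000/hello | grep -i ratelimit
```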
Performance Comparison
Again, we performed a total of 10 rounds of testing. We can see from the graph that after enabling the rate-limiting plugin, the QPS of both APISIX 3.0 and Kong 3.0 dropped significantly, but Kong 3.0 dropped notably more. The average 10-round QPS of APISIX 3.0 is 9154, and the average 10-round QPS of Kong 3.0 is 4810. In this scenario, the performance of APISIX 3.0 is about 190% of Kong 3.0's.
Test #3: 1 Route + 1 Rate-limiting Plugin + 1 Authentication Plugin
Authentication is another common use case of API gateways. In this scenario, we configured each gateway with one route, one rate-limiting plugin, and one authentication plugin.
APISIX's configuration:
```yaml
routes:
  -
    uri: /hello
    upstream:
      nodes:
        "127.0.0.1:1980": 1
      type: roundrobin
    plugins:
      key-auth:
      limit-count:
        count: 999999999
        time_window: 60
        rejected_code: 503
        key: remote_addr
consumers:
  - username: jack
    plugins:
      key-auth:
        key: user-key
#END
```
Kong's configuration:
```yaml
_format_version: "3.0"
_transform: true
services:
  - name: hello
    url: http://127.0.0.1:1980
    routes:
      - name: hello
        paths:
          - /hello
plugins:
  - name: rate-limiting
    config:
      minute: 999999999
      limit_by: ip
      policy: local
  - name: key-auth
    config:
      key_names:
        - apikey
consumers:
  - username: my-user
    keyauth_credentials:
      - key: my-key
```
This scenario combines rate limiting and authentication, so multiple plugins work together in the request path, which is a typical way API gateways are used.
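With authentication enabled, the benchmark requests must carry a valid API key. A quick check against the configurations above, again assuming the default proxy ports, might look like this:

```bash
# APISIX's key-auth plugin reads the key from the "apikey" header by default;
# the consumer "jack" above uses the key "user-key".
curl -i http://127.0.0.1:9080/hello -H 'apikey: user-key'

# Kong's key-auth plugin is configured above with key_names: [apikey];
# the consumer "my-user" holds the credential "my-key".
curl -i http://127.0.0.1:8000/hello -H 'apikey: my-key'
```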
Performance Comparison
Again, we did ten rounds of tests to measure QPS.
We can see from the graph that after APISIX enables the limit-count and key-auth plugins, the average QPS of APISIX 3.0 is 8933, which is only slightly lower than the average QPS of 9154 when only the limit-count plugin is enabled.
In Kong 3.0, however, the average QPS dropped to 3977, which is a significant drop compared to the average QPS of 4810 when only the rate-limiting plugin is enabled.
In this scenario, with both the rate-limiting and authentication plugins enabled, the performance of APISIX 3.0 is about 220% of Kong 3.0's.
Test #4: 5000 Routes
This test uses scripts to generate 5000 unique routes. It measures APISIX's and Kong's route-matching performance: how quickly each gateway finds the matching route in a large routing table.
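The actual generation scripts are linked in the Deployment section. As a hypothetical sketch of the idea, a loop like the following could emit an APISIX declarative configuration with 5000 routes that differ only in their URIs (the /hello-N paths are made up for illustration):

```bash
# Hypothetical generator: 5000 routes (/hello-1 ... /hello-5000), all
# pointing at the same upstream, written as APISIX's standalone apisix.yaml.
{
  echo "routes:"
  for i in $(seq 1 5000); do
    cat <<EOF
  - uri: /hello-$i
    upstream:
      nodes:
        "127.0.0.1:1980": 1
      type: roundrobin
EOF
  done
  echo "#END"
} > apisix.yaml
```

The benchmark then sends requests that target one of these URIs, so the gateway has to pick the matching route out of 5000 candidates on every request.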
Performance Comparison
In 10 rounds of testing, the average QPS of APISIX 3.0 is 13787 and that of Kong 3.0 is 9840. The performance of APISIX 3.0 is about 140% of Kong 3.0's.
Conclusion
From the results of testing multiple scenarios, it is evident that:
- The performance of APISIX 3.0 is about 140% of Kong 3.0's when plugins are not used (Test #1 and Test #4).
- The performance of APISIX 3.0 is about 200% of Kong 3.0's when plugins are used (Test #2 and Test #3).
We can see that APISIX maintains a considerable performance advantage in its 3.0 version.