Will Slow Requests in API Gateway Affect Other Requests?
December 30, 2023
A frequently discussed concern in the realm of API gateways is the ability to efficiently handle a substantial number of concurrent requests. Specifically, the question arises: will slow requests significantly increase the response time of other normal requests in the API gateway?
The answer is that APISIX excels in this regard: slow requests do not adversely impact other normal requests. However, API gateway products built on different languages and software architectures may not fare as well.
Various programming languages exhibit varying degrees of affinity towards concurrent software architectures. Early programming languages such as C and Fortran, designed primarily for single-processor systems, possess limited support for concurrency. However, with the advent of multiprocessor and multithreading environments, newer languages like Java and Python incorporate more robust concurrency and parallel processing capabilities. Languages like Go were even designed with concurrency in mind, tightly integrating their concurrency models with language features. The support for concurrency in a programming language not only reflects the technological environment during its inception but also anticipates its intended application scenarios.
Assuming thousands of concurrent requests, a multithreading or multiprocessing architecture (as seen in Java or Python) must allocate thousands of threads or processes to hold request contexts. Even if most of those threads sit idle, the operating system still consumes hardware resources just to maintain them. With coroutines (as in APISIX and Go), by contrast, a surge in concurrent requests requires no additional threads or processes.
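The same principle can be illustrated with Python's asyncio (a stand-in here, not APISIX's actual Lua/Nginx runtime): thousands of concurrent "requests" waiting on a slow upstream are held as lightweight coroutines on a single OS thread, with the simulated upstream delays overlapping instead of serializing.

```python
import asyncio
import threading
import time

async def handle_request(request_id: int) -> int:
    # Simulate waiting on a slow upstream: the coroutine suspends here,
    # and the event loop is free to run other coroutines meanwhile.
    await asyncio.sleep(0.5)
    return request_id

async def serve(n: int) -> list:
    # Launch n "requests" concurrently; no extra OS threads are created.
    return await asyncio.gather(*(handle_request(i) for i in range(n)))

start = time.monotonic()
results = asyncio.run(serve(5000))
elapsed = time.monotonic() - start
print(f"handled {len(results)} requests in {elapsed:.2f}s "
      f"on {threading.active_count()} thread(s)")
```

Despite 5,000 half-second waits, the whole batch finishes in well under a few seconds because the waits overlap on one thread; a thread-per-request design would need 5,000 threads to achieve the same overlap.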
Coroutines, threads, and processes all represent approaches to multitasking but exhibit key differences:
- Scheduling Mechanism: Thread/process scheduling is preemptive and managed by the operating system (OS): the OS decides when to interrupt one thread or process and switch to another. Coroutine scheduling, by contrast, is cooperative, driven explicitly by the programmer or a language library; a coroutine must relinquish control explicitly before another coroutine can run.
- Overhead: Threads and processes are operating-system-level constructs, so creating, switching, and terminating them is relatively expensive. Coroutines live in user space, so the same operations carry much lower overhead.
- Data Sharing and Synchronization: Sharing data between threads or processes requires careful synchronization (mutex locks, read-write locks, semaphores, etc.) to prevent data race conditions. Coroutines running within the same thread can share global variables directly, without such machinery.
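The cooperative-scheduling and lock-free-sharing points can be sketched together in Python's asyncio (an illustrative stand-in, not APISIX's runtime): many coroutines update a plain shared variable with no lock, and the result is still exact, because a coroutine can only be switched out at an explicit `await` point.

```python
import asyncio

counter = 0  # plain global, shared by all coroutines in the same thread

async def worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        # Read-modify-write with no lock: a coroutine is only switched
        # out at an explicit await, so this update cannot be interleaved
        # with another coroutine's update.
        counter += 1
        await asyncio.sleep(0)  # cooperatively yield control to the loop

async def main() -> None:
    # Ten workers run "concurrently", interleaving only at await points.
    await asyncio.gather(*(worker(1000) for _ in range(10)))

asyncio.run(main())
print(counter)
```

With preemptive OS threads, the same unlocked `counter += 1` would be a textbook data race; here the cooperative switch points make the increment safe by construction.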
In the world of APISIX, slow requests merely involve waiting for upstream responses, a process limited to listening for network events without incurring additional system resource overhead. In conclusion, APISIX does not compromise the response time of other normal requests due to the extended response time of certain requests.
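This isolation property can be demonstrated with a small asyncio sketch (again a Python stand-in for the event-loop model, with made-up delay values): a "slow" request parked on a 2-second upstream does not delay a "fast" request handled by the same loop.

```python
import asyncio
import time

async def proxy(name: str, upstream_delay: float) -> tuple:
    start = time.monotonic()
    # Waiting on a slow upstream just parks this coroutine on the event
    # loop; it consumes no CPU and blocks no other coroutine.
    await asyncio.sleep(upstream_delay)
    return name, time.monotonic() - start

async def main() -> dict:
    results = await asyncio.gather(
        proxy("slow", 2.0),   # a request stuck behind a slow upstream
        proxy("fast", 0.01),  # a normal request arriving at the same time
    )
    return dict(results)

timings = asyncio.run(main())
print(timings)
```

The fast request completes in roughly its own 0.01-second upstream time, unaffected by the slow request sharing the same thread, which mirrors the behavior described above for APISIX.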