Monday, March 5, 2018

[prog.c++] Async processing of incoming and outgoing HTTP-requests with RESTinio and libcurl

We started development of RESTinio because we sometimes had to implement REST-like interfaces for legacy systems with one common feature: long response times. We can make a request to a distant system and the response can arrive after a dozen seconds, sometimes after several dozen seconds. It is not appropriate to block a worker thread that handles an incoming HTTP-request for such a long time. That's why we needed an embeddable C++ HTTP-server which supports async request processing. After experimenting with some existing solutions we decided to build our own with very simple goals: it should be user-friendly and very easy to use, but should provide reasonably good performance and be cross-platform. So RESTinio was born.

Time has shown that the situation when someone needs to deal with long-responding distant hosts is not unique to us. Recently we received an interesting question about combining async processing of incoming HTTP-requests via RESTinio with async processing of outgoing HTTP-requests via libcurl.

Answering this question we prepared a demo that includes some simple C++ applications showing how async processing of incoming and outgoing HTTP-requests can be done. Source code for the demo can be found here. This post briefly describes the major aspects of using RESTinio and the curl_multi interface from libcurl.

What Does The Demo Do?

The demo includes several applications. One of them, delay_server, is a simulation of a long-responding distant system. It accepts an HTTP-request, waits for some random time and then answers the request, thus emulating a long-responding external system.

Other applications, bridge_server_*, simulate "front" systems. Each bridge_server_* accepts an HTTP-request, performs a new outgoing HTTP-request to delay_server, waits for the response from delay_server and answers the accepted HTTP-request.

The tricky part is the async processing of all requests. It means that delay_server and the bridge_servers can handle thousands of requests in parallel without blocking work threads.

There are three implementations of bridge_servers. Two of them, bridge_server_1 and bridge_server_1_pipe, use the curl_multi_perform and curl_multi_wait functions. They demonstrate the simplest form of curl_multi usage. The last one, bridge_server_2, uses curl_multi_socket_action and shows the most complex way of using curl_multi.

To try the demo run delay_server and then one of the bridge_servers. Then issue an HTTP-request with the curl/wget utility and see what happens. To issue a lot of parallel requests utilities like ab or wrk can be used (we used ab for testing).

A Few Words About delay_server

delay_server is implemented as a simple single-threaded C++ application. Its source code can be found here (https://bitbucket.org/sobjectizerteam/async_restinio_async_libcurl_en/src/tip/dev/delay_server/main.cpp).

It accepts only HTTP GET requests addressed to URLs like /YYYY/MM/DD, where YYYY, MM and DD are sequences of digits. delay_server uses the Express router to filter incoming HTTP-requests and handle only requests for appropriate URLs.

To make a pause in request processing delay_server uses Asio timers. When a new HTTP-request is accepted, delay_server creates and starts a new Asio timer. When this timer expires, delay_server produces a reply to the accepted request.

Two Ways Of Using curl_multi

Before we go into a discussion of the bridge_servers' implementations it is necessary to give a brief explanation of the two ways of using curl_multi.

The first and maybe the simplest one is the usage of the curl_multi_perform function (maybe in conjunction with curl_multi_wait). Just create a curl_multi instance, then fill it with curl_easy instances, then call curl_multi_perform to force libcurl to do the actual IO-operations, then check for the completion of HTTP-requests by calling curl_multi_info_read.

The main trick here is how to detect the moment when curl_multi_perform should be called. One approach is to check the readiness of IO-operations via a select() call and curl_multi_fdset. But there is a newer approach with curl_multi_wait. In this approach libcurl checks the readiness of IO-operations by itself using an efficient underlying system API (like epoll on Linux).

The second and harder way is to use the curl_multi_socket_action function. It also requires creating a curl_multi instance and filling it with curl_easy instances. But it is your responsibility to watch the underlying sockets for readiness for read/write operations. You have to maintain some event-loop by yourself. Having some event-loop, you wait until sockets become ready for read and/or write and call curl_multi_socket_action for the appropriate sockets. You also check for the completion of HTTP-requests by calling curl_multi_info_read.

Some Disclaimers

We won't give a detailed description of every application in our demo; it would just take too much time. We will provide some description of what an application does and why it does it that way. Please refer to the source code, the RESTinio docs or the libcurl docs for details. And feel free to ask questions in comments.

Please note that the code we show is in no way production-ready. There is no error checking or error handling. This code is written just for demo purposes.

There are also some important notes about the usage of libcurl:

  • we didn't have experience with curl_multi (though we have used curl_easy in the past). We studied how to use curl_multi while preparing this demo. Because of that there may be simpler and more efficient ways to achieve the same results;
  • we use the "naked" libcurl plain-C API without any external C++ wrappers. One reason for that is the limit of time: we simply didn't have enough time to look around, study some C++ wrappers and look under their hoods to understand how they can be used. Another reason is our desire to have full control over libcurl. If you already have a useful C++ wrapper around curl_multi your experience of working with libcurl may be different.

An Explanation Of bridge_server_1

The simplest application that uses curl_multi is bridge_server_1. There are two work threads in it. The main thread serves RESTinio's HTTP-server. An additional separate thread performs outgoing HTTP-requests to delay_server (we will call this thread the curl-thread).

When the main thread receives an incoming HTTP-request it wraps it into an instance of a request_info_t object (see the definition here) and passes this object to the curl-thread via a simple, custom-made thread-safe queue (see the implementation here).
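Such a thread-safe queue can be sketched in a few dozen lines of standard C++. Everything below is an illustrative assumption rather than the demo's actual code; only the status values (empty/extracted/closed) mirror the ones used in this post's code fragments:

```cpp
#include <deque>
#include <mutex>
#include <utility>

// A minimal thread-safe queue in the spirit of the demo's
// request_info_queue_t. The producer (main thread) pushes items;
// the consumer (curl-thread) periodically extracts all of them at once.
template <typename T>
class simple_mt_queue_t {
public:
   enum class status_t { empty, extracted, closed };

   void push(T v) {
      std::lock_guard<std::mutex> lock{m_lock};
      m_queue.push_back(std::move(v));
   }

   void close() {
      std::lock_guard<std::mutex> lock{m_lock};
      m_closed = true;
   }

   // Grab all pending items under the lock, then call the handler
   // for each of them outside the lock.
   template <typename Handler>
   status_t extract_all(Handler && handler) {
      std::deque<T> grabbed;
      {
         std::lock_guard<std::mutex> lock{m_lock};
         if(m_closed) return status_t::closed;
         grabbed.swap(m_queue);
      }
      if(grabbed.empty()) return status_t::empty;
      for(auto & item : grabbed) handler(item);
      return status_t::extracted;
   }

private:
   std::mutex m_lock;
   std::deque<T> m_queue;
   bool m_closed{false};
};
```

In the demo the queue holds request_info_t objects and the handler turns each of them into a new curl_easy instance.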

The curl-thread periodically checks this queue and extracts all new requests. Every new request is transformed into a new curl_easy instance and is added to the single curl_multi instance. The curl-thread then calls the curl_multi_perform function if there are any requests in progress and checks for the completion of requests. It is done by this code:

// If there are active operations or we have extracted some new
// operations then curl_multi_perform() must be called.
if(0 != still_running ||
      request_info_queue_t::status_t::extracted == status) {
   curl_multi_perform(curlm, &still_running);
   // If there are completed operations they should be finished.
   check_curl_op_completion(curlm);
}

The calls to curl_multi_info_read are performed in the check_curl_op_completion helper function.

After the invocation of curl_multi_perform and curl_multi_info_read we should decide when to call them next time. If there are some requests in progress then we should call curl_multi_perform as soon as some of the underlying sockets become ready for the appropriate IO-operation. We call curl_multi_wait for that purpose, but limit the waiting time to 50ms. It is necessary to check for the presence of new requests periodically.

If there are no requests in progress then we simply sleep for 50ms and then check for new requests.

All waiting actions are performed in the following lines:

// We should call curl_multi_wait() if there are active operations.
if(0 != still_running) {
   curl_multi_wait(curlm, nullptr, 0, 50 /*ms*/, nullptr);
}
else {
   // There are no active operations. We will sleep for some time.
   std::this_thread::sleep_for(std::chrono::milliseconds{50});
}

When check_curl_op_completion finds a completed outgoing request it initiates a response to the corresponding incoming request (this is done in the complete_request_processing function). There is an important point: an incoming HTTP-request is accepted on the context of the main thread, but the response to it is created on the context of the separate curl-thread. And it is perfectly fine because RESTinio delegates all IO-operations related to the HTTP-response to the work thread on which RESTinio is running (e.g. to the main thread in our example).

Obvious Drawbacks Of bridge_server_1 Implementation

There are two obvious drawbacks of this simple implementation of bridge_server_1:

  1. If there are no new requests the curl-thread still wakes up 20 times per second. That can be a problem for a resource-constrained and/or mobile device.
  2. If there is no current work then the curl-thread checks for new requests every 50ms. It means that the latency of request processing is increased (by 50ms in the worst case).

In some cases these drawbacks can be inappropriate. Our bridge_server_1_pipe shows a simple solution.

But in some cases, like a heavily loaded server-side application with response times greater than a dozen seconds, these drawbacks will be negligible.

A Few Words About bridge_server_1_pipe

Another example, bridge_server_1_pipe, shows an easy way of removing the drawbacks of bridge_server_1 by using an additional Unix pipe for notifications from the main thread to the curl-thread.

bridge_server_1_pipe uses the ability to pass additional handles to the curl_multi_wait function. curl_multi_wait returns not only when some of curl's own sockets become ready for IO-operations, but also when user-supplied handles become ready. This allows us to create a Unix pipe and pass its read-end handle to the curl_multi_wait function. When there is data to read from this pipe, curl_multi_wait returns and we can detect the presence of data in the pipe. It looks like this:

int still_running{0};

while(true) {
   curl_waitfd notify_fd;
   notify_fd.fd = queue.read_pipefd();
   notify_fd.events = CURL_WAIT_POLLIN;
   notify_fd.revents = 0;

   // Wait for IO-events.
   int numfds{0};
   curl_multi_wait(curlm, &notify_fd, 1, 5000 /*ms*/, &numfds);

   if(numfds && 0 != notify_fd.revents) {
      // There is data in the notification queue.
      // New items should be extracted.
      auto status = try_extract_new_requests(queue, curlm);
      if(request_info_queue_t::status_t::closed == status)
         // Our work should be finished.
         return;
   }

   // If there are active operations or extracted items then
   // curl_multi_perform must be called.
   if(still_running || numfds) {
      curl_multi_perform(curlm, &still_running);
      // If there are completed operations they should be finished.
      check_curl_op_completion(curlm);
   }
}
An important change should also be made on the main thread: if a new request is stored into an empty queue then some data should be written into the write-end of the notification pipe. This is done in the modified version of the thread-safe queue.
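The pipe trick itself can be sketched with plain POSIX calls. This is an illustrative sketch, not the demo's code; the class and method names (pipe_notificator_t, notify, eat_notification) are ours, and error handling is intentionally omitted:

```cpp
#include <unistd.h>
#include <poll.h>
#include <cstdint>

// A one-byte-per-notification Unix pipe. The read-end is handed to
// curl_multi_wait() as an extra fd; the producer writes a byte when
// the first item lands in an empty queue.
class pipe_notificator_t {
public:
   pipe_notificator_t() { (void)::pipe(m_fds); }
   ~pipe_notificator_t() { ::close(m_fds[0]); ::close(m_fds[1]); }

   // The read-end to be passed to curl_multi_wait as notify_fd.fd.
   int read_pipefd() const { return m_fds[0]; }

   // Called by the producer (the main thread in the demo).
   void notify() {
      const std::uint8_t byte{1};
      (void)::write(m_fds[1], &byte, 1);
   }

   // Called by the consumer after curl_multi_wait reported readiness.
   void eat_notification() {
      std::uint8_t byte;
      (void)::read(m_fds[0], &byte, 1);
   }

   // Non-blocking check: is there an unconsumed notification?
   bool has_pending() const {
      pollfd p{m_fds[0], POLLIN, 0};
      return 1 == ::poll(&p, 1, 0);
   }

private:
   int m_fds[2];
};
```

Writing only when the queue transitions from empty to non-empty keeps the pipe from filling up under load while still waking the curl-thread exactly when new work appears.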

An Explanation Of bridge_server_2

bridge_server_2 uses a different working scheme than bridge_server_1*. It uses the same pool of work threads to serve incoming and outgoing HTTP-requests. It means that RESTinio and libcurl share a common working context -- a thread pool created inside the restinio::run call (it can be seen in the run_server function). RESTinio and libcurl also share a common event-loop which is hidden inside Asio's io_context object.

Some Words About curl_multi_socket_action And Related Things

When we decide to use curl_multi_socket_action we must do at least two things for libcurl:

  1. Serve timers for libcurl. A special callback should be installed into the curl_multi instance as CURLMOPT_TIMERFUNCTION. libcurl will call this callback when it wants to limit the time of some action. When the timer expires we should call curl_multi_socket_action with the special value CURL_SOCKET_TIMEOUT.
  2. Serve an event-loop to detect the readiness of the underlying sockets. When we detect that a socket is ready for read we should call curl_multi_socket_action for this socket with the value CURL_CSELECT_IN. When we detect that a socket is ready for write we should call curl_multi_socket_action for this socket with the value CURL_CSELECT_OUT.

There is no problem with serving timers for libcurl. But the second thing, related to sockets, is a tricky one.

The main question: how do we know which sockets should be controlled by our event-loop?

The answer: libcurl tells us about it by calling a special callback which should be installed as CURLMOPT_SOCKETFUNCTION. libcurl calls this callback from time to time and passes two important items into it (among other parameters):

  • the handle of the socket to be watched;
  • the operation type which should be watched for that socket. For example, if libcurl wants to detect readiness for read it will call this callback with the CURL_POLL_IN or CURL_POLL_INOUT value.


In the bridge_server_2 example we have a problem: if an underlying socket is created by libcurl then how can we use it with Asio's event loop?

We solve this problem in the way shown in the standard libcurl example asiohiper.cpp. The solution is: we provide the sockets for libcurl ourselves.

To do that it is necessary to set two options for every new curl_easy instance:

  • CURLOPT_OPENSOCKETFUNCTION;
  • CURLOPT_CLOSESOCKETFUNCTION.

We provide these callbacks in our example. When libcurl wants a new socket we create a new instance of asio::ip::tcp::socket and return its handle from the OPENSOCKETFUNCTION callback. We also store this instance in a special dictionary. When libcurl calls the CLOSESOCKETFUNCTION callback we find the socket in this dictionary, close it and remove it.

Sockets Can Live Longer Than curl_easy Instances

It is necessary to mention yet another feature of libcurl, curl_easy and curl_multi: we specify OPENSOCKETFUNCTION and CLOSESOCKETFUNCTION for a curl_easy instance, but a socket created by the OPENSOCKETFUNCTION callback can outlive the related curl_easy instance.

That is because libcurl holds an internal socket pool. If there is no socket available in the pool to serve an HTTP-request then libcurl creates a new socket. But if there is a free socket connected to the appropriate target host then libcurl will reuse it. And if you are working with a predefined set of external systems this is normally always the case.

When libcurl finishes processing an HTTP-request we destroy the curl_easy instance, but libcurl doesn't destroy the underlying socket. The socket is returned to the pool and can be reused later for another request to the same target host.

Because of this we handle and hold sockets separately from curl_easy instances.

How Does bridge_server_2 Work?

There is an instance of the curl_multi_performer_t class in the program. This instance holds the curl_multi instance, a timer for serving libcurl's timeouts, a dictionary of created sockets, and the implementation of the callbacks described above.

When RESTinio receives a new incoming HTTP-request, information about this request is passed to the curl_multi_performer_t::process_request method and the following actions are performed:

  • a new curl_easy instance is created and tuned (all necessary options and callbacks are set);
  • this curl_easy instance is added to the curl_multi instance.

As a result libcurl starts to perform new outgoing request.

During processing of new outgoing request the following can happen:

  • libcurl can call the OPENSOCKETFUNCTION callback to create a socket for this request; libcurl then calls the SOCKETFUNCTION callback to tell us what needs to be watched. At the beginning of request processing it will be CURL_POLL_OUT or CURL_POLL_INOUT;
  • we tell Asio to watch the readiness of the socket for a write operation;
  • when Asio detects that the socket is ready it calls our callback (implemented as the curl_multi_performer_t::event_cb method). We call curl_multi_socket_action inside event_cb;
  • libcurl can call SOCKETFUNCTION from inside curl_multi_socket_action. For example, if libcurl has sent the whole request and wants to receive the response it will pass CURL_POLL_IN or CURL_POLL_INOUT to the callback. We then tell Asio to watch the readiness of the socket for a read operation. Remember: we are still inside curl_multi_socket_action which was called from event_cb...
  • after calling curl_multi_socket_action we also call curl_multi_info_read to check for the completion of any requests. For every completed request a response to the corresponding incoming request is generated.

There is also a TIMERFUNCTION callback implemented by the curl_multi_performer_t::timer_function method. This callback arms an Asio timer. When this timer expires we call curl_multi_socket_action with the special CURL_SOCKET_TIMEOUT value and then check for completed requests by calling curl_multi_info_read.

That's almost all. But it is necessary to mention one more thing: curl_multi is not a thread-safe object, yet we work with it on a thread pool. To protect curl_multi we use Asio's strand object and schedule all actions on the curl_multi instance only via this strand (see an example here).


We have implemented two very simple schemes without handling some edge cases (like error handling or more precise detection of the moments when curl_multi_info_read should be called). But our goal was to create simple and understandable examples of the integration of libcurl with RESTinio. We found that the official documentation for libcurl and the examples in the libcurl distribution leave too many open questions to developers. And there is not much information about curl_multi on the Internet. So we hope that our examples and our explanations will be useful for somebody.
