Part 9 - How ASGI extends WSGI concepts
As we’ve seen, WSGI works really well for the classical HTTP request/response cycle: a client sends a single request, the server creates some useful response and sends it back. One request, one response, end of communication. If further information should be exchanged, a new request/response cycle must be started.
But what if you want a more long-lived data exchange (e.g. using WebSocket), or a single request should deliver several resources (e.g. using HTTP server push)? Those ideas just aren’t possible with WSGI, but they are becoming increasingly important on the modern web.
ASGI to the rescue
That’s why, starting in 2015, a successor to WSGI was developed: ASGI (the Asynchronous Server Gateway Interface). The aim was to be a kind of superset of WSGI: it should support the classic request/response cycle, but also the other approaches mentioned above.
Just like in WSGI, there is a server part and an application part to ASGI. A big difference to WSGI is that the communication between those two parts uses Python’s asyncio and async/await syntax. I’m not going to go into details about that here, but it allows you to use cooperative concurrency in the server (based on coroutines instead of e.g. threads), which usually also comes with some nice speed improvements over WSGI web applications.
The flow looks roughly like this:
The events mentioned in the graph are dicts of a specific format that can be used to exchange all kinds of messages between the ASGI server and app. We’ll look at them in more detail in the next post, where we actually implement our own little ASGI server.
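As a small preview, a rough sketch of the event an ASGI server hands to the application for an incoming HTTP request body could look like this (the exact fields are defined by the ASGI specification):

    {
        "type": "http.request",
        "body": b"some request data",
        "more_body": False,
    }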
In the graph you can also see that the entire ASGI pipeline is tightly integrated into a Python web server. When we looked at WSGI we saw that this wasn’t always the case. In fact, it was very common for the WSGI server to be implemented in C (e.g. as the Apache extension mod_wsgi, or in a standalone web server like uWSGI). This is currently not the case for ASGI: all serious ASGI servers in the space are actually hybrid web+ASGI servers implemented purely in Python.
A simple ASGI application
So what does this interface between server and app look like in detail? A very basic dummy ASGI application that would work for HTTP would be the following.
async def application(scope, receive, send):
    # make sure this is an HTTP connection
    assert scope["type"] == "http"

    # send the response status and headers first ...
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [
            [b"Content-Type", b"text/plain"],
        ],
    })
    # ... then the response body
    await send({
        "type": "http.response.body",
        "body": b"Hello, World!",
        "more_body": False,
    })
An ASGI application needs to take 3 inputs: scope, receive and send.
The scope is a dict with information about the connection (very similar to the environ dict in WSGI). As you can see, it also carries a type field that informs the application about the kind of connection made by the ASGI server. This is important because the kinds of events/messages exchanged between server and application are quite different for e.g. HTTP and WebSocket.
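To make that a bit more concrete, the scope for a plain HTTP request might look roughly like this (a sketch of the typical fields; the exact contents depend on the server and the request):

    {
        "type": "http",
        "http_version": "1.1",
        "method": "GET",
        "path": "/",
        "query_string": b"",
        "headers": [
            [b"host", b"localhost:8000"],
        ],
    }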
The send and receive inputs are async callables that are used to exchange events between the ASGI server and application. In the example you can see that our application doesn’t actually use the receive callable: it does not care about inputs. Instead it just goes straight ahead with sending 2 events back to the server. The first is very similar to the start_response(status, headers) call in WSGI and sends the response status and headers; the second one sends the response body, and the more_body field being False tells the server that this is all the body that the application wanted to send.
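If you want to try this out yourself, any ASGI server can run it. Assuming the code above is saved as app.py, uvicorn (one popular ASGI server) would serve it with:

    uvicorn app:application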
ASGI event flow
The receive and send callables can be awaited as many times as is necessary for a particular protocol. For a basic HTTP request/response cycle the application may await receive() until it has gathered the entire request body, then create a response and send its body out in chunks via await send(event) as many times as needed.
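As a sketch of that flow (not a complete implementation; error and disconnect handling is left out), an application that echoes the request body back in chunks could look like this:

    async def echo_application(scope, receive, send):
        # only handle plain HTTP connections in this sketch
        assert scope["type"] == "http"

        # keep awaiting receive() until the full request body has arrived
        body = b""
        while True:
            event = await receive()
            body += event.get("body", b"")
            if not event.get("more_body", False):
                break

        await send({
            "type": "http.response.start",
            "status": 200,
            "headers": [[b"Content-Type", b"application/octet-stream"]],
        })

        # send the body back out in fixed-size chunks
        chunk_size = 1024
        for start in range(0, len(body), chunk_size):
            await send({
                "type": "http.response.body",
                "body": body[start:start + chunk_size],
                "more_body": start + chunk_size < len(body),
            })

        # an empty request body still needs one final body event
        if not body:
            await send({"type": "http.response.body", "body": b"", "more_body": False})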
So for the classic HTTP case this doesn’t look terribly different compared to WSGI.
But for e.g. a WebSocket implementation you could imagine the following sequence, which realizes a kind of indefinite subscription service.
The important thing to notice here is that a response can be “pushed” to the client without the client making an explicit request for it. This would definitely not be possible with the classic HTTP request/response cycle. A practical example of this would be some kind of messaging service, where new messages are continuously pushed out from the server to the client as soon as they arrive on the server.
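To make that idea a bit more tangible, here is a rough sketch of such a push-style WebSocket application (disconnect handling is omitted, and the dummy message source just stands in for e.g. a message queue):

    import asyncio
    from datetime import datetime

    async def new_messages():
        # dummy message source: in a real service these would come from
        # e.g. a message queue; here we simply emit a timestamp every second
        while True:
            await asyncio.sleep(1)
            yield f"server time is {datetime.now():%H:%M:%S}"

    async def application(scope, receive, send):
        # only handle WebSocket connections in this sketch
        assert scope["type"] == "websocket"

        # the client opens the connection, the application accepts it
        event = await receive()
        assert event["type"] == "websocket.connect"
        await send({"type": "websocket.accept"})

        # push messages to the client without it ever asking for them
        async for message in new_messages():
            await send({"type": "websocket.send", "text": message})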
Notes
You can look at ASGI from several different perspectives:
1. it allows the implementation of protocols beyond classic HTTP
2. it allows you to use concurrency via cooperative multitasking (i.e. coroutines), which can have speed advantages over e.g. threads (especially when your application scales to a lot of concurrent users)
3. it allows you to easily integrate all kinds of asyncio-compatible libraries into your ASGI application (see the small sketch below)
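As a minimal sketch of that last point, asyncio.sleep here stands in for any awaitable work, such as a query made with an async database driver or a request made with an async HTTP client:

    import asyncio

    async def application(scope, receive, send):
        assert scope["type"] == "http"

        # while this coroutine waits, the server is free to serve other requests;
        # in a real application this would be an asyncio-compatible library call
        await asyncio.sleep(1)

        await send({
            "type": "http.response.start",
            "status": 200,
            "headers": [[b"Content-Type", b"text/plain"]],
        })
        await send({
            "type": "http.response.body",
            "body": b"done waiting!",
            "more_body": False,
        })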
All of those are exciting, and ASGI has seen a surge of adoption throughout the community.
Here is an article by encode (the authors of many important libraries in the ASGI field) introducing ASGI with a special focus on the concurrency model it enables and the speed improvements one can get out of that. And here is a post that talks about ASGI more from the perspective of how it fits into the new async/await syntax in Python. It also gives some examples and an overview of important libraries in the field. And this ASGI talk is quite informative on all of the above-mentioned points, with a special focus on the parallels between WSGI and ASGI.
Hopefully this has given you a good overview of the topic and you’re now excited to get your hands dirty in the next post, where we’ll actually implement our own, very basic, ASGI server for HTTP.