Intro#

FastAPI is a Python framework for building APIs.

Find out more here:

# requests is used throughout this page to query the example APIs
import requests

Run application#

To keep all notebooks on this site runnable, I run the applications in Docker containers. This keeps execution isolated and prevents server processes from blocking notebook cells. Below is everything required to run the application.

For detailed instructions on running a FastAPI application, please refer to the specific page.


If you haven’t already built the Docker image for your FastAPI application, you’ll need to do so.

!docker build -t fastapi_experiment run_application_files &> /dev/null

Create a file containing the application you want to play with.

%%writefile /tmp/get_started.py
from fastapi import FastAPI

my_first_app = FastAPI()

@my_first_app.get("/")
def say_hello():
    return "hello"
Overwriting /tmp/get_started.py

Now you need to start the container. There are a few important options to consider:

  • -v option: Mounts the file containing your program into the container, allowing you to substitute a specific application file without rebuilding the image.

  • Command to execute: uvicorn --host 0.0.0.0 --reload get_started:my_first_app

    • --host 0.0.0.0: Makes the server listen on all interfaces, so the API is reachable from the host machine.

    • --reload: Automatically applies changes to the API whenever you modify the application file.

!docker run --rm -itd \
    --name test_container \
    -v /tmp/get_started.py:/get_started.py \
    -p 8000:8000 \
    fastapi_experiment \
    uvicorn --host 0.0.0.0 --reload get_started:my_first_app >/dev/null

Now you can test that everything is working correctly by making a request to the newly created API.

requests.get("http://localhost:8000/").content
b'"hello"'

We received a response that matches the code we just wrote.
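
FastAPI also serves an automatically generated OpenAPI schema, which gives another quick way to check that the application is up. The call below assumes the default /openapi.json path hasn’t been disabled.

# the schema is generated automatically from the declared endpoints
requests.get("http://localhost:8000/openapi.json").json()["paths"].keys()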

Finally, don’t forget to stop the container when you’re finished.

!docker stop test_container
test_container

Requests#

There are numerous ways to organize requests to an application. This section presents different options. For more details, check the specific page.


The following cell defines an application that requires query parameters as part of the request. These parameters, a: int and b: str, are specified in the function decorated as an endpoint. The function returns a message that corresponds to the provided arguments.

%%writefile /tmp/get_started.py
from fastapi import FastAPI

my_first_app = FastAPI()

@my_first_app.get("/")
def index(a: int, b: str):
    return f"a = {a}, b = {b}"
Overwriting /tmp/get_started.py

Now we need to make a request to the API with the query parameters a and b defined.

requests.get("http://localhost:8000/?a=3&b=name").content
b'"a = 3, b = name"'

As a result, we received a response that contains the specified inputs.
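
Instead of building the query string by hand, the requests library can encode it for you through the params argument; the call below is equivalent to the previous one.

# requests builds the query string ?a=3&b=name from the dict
requests.get("http://localhost:8000/", params={"a": 3, "b": "name"}).content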

Responses#

FastAPI serializes the API output for you: endpoints can return plain Python objects or use special response wrappers, and return-type annotations also affect the result. For more details check the special page.


The following example shows an API that returns a simple Python dictionary as its output.

%%writefile /tmp/get_started.py
from fastapi import FastAPI

my_first_app = FastAPI()

@my_first_app.get("/")
def get_dict():
    return {"a": "value", "b": 32}
Overwriting /tmp/get_started.py

If we request this API, we receive the corresponding JSON data in response.

requests.get("http://localhost:8000").content
b'{"a":"value","b":32}'
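
When you need more control over the output, for example a custom status code or headers, FastAPI provides response wrappers. Below is a minimal sketch using JSONResponse; it is an illustration and not part of the container set up above.

from fastapi import FastAPI
from fastapi.responses import JSONResponse

my_first_app = FastAPI()

@my_first_app.get("/custom")
def get_custom():
    # wrap the payload to set the status code and a header explicitly
    return JSONResponse(
        content={"a": "value", "b": 32},
        status_code=200,
        headers={"X-Example": "demo"},
    )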

Logging#

Uvicorn and FastAPI generate their own logs. It’s useful to configure these logs to follow the same rules as the logs for the rest of the program.

For details check this page.
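
As a rough sketch of such a configuration (an assumption, not the setup used on this site): uvicorn exposes its loggers under the names uvicorn, uvicorn.error and uvicorn.access, so you can attach your own handler and format to them.

import logging

# handler and format shared with the rest of the program
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
)

for name in ("uvicorn", "uvicorn.error", "uvicorn.access"):
    logger = logging.getLogger(name)
    logger.handlers = [handler]
    logger.propagate = False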


The following cell shows one way to check the logs of the FastAPI application and what typical FastAPI logs look like.

!docker logs test_container | tail -n 10
INFO:     172.17.0.1:51846 - "GET /?a=3&b=name HTTP/1.1" 200 OK
WARNING:  StatReload detected changes in 'get_started.py'. Reloading...
INFO:     Shutting down
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
INFO:     Finished server process [9]
INFO:     Started server process [11]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     172.17.0.1:33888 - "GET / HTTP/1.1" 200 OK

Now let’s try to send a request to the application and check its logs after the request.

requests.get("http://localhost:8000/")
<Response [200]>
!docker logs test_container | tail -n 10
WARNING:  StatReload detected changes in 'get_started.py'. Reloading...
INFO:     Shutting down
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
INFO:     Finished server process [9]
INFO:     Started server process [11]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     172.17.0.1:33888 - "GET / HTTP/1.1" 200 OK
INFO:     172.17.0.1:49938 - "GET / HTTP/1.1" 200 OK

A new line corresponding to this request has appeared in the application logs.

Cache#

A common way to optimize applications is by using caching, which involves storing the outputs of certain processes for a set period. For FastAPI, the fastapi_cache library provides a convenient way to implement caching, simplifying the process significantly.

For more details check corresponding page.


Consider an example of a very basic program using fastapi_cache. Note: running this program requires setup described in the specific page.

By design, this program returns a random number in response to a request on the root path.

%%writefile fastapi/cache_files/app.py
from random import random

from fastapi import FastAPI

from fastapi_cache import FastAPICache
from fastapi_cache.decorator import cache
from fastapi_cache.backends.redis import RedisBackend

from redis import asyncio as aioredis

redis = aioredis.from_url("redis://localhost:6380")
FastAPICache.init(RedisBackend(redis), prefix="fastapi-cache")

app = FastAPI()

@app.get("/")
@cache(expire=600)
def index():
    return random()
Overwriting fastapi/cache_files/app.py

The following cell runs the API.

%%bash
cd fastapi/cache_files/
docker compose up -d &> /dev/null

Now we can try to access its endpoint.

%%bash
curl -s localhost:8000
echo
curl -s localhost:8000
0.23100308576336648
0.23100308576336648

Subsequent requests return the same value as the first, since the result was cached. Finally, shut down the compose environment when you’re finished.

%%bash
cd fastapi/cache_files/
docker compose down &> /dev/null

Asynchrony#

FastAPI is a framework whose main goal is to build high-performance applications. It achieves this through asynchrony: the server can accept new requests while others are still being processed, so they are handled in parallel.


Consider a simple experiment that demonstrates the concurrent nature of a FastAPI application. The application simply prints a few messages to stdout, each tagged with the req_id passed in the request.

%%writefile /tmp/get_started.py
from time import sleep
from fastapi import FastAPI

my_first_app = FastAPI()

@my_first_app.get("/")
def endpoint(req_id: int):
    for i in range(3):
        sleep(0.1)
        print(f"request {req_id}, index {i}")
Overwriting /tmp/get_started.py

The following cell makes two concurrent requests to that application.

import httpx
import asyncio

client = httpx.AsyncClient()

await asyncio.gather(
    client.get("http://localhost:8000/?req_id=1"),
    client.get("http://localhost:8000/?req_id=2")
)
[<Response [200 OK]>, <Response [200 OK]>]

In the server’s stdout, messages corresponding to different requests are interleaved, meaning that they were processed in parallel.

!docker logs test_container | tail -n 10
INFO:     Waiting for application startup.
INFO:     Application startup complete.
request 1, index 0
request 2, index 0
request 1, index 1
request 2, index 1
request 1, index 2
INFO:     172.17.0.1:33508 - "GET /?req_id=1 HTTP/1.1" 200 OK
request 2, index 2
INFO:     172.17.0.1:33516 - "GET /?req_id=2 HTTP/1.1" 200 OK
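
The endpoint above is a plain def function, which FastAPI runs in a thread pool, so the blocking sleep does not stop other requests from being served. An async def endpoint achieves the same concurrency on the event loop; below is a minimal sketch of such a variant, not part of the experiment above.

import asyncio
from fastapi import FastAPI

my_first_app = FastAPI()

@my_first_app.get("/")
async def endpoint(req_id: int):
    for i in range(3):
        # non-blocking sleep: the event loop can serve other requests meanwhile
        await asyncio.sleep(0.1)
        print(f"request {req_id}, index {i}")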