Inferring Router

Source module: fastapi_utils.inferring_router


Using response_model

One of the few places where FastAPI can’t infer everything it needs purely from type hints is the response_model to use for a specific endpoint.

from fastapi import FastAPI

app = FastAPI()


@app.get("/default")
def get_resource(resource_id: int) -> str:
    # the response will be serialized as a JSON number, *not* a string
    return resource_id


def get_response_schema(openapi_spec, endpoint_path):
    responses = openapi_spec["paths"][endpoint_path]["get"]["responses"]
    return responses["200"]["content"]["application/json"]["schema"]


openapi_spec = app.openapi()
assert get_response_schema(openapi_spec, "/default") == {}

The reason for this is that you may want to return an object of a different type than the endpoint’s response_model, but still have it serialized as though it were an instance of the response_model.

Even when the returned object and the response_model are pydantic models, specifying a response_model ensures that no extra nested attributes will be included. This could be important for security reasons if the returned object has sensitive fields you don’t want to include in the response.
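Here is a minimal sketch of that filtering behavior using pydantic directly (the model names `UserInDB` and `UserOut` are hypothetical); FastAPI performs the same kind of narrowing when it serializes a returned object through a response_model:

```python
from pydantic import BaseModel


class UserInDB(BaseModel):
    username: str
    hashed_password: str  # sensitive field that should never leave the server


class UserOut(BaseModel):
    username: str


db_user = UserInDB(username="alice", hashed_password="s3cret")

# Re-validating through the narrower model drops the extra field,
# just as FastAPI does when response_model=UserOut is specified.
public = UserOut(**dict(db_user))
assert "hashed_password" not in dict(public)
```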

However, this can result in surprising errors when you refactor an endpoint to return a different model but forget to update the specified response_model, and FastAPI serializes (or fails while attempting to serialize) the response as an undesired type.
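A sketch of that failure mode, using hypothetical models and pydantic directly: the refactored endpoint now returns `ItemV2`, but a stale response_model of `ItemV1` cannot validate it, which is roughly what FastAPI runs into when serializing the response:

```python
from pydantic import BaseModel, ValidationError


class ItemV1(BaseModel):
    name: str


class ItemV2(BaseModel):  # the refactored return type
    title: str


new_item = ItemV2(title="widget")

# Validating the new object against the stale model fails,
# because the required "name" field is missing.
try:
    ItemV1(**dict(new_item))
    raised = False
except ValidationError:
    raised = True
assert raised
```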

Inferring response_model

If you know that you want to use the annotated return type as the response_model (for serialization purposes or for OpenAPI spec generation), you can use a fastapi_utils.inferring_router.InferringRouter in place of an APIRouter, and the response_model will be automatically extracted from the annotated return type.

As you can see below, by default, no response schema is generated when you don’t specify a response_model:

However, using InferringRouter, a response schema is generated by default:

from fastapi import FastAPI

from fastapi_utils.inferring_router import InferringRouter

app = FastAPI()


@app.get("/default")
def get_resource(resource_id: int) -> str:
    # the response will be serialized as a JSON number, *not* a string
    return resource_id


router = InferringRouter()


@router.get("/inferred")
def get_inferred_resource(resource_id: int) -> str:
    # thanks to InferringRouter, the response will be serialized as a string
    return resource_id


app.include_router(router)


def get_response_schema(openapi_spec, endpoint_path):
    responses = openapi_spec["paths"][endpoint_path]["get"]["responses"]
    return responses["200"]["content"]["application/json"]["schema"]


openapi_spec = app.openapi()
assert get_response_schema(openapi_spec, "/default") == {}
assert get_response_schema(openapi_spec, "/inferred")["type"] == "string"

Behind the scenes, what happens is precisely equivalent to passing the annotated return type as the response_model argument to the endpoint decorator. The annotated return type is therefore also used for response serialization and validation.

Note that InferringRouter has precisely the same API for all methods as a regular APIRouter, and you can still manually override the provided response_model if desired.