Uvicorn: Complete Guide to a High‑Performance ASGI Server
What Is Uvicorn
Uvicorn is a modern, lightweight, high‑performance ASGI server written in Python. It is designed specifically to run asynchronous web applications and is among the fastest servers available for Python projects. When installed with its optional speedups, Uvicorn builds on the uvloop and httptools libraries, delivering strong performance under large numbers of concurrent requests.
The server supports modern web protocols and technologies, including HTTP/1.1, WebSocket, and can work with any ASGI‑compatible frameworks such as FastAPI, Starlette, Django Channels, and others.
Main Advantages and Features
Architectural Advantages
Uvicorn represents a significant step forward in Python web server technology thanks to the following features:
Core‑level Asynchrony – uses an event‑driven architecture that can handle thousands of simultaneous connections with minimal resource consumption.
High Performance – thanks to integration with uvloop (fast asyncio implementation) and httptools (fast HTTP parser), Uvicorn delivers performance comparable to Go and Node.js servers.
Support for Modern Protocols – native support for WebSocket, HTTP/1.1, and the ability to work with HTTP/2 via additional proxies.
Minimal System Requirements – optimized to run on both local developer machines and production servers.
Technical Features
Uvicorn implements the ASGI 3.0 specification, making it compatible with a wide range of modern Python frameworks. The server supports automatic code‑change detection during development, multi‑process mode for production, and a flexible configuration system.
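The ASGI 3.0 interface that Uvicorn speaks is small: an application is just an async callable taking `scope`, `receive`, and `send`. As a minimal sketch (no framework, names illustrative), the following app responds to any HTTP request, and the demo drives it with stub `receive`/`send` callables to show the message flow without starting a server:

```python
import asyncio

# A minimal application written directly against the ASGI 3.0 callable
# interface that Uvicorn implements. Frameworks like FastAPI and
# Starlette produce apps with exactly this shape under the hood.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello, ASGI"})

# Drive the app with stub callables to inspect the ASGI messages it
# emits; a real server would do this per connection.
async def demo():
    sent = []

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        sent.append(message)

    await app({"type": "http", "method": "GET", "path": "/"}, receive, send)
    return sent

messages = asyncio.run(demo())
print(messages[0]["status"], messages[1]["body"])  # → 200 b'Hello, ASGI'
```

Saved as `main.py`, this bare callable can be served directly with `uvicorn main:app`, no framework required.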
ASGI vs WSGI Comparison
Understanding the differences between ASGI and WSGI is critical when choosing the right server:
| Feature | WSGI (Web Server Gateway Interface) | ASGI (Asynchronous Server Gateway Interface) |
|---|---|---|
| Execution Model | Synchronous, blocking | Asynchronous, non‑blocking |
| async/await Support | No | Full support |
| WebSocket | Not supported | Native support |
| HTTP/2 | Limited support | Full support |
| Background Tasks | Require separate processes | Built‑in support |
| Performance | Limited by thread count | High thanks to event loop |
| Server Examples | Gunicorn, uWSGI, mod_wsgi | Uvicorn, Hypercorn, Daphne |
| Frameworks | Flask, Django (≤ 3.0), Bottle | FastAPI, Starlette, Django 3.0+, Quart |
ASGI is an evolutionary upgrade of WSGI, designed for modern web applications that require high performance and real‑time capabilities.
Installation and Initial Setup
Installation via pip
```shell
pip install uvicorn
```
Installation with Extra Dependencies
```shell
# For maximum performance
pip install "uvicorn[standard]"
# Includes uvloop, httptools, websockets, watchfiles (watchgod in
# older releases), and python-dotenv
```
Minimal Application Example
Create a file main.py:
```python
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Hello from Uvicorn", "status": "working"}

@app.get("/health")
async def health_check():
    return {"status": "healthy"}
```
Running the Server
```shell
# Basic launch
uvicorn main:app

# Launch with auto-reload for development
uvicorn main:app --reload

# Bind to all interfaces
uvicorn main:app --host 0.0.0.0 --port 8000
```
Detailed Overview of Command‑Line Options
Core Launch Parameters
```shell
uvicorn main:app \
    --host 0.0.0.0 \
    --port 8000 \
    --workers 4 \
    --log-level info
```

Note that `--reload` and `--workers` are mutually exclusive: use `--reload` during development and `--workers` in production, not both at once.
Full Options Table
| Option | Description | Default Value |
|---|---|---|
| `--host` | IP address to bind the server | `127.0.0.1` |
| `--port` | Port to listen on | `8000` |
| `--reload` | Auto-reload on file changes | `False` |
| `--workers` | Number of worker processes | `1` |
| `--log-level` | Logging level (trace, debug, info, warning, error, critical) | `info` |
| `--access-log` | Enable access logging | `True` |
| `--proxy-headers` | Trust proxy headers (X-Forwarded-For, X-Forwarded-Proto) | `False` |
| `--root-path` | Root path for the application | `""` |
| `--ssl-keyfile` | Path to SSL private key | `None` |
| `--ssl-certfile` | Path to SSL certificate | `None` |
| `--ssl-version` | TLS protocol version | `ssl.PROTOCOL_TLS_SERVER` |
| `--ssl-ca-certs` | Path to CA certificates | `None` |
| `--ssl-ciphers` | Allowed SSL cipher suites | `TLSv1` |
| `--loop` | Event-loop type (auto, asyncio, uvloop) | `auto` |
| `--http` | HTTP protocol implementation (auto, h11, httptools) | `auto` |
| `--ws` | WebSocket protocol implementation (auto, none, websockets, wsproto) | `auto` |
| `--lifespan` | ASGI lifespan support (auto, on, off) | `auto` |
| `--reload-dir` | Directories to watch for changes | current working directory |
| `--reload-delay` | Delay before restart (seconds) | `0.25` |
Programmatic Uvicorn Launch
Basic Programmatic Launch
```python
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Programmatic Uvicorn launch"}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
Extended Configuration
```python
import uvicorn
from fastapi import FastAPI

app = FastAPI()

if __name__ == "__main__":
    uvicorn.run(
        "main:app",  # an import string is required when reload=True or workers > 1
        host="0.0.0.0",
        port=8000,
        reload=True,
        log_level="info",
        access_log=True,
        workers=1,  # workers > 1 is incompatible with reload=True
        loop="asyncio",
        http="auto",
        ws="auto",
        lifespan="auto",
        ssl_keyfile=None,
        ssl_certfile=None,
        proxy_headers=False,
        forwarded_allow_ips=None,
        root_path="",
        timeout_keep_alive=5,
        timeout_notify=30,
        limit_concurrency=None,
        limit_max_requests=None,
        backlog=2048,
        reload_dirs=None,
        reload_delay=0.25,
        reload_excludes=None,
        reload_includes=None,
    )
```
Integration with Various Frameworks
FastAPI
FastAPI and Uvicorn are built to work together:
```python
from fastapi import FastAPI

app = FastAPI(title="FastAPI + Uvicorn", version="1.0.0")

@app.get("/")
async def root():
    return {"framework": "FastAPI", "server": "Uvicorn"}

@app.get("/items/{item_id}")
async def read_item(item_id: int):
    return {"item_id": item_id, "server": "Uvicorn"}

# Run: uvicorn main:app --reload
```
Starlette
```python
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route

async def homepage(request):
    return JSONResponse({"message": "Starlette + Uvicorn"})

routes = [
    Route("/", homepage),
]

app = Starlette(routes=routes)

# Run: uvicorn main:app --reload
```
Django ASGI
For Django 3.0+:
```python
# asgi.py
import os

from django.core.asgi import get_asgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myproject.settings')

application = get_asgi_application()
```

```shell
# Run Django with Uvicorn
uvicorn myproject.asgi:application --host 0.0.0.0 --port 8000
```
Quart
```python
from quart import Quart, jsonify

app = Quart(__name__)

@app.route("/")
async def hello():
    return jsonify({"message": "Quart + Uvicorn"})

# Run: uvicorn main:app --reload
```
Working with WebSocket
Basic WebSocket with FastAPI
```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect
from fastapi.responses import HTMLResponse

app = FastAPI()

# Minimal placeholder page; a real test page would open a WebSocket
# to /ws from JavaScript
html = """
<!DOCTYPE html>
<html>
  <body>
    <h1>WebSocket Test</h1>
  </body>
</html>
"""

@app.get("/")
async def get():
    return HTMLResponse(html)

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            data = await websocket.receive_text()
            await websocket.send_text(f"Echo: {data}")
    except WebSocketDisconnect:
        print("Client disconnected")
```
Advanced WebSocket with Connection Management
```python
import json
from typing import List

from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

class ConnectionManager:
    def __init__(self):
        self.active_connections: List[WebSocket] = []

    async def connect(self, websocket: WebSocket):
        await websocket.accept()
        self.active_connections.append(websocket)

    def disconnect(self, websocket: WebSocket):
        self.active_connections.remove(websocket)

    async def send_personal_message(self, message: str, websocket: WebSocket):
        await websocket.send_text(message)

    async def broadcast(self, message: str):
        for connection in self.active_connections:
            await connection.send_text(message)

manager = ConnectionManager()

@app.websocket("/ws/{client_id}")
async def websocket_endpoint(websocket: WebSocket, client_id: int):
    await manager.connect(websocket)
    try:
        while True:
            data = await websocket.receive_text()
            message = json.loads(data)
            await manager.send_personal_message(
                f"Client {client_id}: {message['text']}", websocket
            )
            await manager.broadcast(f"Client {client_id} says: {message['text']}")
    except WebSocketDisconnect:
        manager.disconnect(websocket)
        await manager.broadcast(f"Client {client_id} left the chat")
```
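The endpoint above assumes every client sends well-formed JSON of the shape `{"text": "..."}`; a malformed frame would raise inside the loop and drop the connection. A small validation helper (hypothetical, not part of FastAPI or Uvicorn) makes that contract explicit:

```python
import json
from typing import Optional

# Hypothetical helper for the chat protocol used above: clients are
# expected to send JSON objects like {"text": "..."}. Validating the
# frame before using it keeps one bad message from killing the socket.
def parse_chat_message(raw: str) -> Optional[str]:
    """Return the message text, or None if the frame is malformed."""
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return None
    text = payload.get("text") if isinstance(payload, dict) else None
    return text if isinstance(text, str) else None

print(parse_chat_message('{"text": "hi"}'))  # → hi
print(parse_chat_message("not json"))        # → None
```

Inside the endpoint, a `None` result could be answered with an error frame instead of propagating a `json.JSONDecodeError`.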
Production Configuration
Systemd Service
Create the file /etc/systemd/system/myapp.service:
```ini
[Unit]
Description=MyApp Uvicorn Service
After=network.target

[Service]
Type=exec
User=www-data
Group=www-data
WorkingDirectory=/var/www/myapp
ExecStart=/var/www/myapp/venv/bin/uvicorn main:app --host 0.0.0.0 --port 8000 --workers 4
Restart=always
RestartSec=10
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=myapp

[Install]
WantedBy=multi-user.target
```
Enable and start:
```shell
sudo systemctl daemon-reload
sudo systemctl enable myapp
sudo systemctl start myapp
```
Supervisor
Configuration in /etc/supervisor/conf.d/myapp.conf:
```ini
[program:myapp]
command=/var/www/myapp/venv/bin/uvicorn main:app --host 0.0.0.0 --port 8000 --workers 4
directory=/var/www/myapp
user=www-data
autostart=true
autorestart=true
stdout_logfile=/var/log/myapp/uvicorn.log
stderr_logfile=/var/log/myapp/uvicorn.error.log
```
Docker Container
```dockerfile
FROM python:3.11-slim

WORKDIR /app

# Install dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

# Create a non-root user for security
RUN useradd --create-home --shell /bin/bash app && chown -R app:app /app
USER app

# Environment variables
ENV PYTHONPATH=/app
ENV PYTHONUNBUFFERED=1

# Expose port
EXPOSE 8000

# Default command
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "4"]
```
Docker Compose
```yaml
version: '3.8'

services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - PYTHONUNBUFFERED=1
      - DATABASE_URL=postgresql://user:pass@db:5432/mydb
    depends_on:
      - db
    volumes:
      - ./app:/app
    restart: unless-stopped

  db:
    image: postgres:13
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
    volumes:
      - postgres_data:/var/lib/postgresql/data
    restart: unless-stopped

  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
      - ./ssl:/etc/nginx/ssl
    depends_on:
      - web
    restart: unless-stopped

volumes:
  postgres_data:
```
Reverse Proxy Configuration
Nginx Configuration
```nginx
upstream uvicorn {
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    server_name example.com;

    # Redirect to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /etc/nginx/ssl/cert.pem;
    ssl_certificate_key /etc/nginx/ssl/key.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    client_max_body_size 64M;

    location / {
        proxy_pass http://uvicorn;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_redirect off;
        proxy_buffering off;
    }

    location /ws {
        proxy_pass http://uvicorn;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```
Traefik Configuration
```yaml
# docker-compose.yml
version: '3.8'

services:
  traefik:
    image: traefik:v2.10
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
      - "--certificatesresolvers.myresolver.acme.email=admin@example.com"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt

  web:
    build: .
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.web.rule=Host(`example.com`)"
      - "traefik.http.routers.web.entrypoints=websecure"
      - "traefik.http.routers.web.tls.certresolver=myresolver"
      - "traefik.http.services.web.loadbalancer.server.port=8000"
    restart: unless-stopped
```
Monitoring and Logging
Structured Logging
```python
import json
import logging
import time

import uvicorn
from fastapi import FastAPI, Request

# Logging configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

app = FastAPI()

class StructuredLogger:
    def __init__(self):
        self.logger = logging.getLogger("structured")

    def log_request(self, request: Request, response_time: float, status_code: int):
        log_data = {
            "timestamp": time.time(),
            "method": request.method,
            "url": str(request.url),
            "user_agent": request.headers.get("user-agent"),
            "ip": request.client.host,
            "response_time": response_time,
            "status_code": status_code,
        }
        self.logger.info(json.dumps(log_data))

structured_logger = StructuredLogger()

@app.middleware("http")
async def log_requests(request: Request, call_next):
    start_time = time.time()
    response = await call_next(request)
    process_time = time.time() - start_time
    structured_logger.log_request(request, process_time, response.status_code)
    return response

@app.get("/")
async def root():
    return {"message": "Logging works"}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000, log_level="info")
```
Prometheus Integration
```python
import time

from fastapi import FastAPI, Request
from fastapi.responses import PlainTextResponse
from prometheus_client import Counter, Histogram, generate_latest

REQUEST_COUNT = Counter('requests_total', 'Total requests', ['method', 'endpoint'])
REQUEST_LATENCY = Histogram('request_duration_seconds', 'Request latency')

app = FastAPI()

@app.middleware("http")
async def prometheus_middleware(request: Request, call_next):
    start_time = time.time()
    response = await call_next(request)
    process_time = time.time() - start_time
    REQUEST_COUNT.labels(method=request.method, endpoint=request.url.path).inc()
    REQUEST_LATENCY.observe(process_time)
    return response

@app.get("/metrics")
async def metrics():
    return PlainTextResponse(generate_latest())

@app.get("/")
async def root():
    return {"message": "Metrics available at /metrics"}
```
Application Testing
Testing with pytest
```python
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)

def test_read_root():
    response = client.get("/")
    assert response.status_code == 200
    assert response.json() == {"message": "Hello from Uvicorn", "status": "working"}

def test_health_check():
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "healthy"}

# TestClient's WebSocket interface is synchronous, so no asyncio marker
# is needed; this test assumes main.py also defines the /ws echo endpoint
def test_websocket():
    with client.websocket_connect("/ws") as websocket:
        websocket.send_text("test message")
        data = websocket.receive_text()
        assert data == "Echo: test message"
```
Load Testing
```python
import asyncio
import time

import aiohttp

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.json()

async def load_test():
    url = "http://localhost:8000/"
    tasks = []
    async with aiohttp.ClientSession() as session:
        start_time = time.time()
        for _ in range(1000):
            tasks.append(asyncio.create_task(fetch(session, url)))
        results = await asyncio.gather(*tasks)
        end_time = time.time()

    print(f"Completed {len(results)} requests in {end_time - start_time:.2f} seconds")
    print(f"RPS: {len(results) / (end_time - start_time):.2f}")

if __name__ == "__main__":
    asyncio.run(load_test())
```
Performance Optimization
Configuring Worker Processes
```python
import multiprocessing

import uvicorn
from fastapi import FastAPI

app = FastAPI()

def get_optimal_workers():
    """Calculate the optimal number of workers"""
    cpu_count = multiprocessing.cpu_count()
    return (cpu_count * 2) + 1

@app.get("/")
async def root():
    return {"message": "Optimized server"}

if __name__ == "__main__":
    workers = get_optimal_workers()
    uvicorn.run(
        "main:app",
        host="0.0.0.0",
        port=8000,
        workers=workers,
        loop="uvloop",
        http="httptools",
        access_log=False,  # Disable for performance
        timeout_keep_alive=30,
    )
```
Optimization for Specific Workloads
```python
import uvicorn
from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()

# High-load configuration
PRODUCTION_CONFIG = {
    "host": "0.0.0.0",
    "port": 8000,
    "workers": 4,
    "loop": "uvloop",
    "http": "httptools",
    "access_log": False,
    "timeout_keep_alive": 30,
    "limit_concurrency": 1000,
    "backlog": 2048,
}

# WebSocket-focused configuration
WEBSOCKET_CONFIG = {
    "host": "0.0.0.0",
    "port": 8000,
    "workers": 1,  # shared in-process state (e.g. a ConnectionManager) needs a single process
    "loop": "uvloop",
    "ws": "websockets",
    "timeout_keep_alive": 60,
}

@app.get("/")
async def root():
    return JSONResponse({"message": "High-performance server"})

if __name__ == "__main__":
    uvicorn.run("main:app", **PRODUCTION_CONFIG)
```
Uvicorn Methods and Functions
Core Methods and Classes
| Method/Class | Description | Usage Example |
|---|---|---|
| `uvicorn.run()` | Main function to start the server | `uvicorn.run(app, host="0.0.0.0", port=8000)` |
| `uvicorn.Config` | Server configuration class | `config = uvicorn.Config(app, host="0.0.0.0")` |
| `uvicorn.Server` | Server class for advanced control | `server = uvicorn.Server(config)` |
| `uvicorn.main()` | CLI entry point | Invoked via the command line |
| `uvicorn.workers.UvicornWorker` | Gunicorn worker class | `gunicorn -k uvicorn.workers.UvicornWorker main:app` |
Detailed uvicorn.run() Parameters
| Parameter | Type | Description | Default |
|---|---|---|---|
| `app` | ASGI app / str | The ASGI application (or import string) to serve | Required |
| `host` | str | IP address to bind | `"127.0.0.1"` |
| `port` | int | Port to listen on | `8000` |
| `uds` | str | Unix domain socket | `None` |
| `fd` | int | File descriptor | `None` |
| `loop` | str | Event-loop type | `"auto"` |
| `http` | str | HTTP protocol implementation | `"auto"` |
| `ws` | str | WebSocket protocol implementation | `"auto"` |
| `ws_max_size` | int | Maximum WebSocket message size (bytes) | `16777216` |
| `ws_ping_interval` | float | WebSocket ping interval (seconds) | `20.0` |
| `ws_ping_timeout` | float | WebSocket ping timeout (seconds) | `20.0` |
| `lifespan` | str | ASGI lifespan handling | `"auto"` |
| `interface` | str | ASGI interface | `"auto"` |
| `reload` | bool | Auto-reload on changes | `False` |
| `reload_dirs` | list | Directories to watch | `None` |
| `reload_includes` | list | Files to include in the watch list | `None` |
| `reload_excludes` | list | Files to exclude from the watch list | `None` |
| `reload_delay` | float | Delay before restart (seconds) | `0.25` |
| `workers` | int | Number of worker processes | `1` |
| `env_file` | str | Path to .env file | `None` |
| `log_config` | dict/str | Logging configuration | `None` |
| `log_level` | str | Logging level | `"info"` |
| `access_log` | bool | Enable access logging | `True` |
| `use_colors` | bool | Use colored output | `None` |
| `proxy_headers` | bool | Trust proxy headers | `False` |
| `server_header` | bool | Include Server header | `True` |
| `date_header` | bool | Include Date header | `True` |
| `forwarded_allow_ips` | str | Allowed IPs for X-Forwarded-For | `None` |
| `root_path` | str | Root path of the application | `""` |
| `limit_concurrency` | int | Maximum concurrent connections | `None` |
| `limit_max_requests` | int | Maximum requests per worker | `None` |
| `timeout_keep_alive` | int | Keep-alive timeout (seconds) | `5` |
| `timeout_notify` | int | Shutdown notification timeout (seconds) | `30` |
| `callback_notify` | callable | Notification callback | `None` |
| `ssl_keyfile` | str | Path to SSL key | `None` |
| `ssl_certfile` | str | Path to SSL certificate | `None` |
| `ssl_keyfile_password` | str | Password for the SSL key | `None` |
| `ssl_version` | int | SSL version | `ssl.PROTOCOL_TLS_SERVER` |
| `ssl_cert_reqs` | int | Client certificate requirements | `ssl.CERT_NONE` |
| `ssl_ca_certs` | str | Path to CA certificates | `None` |
| `ssl_ciphers` | str | Allowed ciphers | `"TLSv1"` |
| `headers` | list | Additional response headers | `None` |
| `factory` | bool | Treat app as an application factory | `False` |
| `backlog` | int | Connection queue size | `2048` |
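The `factory` parameter deserves a quick illustration: with `factory=True`, Uvicorn calls a zero-argument callable to build the application, which is useful when app construction must happen inside each worker process. A minimal sketch with a plain ASGI callable (names are illustrative):

```python
# Application-factory pattern: Uvicorn calls create_app() to obtain
# the app instead of importing a ready-made instance.
def create_app():
    async def app(scope, receive, send):
        await send({"type": "http.response.start", "status": 200, "headers": []})
        await send({"type": "http.response.body", "body": b"ok"})
    return app

# Equivalent CLI form: uvicorn main:create_app --factory
if __name__ == "__main__":
    import uvicorn
    uvicorn.run("main:create_app", factory=True, host="127.0.0.1", port=8000)
```

Without `factory=True`, Uvicorn would treat `create_app` itself as the application and fail at the first request.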
Programmatic Interfaces
```python
import asyncio

import uvicorn
from fastapi import FastAPI

app = FastAPI()

# Method 1: simple blocking launch
def run_simple():
    uvicorn.run(app)

# Method 2: Config object plus blocking Server.run()
def run_with_config():
    config = uvicorn.Config(
        app,
        host="0.0.0.0",
        port=8000,
        log_level="info",
        access_log=True,
    )
    server = uvicorn.Server(config)
    server.run()

# Method 3: async launch inside an existing event loop
async def main():
    config = uvicorn.Config(app, host="0.0.0.0", port=8000)
    server = uvicorn.Server(config)
    await server.serve()

# Method 4: manual lifecycle management
async def startup_and_shutdown():
    config = uvicorn.Config(app, host="0.0.0.0", port=8000)
    server = uvicorn.Server(config)
    await server.startup()   # bind sockets and start serving
    await server.shutdown()  # close connections and sockets

if __name__ == "__main__":
    asyncio.run(main())
```
Comparison with Other Servers
Performance
The figures below are rough, order-of-magnitude indications; real-world numbers depend heavily on hardware, payload size, and configuration.
| Server | Type | Requests/sec | Memory (MB) | WebSocket Support |
|---|---|---|---|---|
| Uvicorn | ASGI | 20000‑40000 | 15‑25 | Yes |
| Gunicorn | WSGI | 5000‑15000 | 30‑50 | No |
| Hypercorn | ASGI | 18000‑35000 | 20‑30 | Yes + HTTP/2 |
| Daphne | ASGI | 8000‑15000 | 25‑40 | Yes |
| uWSGI | WSGI | 10000‑20000 | 20‑35 | Limited |
Feature Matrix
| Feature | Uvicorn | Gunicorn | Hypercorn | Daphne |
|---|---|---|---|---|
| HTTP/1.1 | ✓ | ✓ | ✓ | ✓ |
| HTTP/2 | Via proxy | No | ✓ | ✓ |
| WebSocket | ✓ | No | ✓ | ✓ |
| Auto‑reload | ✓ | No | ✓ | ✓ |
| Multiple workers | ✓ | ✓ | ✓ | ✓ |
| SSL/TLS | ✓ | ✓ | ✓ | ✓ |
| Performance | High | Medium | High | Medium |
Security
SSL/TLS Configuration
```python
import ssl

import uvicorn
from fastapi import FastAPI

app = FastAPI()

# SSL configuration
SSL_CONFIG = {
    "ssl_keyfile": "/path/to/private.key",
    "ssl_certfile": "/path/to/certificate.crt",
    "ssl_ca_certs": "/path/to/ca-bundle.crt",
    "ssl_version": ssl.PROTOCOL_TLS_SERVER,
    "ssl_cert_reqs": ssl.CERT_REQUIRED,
    "ssl_ciphers": "HIGH:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!SRP:!CAMELLIA",
}

@app.get("/")
async def root():
    return {"message": "Secure connection"}

if __name__ == "__main__":
    # Binding to port 443 typically requires elevated privileges
    uvicorn.run(app, host="0.0.0.0", port=443, **SSL_CONFIG)
```
Protection Against Attacks
```python
from fastapi import FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.middleware.trustedhost import TrustedHostMiddleware
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

app = FastAPI()

# Rate limiting
limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

# CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://example.com"],
    allow_credentials=True,
    allow_methods=["GET", "POST"],
    allow_headers=["*"],
)

# Trusted hosts
app.add_middleware(
    TrustedHostMiddleware,
    allowed_hosts=["example.com", "*.example.com"]
)

@app.get("/")
@limiter.limit("10/minute")
async def root(request: Request):
    return {"message": "Protected endpoint"}
```
Frequently Asked Questions
What is Uvicorn and why do I need it?
Uvicorn is a high‑performance ASGI server for Python designed to run asynchronous web applications. It is required to serve modern frameworks such as FastAPI, Starlette, and Django with async support. Uvicorn delivers substantially higher throughput compared to traditional WSGI servers.
How does Uvicorn differ from Gunicorn?
The main differences lie in architecture and capabilities. Uvicorn supports asynchronous programming, WebSocket, and offers higher performance thanks to its event‑driven design. Gunicorn is a synchronous WSGI server but can use Uvicorn as a worker to gain ASGI support.
Can I use Uvicorn with Django?
Yes. Starting with Django 3.0, the framework includes ASGI support. You can run Django via Uvicorn by pointing to the project’s asgi.py entry point, enabling async views and WebSocket handling.
How should I configure Uvicorn for production?
For production you should: run multiple workers, place Uvicorn behind a reverse proxy (e.g., Nginx), disable debug mode, enable SSL/TLS, manage processes with systemd or Supervisor, and set up monitoring and structured logging.
Does Uvicorn support HTTP/2?
Uvicorn does not natively support HTTP/2, but it can operate behind a reverse proxy (such as Nginx) that terminates HTTP/2. For native HTTP/2 support you may consider Hypercorn as an alternative.
How does auto‑reload work in Uvicorn?
Auto‑reload is activated with the --reload flag. Uvicorn watches Python source files for changes and restarts the process automatically. This feature is intended for development only and should not be used in production.
How many workers should I use for optimal performance?
A common rule of thumb is (CPU cores × 2) + 1. The ideal count depends on the workload: I/O‑bound applications often benefit from more workers, while CPU‑bound workloads may require fewer.
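The rule of thumb above is easy to express as code; treat the result as a starting point for load testing, not a guarantee:

```python
import multiprocessing

# (CPU cores × 2) + 1, the common starting heuristic for worker count
def recommended_workers(cpu_cores: int) -> int:
    return cpu_cores * 2 + 1

print(recommended_workers(multiprocessing.cpu_count()))
```

On a 4-core machine this suggests 9 workers; an I/O-bound service might go higher, a CPU-bound one lower.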
How do I serve static files with Uvicorn?
Uvicorn itself is not meant for serving static assets. Use a CDN or a reverse proxy (e.g., Nginx) for production static files. FastAPI’s StaticFiles can be used for development convenience.
Conclusion
Uvicorn is a modern solution for running high‑performance web applications on Python. Its asynchronous architecture, support for contemporary protocols, and seamless integration with popular frameworks make it an ideal choice for building modern web services.
The server delivers excellent performance, straightforward configuration, and a rich set of options, making it suitable for both development and production environments. WebSocket support, integration with monitoring tools, and the ability to scale via multiple workers enable the creation of reliable and fast web applications.
Choosing Uvicorn as your ASGI server ensures long‑term compatibility with evolving web standards and guarantees high throughput for your Python applications.