# Logging
The backend uses Pino — a fast, JSON-structured logger. Every log line automatically includes the current request ID, HTTP method, and path, making it easy to trace a single request across all output.
## Setup

File: `src/infrastructure/logging/logger.ts`
Pino is configured with two simultaneous output streams:
| Stream | Development | Production |
|---|---|---|
| stdout | pino-pretty (colorized, human-readable) | Raw JSON (one line per entry) |
| in-memory buffer | Always active | Always active |
The minimum log level is controlled by the `LOG_LEVEL` environment variable (default: `info`).
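A configuration along these lines would produce the two streams described above. This is a hedged sketch, not the actual contents of `logger.ts`: the buffer wiring is a stand-in, and the exact stream setup may differ.

```ts
// Hypothetical sketch of the dual-stream setup -- the real logger.ts may differ.
import pino from 'pino';
import pretty from 'pino-pretty';
import { Writable } from 'node:stream';

// Stand-in for the in-memory circular buffer described later in this document.
const bufferStream = new Writable({
  write(chunk, _enc, cb) {
    // push the serialized log line into the ring buffer here
    cb();
  },
});

const streams = [
  // stdout: pretty-printed in development, raw JSON in production
  {
    stream:
      process.env.NODE_ENV === 'production'
        ? process.stdout
        : pretty({ colorize: true }),
  },
  // the in-memory buffer is always active
  { stream: bufferStream },
];

const logger = pino(
  { level: process.env.LOG_LEVEL ?? 'info' },
  pino.multistream(streams)
);

export default logger;
```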
## Request context

File: `src/infrastructure/http/requestContext.ts`
Each incoming request gets a short random `reqId` (8 hex characters) stored in Node's `AsyncLocalStorage`. The logger's `mixin()` function reads from this storage on every log call and automatically injects:
| Field | Example | Source |
|---|---|---|
| `reqId` | `"a3f9c1b2"` | AsyncLocalStorage per request |
| `method` | `"POST"` | AsyncLocalStorage per request |
| `path` | `"/api/scenes/42"` | AsyncLocalStorage per request |
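The context plumbing behind this table can be sketched as follows. This is a hypothetical shape for `requestContext.ts`; any names beyond those already mentioned in this document (such as `runWithRequestContext`) are assumptions for illustration.

```ts
import { AsyncLocalStorage } from 'node:async_hooks';
import { randomBytes } from 'node:crypto';

interface RequestMeta {
  reqId: string;
  method: string;
  path: string;
}

// One storage instance shared by the HTTP layer and the logger mixin.
export const requestStore = new AsyncLocalStorage<RequestMeta>();

export function getCurrentRequestId(): string | undefined {
  return requestStore.getStore()?.reqId;
}

export function getCurrentRequestMeta(): RequestMeta | undefined {
  return requestStore.getStore();
}

// Hypothetical entry point, called once per request by the HTTP layer
// (e.g. from a middleware) so everything inside fn() sees the context.
export function runWithRequestContext<T>(method: string, path: string, fn: () => T): T {
  const reqId = randomBytes(4).toString('hex'); // 4 random bytes => 8 hex chars
  return requestStore.run({ reqId, method, path }, fn);
}

// The logger's mixin() would then be wired roughly as:
//   pino({ mixin: () => getCurrentRequestMeta() ?? {} })
```

Because `AsyncLocalStorage` propagates across `await` boundaries, any code running inside the request's call chain sees the same context without explicit passing.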
No manual passing of context is needed — import the logger and call it directly:
```ts
import logger from '../../infrastructure/logging/logger';

logger.info({ sceneId }, 'Scene updated');
logger.warn({ userId }, 'Permission denied');
logger.error({ err }, 'Unexpected failure');
```
## Log levels
| Level name | Numeric | When to use |
|---|---|---|
| `trace` | 10 | Very fine-grained, usually disabled |
| `debug` | 20 | Development diagnostics |
| `info` | 30 | Normal operational events |
| `warn` | 40 | Unexpected but recoverable situations |
| `error` | 50 | Failures that need attention |
| `fatal` | 60 | Unrecoverable errors |
## In-memory log buffer

File: `src/infrastructure/logging/logBuffer.ts`
All log lines are also written to an in-memory circular buffer that holds the last 500 entries. This buffer is exposed to admins via the API without requiring SSH or file access to the server.
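A circular buffer of this kind can be sketched in a few lines. This is illustrative only; `logBuffer.ts` itself may be implemented differently (entry shape and function names here are assumptions).

```ts
// Illustrative ring buffer holding the last 500 entries; the real
// logBuffer.ts may differ in shape and naming.
const MAX_ENTRIES = 500;

interface LogEntry {
  level: number;           // numeric Pino level, e.g. 30 = info
  time: number;
  msg: string;
  [key: string]: unknown;  // reqId, method, path, and any bound fields
}

const entries: LogEntry[] = [];

export function push(entry: LogEntry): void {
  entries.push(entry);
  if (entries.length > MAX_ENTRIES) {
    entries.shift(); // evict the oldest entry once the cap is reached
  }
}

export function read(minLevel = 0, limit = MAX_ENTRIES): LogEntry[] {
  return entries.filter((e) => e.level >= minLevel).slice(-limit);
}

export function clear(): void {
  entries.length = 0;
}
```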
## Admin API
| Method | Endpoint | Permission | Description |
|---|---|---|---|
| GET | `/api/logs?level=<num>&limit=<n>` | `admin:logs:view` | Return buffered log entries, optionally filtered by minimum level |
| DELETE | `/api/logs` | `admin:logs:clear` | Clear the buffer |
| GET | `/api/logs/llm-dump?reqId=<id>` | `admin:logs:view` | Download the LLM request/response dump for a specific request |
| GET | `/api/logs/elevenlabs-dump?reqId=<id>` | `admin:logs:view` | Download the ElevenLabs dump for a specific request |
The `level` query parameter accepts numeric values (e.g. `?level=40` for warnings and above).
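Server-side, handling these parameters amounts to parsing them as numbers and filtering the buffer. A sketch of that step, assuming a framework-agnostic helper (not the actual route code):

```ts
interface LogEntry {
  level: number;
  [key: string]: unknown;
}

// Hypothetical helper: apply ?level=<num>&limit=<n> to the buffered entries.
export function filterLogs(
  buffer: LogEntry[],
  query: { level?: string; limit?: string }
): LogEntry[] {
  const minLevel = Number(query.level ?? 0);           // e.g. 40 => warn and above
  const limit = Number(query.limit ?? buffer.length);  // keep the most recent n
  return buffer.filter((e) => e.level >= minLevel).slice(-limit);
}
```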
## Storing JSON debug files
When you need to persist a full request/response payload for debugging an external service call, write it to disk via FileStorage using the same reqId and request context already on the logger. The file is then retrievable through the admin API using that reqId.
The pattern: write the JSON, and swallow any errors so that a dump failure never crashes the main flow:
```ts
import { FileStorage } from '../storage/FileStorage';
import { getCurrentRequestId, getCurrentRequestMeta } from '../http/requestContext';

const storage = new FileStorage();

// Placeholder: replace with the actual payload shape of your service.
type MyPayload = Record<string, unknown>;

async function saveDebugDump(subDir: string, data: Record<string, unknown>): Promise<void> {
  const now = new Date();
  const date = now.toISOString().slice(0, 10);        // YYYY-MM-DD
  const ts = now.toISOString().replace(/[:.]/g, '-'); // filesystem-safe timestamp
  const reqId = getCurrentRequestId() ?? 'no-req';
  const meta = getCurrentRequestMeta();

  const content = JSON.stringify(
    { timestamp: now.toISOString(), reqId, endpoint: meta?.path, method: meta?.method, ...data },
    null,
    2
  );

  await storage.save(
    { buffer: Buffer.from(content, 'utf-8'), originalname: `${ts}__${reqId}.json` },
    '/api/logs/my-dump',
    `logs/${subDir}/${date}/${meta?.path.split('/').filter(Boolean).pop() ?? 'unknown'}`
  );
}

export async function saveMyServiceDump(payload: MyPayload): Promise<void> {
  try {
    await saveDebugDump('my-service-dumps', { ...payload });
  } catch {
    // Never let dump failures crash the main flow.
  }
}
```
Files are saved under:
`uploads/logs/{subDir}/{YYYY-MM-DD}/{endpoint}/{timestamp}__{reqId}.json`
Examples already in the codebase:

- `src/infrastructure/llm/LLMDump.ts` saves the model name, prompt, response, latency, and token usage for every LLM call into `uploads/logs/llm-dumps/`
- `src/infrastructure/elevenlabs/ElevenLabsDump.ts` saves the operation, prompt, response, and latency for every ElevenLabs call into `uploads/logs/elevenlabs-dumps/`
Both are fetched via the admin API using the `reqId` from the log output:

```
GET /api/logs/llm-dump?reqId=a3f9c1b2
GET /api/logs/elevenlabs-dump?reqId=a3f9c1b2
```