# Manage Prompts
All LLM prompts are stored in the database and versioned — never hardcoded in application code.
## How Prompt Versioning Works
Each agent has a row in the `agent_prompts` table with:

- `agent_name`: identifies the agent (e.g., `factuality_checker`)
- `version`: integer version number
- `model`: the Claude model to use (e.g., `claude-sonnet-4-6`)
- `system_prompt`: the full system prompt text
- `active`: boolean flag; only one version per agent is active at a time
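The columns above can be sketched as DDL. This is an illustration only; the exact types and constraints in the real schema may differ:

```python
import sqlite3

# Sketch of the agent_prompts table (types and constraints are assumptions).
DDL = """
CREATE TABLE agent_prompts (
    agent_name    TEXT    NOT NULL,            -- e.g. 'factuality_checker'
    version       INTEGER NOT NULL,            -- integer version number
    model         TEXT    NOT NULL,            -- e.g. 'claude-sonnet-4-6'
    system_prompt TEXT    NOT NULL,            -- full system prompt text
    active        BOOLEAN NOT NULL DEFAULT 0,  -- only one active version per agent
    PRIMARY KEY (agent_name, version)
)
"""

conn = sqlite3.connect(":memory:")
conn.execute(DDL)
```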
When an agent runs, `call_agent()` loads the active prompt for that agent from the database.
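The lookup `call_agent()` performs can be sketched as below; the function name and return shape here are illustrative, not the application's actual code:

```python
import sqlite3

def load_active_prompt(conn, agent_name):
    """Fetch the model and system prompt for the agent's single active version."""
    row = conn.execute(
        "SELECT model, system_prompt FROM agent_prompts "
        "WHERE agent_name = ? AND active = 1",
        (agent_name,),
    ).fetchone()
    if row is None:
        raise LookupError(f"no active prompt for agent {agent_name!r}")
    return {"model": row[0], "system_prompt": row[1]}
```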
## Updating a Prompt
Prompts are updated via Alembic data migrations. This ensures:

- Every prompt change is tracked in version control
- Changes can be rolled back
- Staging and production stay in sync via `alembic upgrade head`
To update a prompt:

1. Create a new migration (see Database Migrations)
2. In `upgrade()`: deactivate the old version, insert the new one
3. In `downgrade()`: delete the new version, reactivate the old one
4. Apply: `uv run alembic upgrade head`
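The upgrade/downgrade logic can be sketched as plain SQL. In a real Alembic migration each statement would run via `op.execute()` inside the generated migration file; the agent name, version number, and prompt text below are illustrative:

```python
AGENT = "factuality_checker"  # hypothetical agent being updated
NEW_VERSION = 2               # hypothetical new version number

def upgrade(conn):
    # Deactivate the currently active version, then insert the new one as active.
    conn.execute(
        "UPDATE agent_prompts SET active = 0 WHERE agent_name = ? AND active = 1",
        (AGENT,),
    )
    conn.execute(
        "INSERT INTO agent_prompts (agent_name, version, model, system_prompt, active) "
        "VALUES (?, ?, ?, ?, 1)",
        (AGENT, NEW_VERSION, "claude-sonnet-4-6", "Revised system prompt text."),
    )

def downgrade(conn):
    # Delete the new version and reactivate the previous one.
    conn.execute(
        "DELETE FROM agent_prompts WHERE agent_name = ? AND version = ?",
        (AGENT, NEW_VERSION),
    )
    conn.execute(
        "UPDATE agent_prompts SET active = 1 WHERE agent_name = ? AND version = ?",
        (AGENT, NEW_VERSION - 1),
    )
```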
## Viewing Prompts in the Dashboard
The Prompts page in the Streamlit dashboard lets you:
- View all agent prompts and their versions
- See which version is currently active
- Compare prompt text across versions
Navigate to the Prompts page from the sidebar after logging in.
## Prompt Conventions
- Use clear, structured system prompts with numbered steps
- Include output format specifications (JSON schema)
- For medical content, include safety language and hedging requirements
- Reference the agent's Pydantic output schema in the prompt so the model knows the expected structure
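As an illustration of these conventions, a system prompt can embed the JSON schema generated from the agent's output model. The schema and prompt wording below are hypothetical, not the project's real prompt; in the real code the schema would come from the Pydantic model (e.g., `model_json_schema()`):

```python
import json

# Hypothetical output schema for a factuality-check result.
OUTPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "verdict": {"type": "string", "enum": ["supported", "unsupported", "uncertain"]},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
        "explanation": {"type": "string"},
    },
    "required": ["verdict", "confidence", "explanation"],
}

SYSTEM_PROMPT = f"""You are a factuality checker for medical content.

1. Read the claim and the cited source material.
2. Decide whether the claim is supported by the source.
3. Hedge appropriately: never state medical conclusions as certainties,
   and flag anything that requires professional medical judgment.

Respond with JSON matching this schema exactly:
{json.dumps(OUTPUT_SCHEMA, indent=2)}
"""
```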