Build agents once.
Choose the runtime later.
Cognitia is a Python framework for teams that want production-ready agents without hard-coding the whole stack to one SDK, one provider, or one orchestration style. Start with a simple facade. Add tools, memory, sessions, workflows, and runtime-specific power only when you need them.
Designed for the moment when “just call the model” stops being enough.
Most agent projects become difficult when they need runtime portability, tool policies, persistent state, multi-turn sessions, or workflow control. Cognitia gives you one stable application surface and lets the infrastructure evolve underneath it.
Stable facade, swappable engine
Start with one `Agent` API, then switch runtimes as requirements change: thin runtime for speed, Claude SDK for Claude-native workflows, CLI for existing agent tools, DeepAgents for graph-heavy paths.
Memory and sessions built in
Use in-memory storage for prototypes, SQLite for local products, or PostgreSQL in production. Keep session state, facts, summaries, and runtime history behind the same protocol surface.
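The promotion path from prototype storage to a database can be sketched as one protocol with two backends. The names `SessionStore`, `InMemoryStore`, and `SQLiteStore` below are illustrative, not Cognitia's actual classes; the point is that application code depends only on the protocol surface.

```python
import sqlite3
from typing import Protocol

# Hypothetical sketch of "the same protocol surface": one storage
# protocol, two interchangeable backends.
class SessionStore(Protocol):
    def save(self, session_id: str, fact: str) -> None: ...
    def load(self, session_id: str) -> list[str]: ...

class InMemoryStore:
    def __init__(self) -> None:
        self._facts: dict[str, list[str]] = {}

    def save(self, session_id: str, fact: str) -> None:
        self._facts.setdefault(session_id, []).append(fact)

    def load(self, session_id: str) -> list[str]:
        return list(self._facts.get(session_id, []))

class SQLiteStore:
    def __init__(self, path: str = ":memory:") -> None:
        self._db = sqlite3.connect(path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS facts (session_id TEXT, fact TEXT)"
        )

    def save(self, session_id: str, fact: str) -> None:
        self._db.execute("INSERT INTO facts VALUES (?, ?)", (session_id, fact))
        self._db.commit()

    def load(self, session_id: str) -> list[str]:
        rows = self._db.execute(
            "SELECT fact FROM facts WHERE session_id = ?", (session_id,)
        ).fetchall()
        return [fact for (fact,) in rows]

# Swapping backends is a construction-time decision; callers never change.
for store in (InMemoryStore(), SQLiteStore()):
    store.save("s1", "team owns checkout")
    print(store.load("s1"))  # ['team owns checkout'] for both backends
```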
Guardrails without ceremony
Add default-deny tool policy, security middleware, cost budgets, sandboxed execution, and explicit runtime contracts without inventing a second framework around your app.
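Default-deny tool policy means a tool runs only if it was explicitly allowed. A minimal sketch of those semantics, assuming nothing about Cognitia's actual API (`ToolPolicy` here is a made-up name):

```python
from dataclasses import dataclass, field

# Illustrative only: ToolPolicy is a hypothetical name showing
# default-deny semantics, not Cognitia's policy API.
@dataclass(frozen=True)
class ToolPolicy:
    allowed: frozenset[str] = field(default_factory=frozenset)

    def check(self, tool_name: str) -> bool:
        # Default deny: anything not explicitly allowed is rejected.
        return tool_name in self.allowed

policy = ToolPolicy(allowed=frozenset({"get_ticket_status"}))
print(policy.check("get_ticket_status"))  # True
print(policy.check("delete_database"))    # False: never allowed, so denied
```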
Structured, observable, testable
Stream typed events, return structured output, hook into observability, and test portable behavior across runtimes instead of coupling tests to one provider SDK.
Choose the execution model that matches the job.
Cognitia is not one runtime pretending to fit every workload. It gives you a consistent application layer and lets you pick the runtime with the right trade-offs.
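The layering described above can be sketched as a facade over a runtime protocol. This is not Cognitia's internal structure, just the general shape: the application talks to one stable surface, and the runtime behind it is swappable.

```python
from typing import Protocol

# Hypothetical names (Runtime, ThinRuntime, CLIRuntime, AgentFacade)
# illustrating the stable-facade / swappable-engine layering.
class Runtime(Protocol):
    def run(self, prompt: str) -> str: ...

class ThinRuntime:
    def run(self, prompt: str) -> str:
        return f"[thin] {prompt}"

class CLIRuntime:
    def run(self, prompt: str) -> str:
        return f"[cli] {prompt}"

class AgentFacade:
    def __init__(self, runtime: Runtime) -> None:
        self._runtime = runtime

    def query(self, prompt: str) -> str:
        # Application code never changes when the runtime does.
        return self._runtime.run(prompt)

for runtime in (ThinRuntime(), CLIRuntime()):
    print(AgentFacade(runtime).query("hello"))
```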
Thin runtime
Fastest path to a working agent with tools, structured output, streaming, and provider portability.
Claude SDK
Use the Claude agent surface while keeping Cognitia’s facade, sessions, middleware, and docs model.
CLI runtime
Expose an NDJSON-emitting CLI as a Cognitia runtime and keep the same `query`, `stream`, and session surface.
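NDJSON means one JSON event per line, so a CLI runtime's stream can be consumed line by line. A sketch of that parsing step; the event shapes shown are invented for illustration, not Cognitia's actual event schema.

```python
import json
from typing import Iterator

def parse_ndjson(lines: Iterator[str]) -> Iterator[dict]:
    """Yield one JSON event per non-blank line of an NDJSON stream."""
    for line in lines:
        line = line.strip()
        if line:  # tolerate blank lines between events
            yield json.loads(line)

# Hypothetical event stream a CLI tool might emit on stdout.
raw = [
    '{"type": "text", "text": "Deploy "}',
    '{"type": "text", "text": "looks good."}',
    '{"type": "done"}',
]
events = list(parse_ndjson(iter(raw)))
text = "".join(e["text"] for e in events if e["type"] == "text")
print(text)  # Deploy looks good.
```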
DeepAgents
Bring in deeper workflow and graph semantics while preserving a simpler application entrypoint for the rest of your codebase.
Add the pieces you need, not a monolith you have to work around.
Cognitia stays modular: bring tools, memory, orchestration, structured output, web access, and production safety in gradually instead of adopting a giant runtime-shaped abstraction on day one.
Tooling surface
Register Python functions with `@tool`, apply policy and middleware, then keep the tool contract stable across runtimes.
Open tools and skills
Structured output
Return validated Pydantic or JSON Schema output without abandoning your runtime portability.
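The structured-output contract is: parse the model's raw JSON reply into a typed object and validate it before the application sees it. A stdlib-only sketch of that contract using dataclasses (in Cognitia you would hand the runtime a Pydantic model or JSON Schema instead; `ReleaseSummary` is an invented example type).

```python
import json
from dataclasses import dataclass

# Invented example type: stands in for the Pydantic model or JSON Schema
# you would actually register for structured output.
@dataclass
class ReleaseSummary:
    version: str
    bullet_points: list[str]

    def __post_init__(self) -> None:
        if not self.bullet_points:
            raise ValueError("summary must contain at least one bullet")

# The raw JSON reply is parsed and validated before the app touches it.
raw = '{"version": "1.4.2", "bullet_points": ["Fixed checkout latency"]}'
summary = ReleaseSummary(**json.loads(raw))
print(summary.version)  # 1.4.2
```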
Open structured output
Memory providers
Promote the same app from in-memory prototypes to SQLite or PostgreSQL without replacing your agent surface.
Open memory providers
Sessions and history
Keep multi-turn context, runtime history, and rehydration paths explicit instead of hiding state in framework internals.
Open sessions
Workflows and multi-agent
Coordinate teams, queues, and workflow graphs without tying application code to one orchestration stack.
Open multi-agent docs
Observability and hooks
Track event streams, middleware, and lifecycle hooks so the agent system stays explainable in production.
Open observability
Useful when your agent has to survive real product constraints.
Cognitia is strongest when you need runtime portability and long-lived agent behavior, not just a single provider call hidden inside one helper function.
Internal copilots
Build assistants for operations, support, or internal tooling with tools, memory, and guardrails already wired in.
Research and analysis
Combine web access, memory, and structured output for reports, briefs, research copilots, and analysis workflows.
Provider-agnostic backends
Keep product APIs stable while swapping models or runtimes for cost, latency, compliance, or operational reasons.
Multi-agent systems
Coordinate teams, queues, and workflow graphs without forcing the whole application into a graph-native mental model.
Start small, then scale the system around it.
The fastest path is still simple: create an agent, ask a question, then add tools, sessions, or a different runtime when the product actually needs them.
Ask one useful question
from cognitia import Agent, AgentConfig
agent = Agent(
    AgentConfig(
        system_prompt="You summarize release notes for engineers.",
        runtime="thin",
    )
)

# run inside an async function (e.g. via asyncio.run)
result = await agent.query("Summarize the last deployment in 5 bullets.")
print(result.text)

Add one tool
from cognitia import tool
@tool
async def get_ticket_status(ticket_id: str) -> str:
    return f"{ticket_id}: in review"

agent = Agent(
    AgentConfig(
        system_prompt="You are a release assistant.",
        runtime="thin",
        tools=(get_ticket_status,),
    )
)

Keep the conversation alive
async with agent.conversation() as conv:
    await conv.say("My team owns checkout.")
    result = await conv.say("Which team owns checkout?")
    print(result.text)

Use the docs by intent, not by file dump.
If you are evaluating the library, these are the shortest paths to understanding what it does, how to wire it, and where it fits in a real product stack.
Getting Started
Install, set credentials, create your first agent, and understand the default happy path.
Read the quick start
Runtimes
See which runtime to use first, what changes across them, and where each one shines.
Read the runtime guide
Use Cases
Map your product idea to the right runtime, storage, and capability set before you overbuild.
Open use cases
Cookbook
Copy-paste recipes for tools, structured output, streaming, and common application patterns.
Open recipes
Architecture
Understand the protocol-first layering and how Cognitia keeps domain and infrastructure separated.
Open architecture
Credentials & Providers
Wire env vars and provider settings correctly for Thin, Claude SDK, CLI, and DeepAgents paths.
Open provider setup
Good fit if you want agent infrastructure without framework lock-in.
Cognitia is strongest when you care about runtime portability, session state, tools, storage, and gradual adoption. Start with the default facade now, then layer in the rest only when your product actually demands it.