One API · Four runtimes · Real agent infrastructure

Build agents once.
Choose the runtime later.

Cognitia is a Python framework for teams that want production-ready agents without hard-coding the whole stack to one SDK, one provider, or one orchestration style. Start with a simple facade. Add tools, memory, sessions, workflows, and runtime-specific power only when you need them.

Thin runtime · Claude SDK · CLI runtime · DeepAgents · SQLite / PostgreSQL memory
Why Cognitia

Designed for the moment when “just call the model” stops being enough.

Most agent projects become difficult when they need runtime portability, tool policies, persistent state, multi-turn sessions, or workflow control. Cognitia gives you one stable application surface and lets the infrastructure evolve underneath it.

Stable facade, swappable engine

Start with one `Agent` API, then switch runtimes as requirements change: thin runtime for speed, Claude SDK for Claude-native workflows, CLI for existing agent tools, DeepAgents for graph-heavy paths.
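The facade/runtime split can be sketched in a few lines. The class and method names below are illustrative stand-ins, not Cognitia's actual classes; the point is only that application code depends on a small protocol, so the engine underneath is a construction-time choice.

```python
from typing import Protocol


class Runtime(Protocol):
    """Minimal runtime contract the facade depends on (illustrative)."""

    def run(self, prompt: str) -> str: ...


class ThinRuntime:
    def run(self, prompt: str) -> str:
        return f"[thin] {prompt}"


class CliRuntime:
    def run(self, prompt: str) -> str:
        return f"[cli] {prompt}"


class PortableAgent:
    """Application-facing facade; the runtime is a swappable detail."""

    def __init__(self, runtime: Runtime) -> None:
        self._runtime = runtime

    def query(self, prompt: str) -> str:
        return self._runtime.run(prompt)


# Same application code, different engine underneath:
print(PortableAgent(ThinRuntime()).query("hello"))  # [thin] hello
print(PortableAgent(CliRuntime()).query("hello"))   # [cli] hello
```

Because `PortableAgent` only knows the protocol, tests written against the facade keep passing when the runtime changes.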

Memory and sessions built in

Use in-memory storage for prototypes, SQLite for local products, or PostgreSQL in production. Keep session state, facts, summaries, and runtime history behind the same protocol surface.

Guardrails without ceremony

Add default-deny tool policy, security middleware, cost budgets, sandboxed execution, and explicit runtime contracts without inventing a second framework around your app.
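Default-deny means a tool call is rejected unless its name is explicitly allowlisted. A minimal sketch of that check, with hypothetical names rather than Cognitia's real policy API:

```python
class ToolPolicyError(Exception):
    """Raised when a tool call is not covered by the allowlist."""


class DefaultDenyPolicy:
    def __init__(self, allowed: set[str]) -> None:
        self._allowed = allowed

    def check(self, tool_name: str) -> None:
        # Anything not explicitly allowed is denied by default.
        if tool_name not in self._allowed:
            raise ToolPolicyError(f"tool {tool_name!r} is not allowlisted")


policy = DefaultDenyPolicy(allowed={"get_ticket_status"})
policy.check("get_ticket_status")  # allowed: returns silently

try:
    policy.check("delete_database")
except ToolPolicyError as exc:
    print(exc)  # denied by default
```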

Structured, observable, testable

Stream typed events, return structured output, hook into observability, and test portable behavior across runtimes instead of coupling tests to one provider SDK.
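Typed events mean the application consumes a small, named set of event classes instead of raw provider chunks. The event names below are assumptions for illustration, not Cognitia's actual event types:

```python
from dataclasses import dataclass


@dataclass
class TextDelta:
    text: str


@dataclass
class ToolCall:
    name: str
    args: dict


@dataclass
class Done:
    total_tokens: int


def fake_stream():
    """Stand-in for a runtime's event stream."""
    yield TextDelta("Deploy ")
    yield TextDelta("looks green.")
    yield ToolCall(name="get_ticket_status", args={"ticket_id": "REL-42"})
    yield Done(total_tokens=17)


# Application code filters by type instead of parsing provider payloads:
text = "".join(e.text for e in fake_stream() if isinstance(e, TextDelta))
print(text)  # Deploy looks green.
```

A test can assert on event types and contents without ever touching a provider SDK, which is the portability claim in practice.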

Runtime Matrix

Choose the execution model that matches the job.

Cognitia is not one runtime pretending to fit every workload. It gives you a consistent application layer and lets you pick the runtime with the right trade-offs.

Thin runtime (default)

Fastest path to a working agent with tools, structured output, streaming, and provider portability.

Best for: New apps, internal assistants, product backends
Providers: Anthropic, OpenAI-compatible, Gemini, DeepSeek, OpenRouter

Claude SDK (Claude-native)

Use the Claude agent surface while keeping Cognitia’s facade, sessions, middleware, and docs model.

Best for: Claude-centric workflows and Claude tool ecosystems
Strength: Native Claude execution semantics with a cleaner top-level app API

CLI runtime (wrap existing agents)

Expose an NDJSON-emitting CLI as a Cognitia runtime and keep the same `query`, `stream`, and session surface.

Best for: Teams that already trust a CLI agent and want a cleaner integration layer
Strength: Preserve external toolchains without rewriting your whole app
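NDJSON means the wrapped CLI prints one JSON object per line; the adapter's job is just to turn that stream into events. A self-contained sketch with assumed field names (the real CLI's output schema will differ):

```python
import json

# Example output from a hypothetical NDJSON-emitting CLI agent:
raw = (
    '{"type": "text", "text": "Checkout is owned by team A."}\n'
    '{"type": "done", "exit_code": 0}\n'
)


def parse_ndjson(stream: str):
    """Yield one parsed event per non-empty line."""
    for line in stream.splitlines():
        line = line.strip()
        if line:  # tolerate blank lines between records
            yield json.loads(line)


events = list(parse_ndjson(raw))
print(events[0]["text"])  # Checkout is owned by team A.
```

In a real adapter the stream would come from the subprocess's stdout line by line; the parsing shape stays the same.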

DeepAgents (graph-heavy)

Bring in deeper workflow and graph semantics while preserving a simpler application entrypoint for the rest of your codebase.

Best for: Research systems, agent teams, graph-native orchestration
Strength: Portable facade first, native graph power where you actually need it

Capabilities

Add the pieces you need, not a monolith you have to work around.

Cognitia stays modular: bring tools, memory, orchestration, structured output, web access, and production safety in gradually instead of adopting a giant runtime-shaped abstraction on day one.

Tooling surface

Register Python functions with `@tool`, apply policy and middleware, then keep the tool contract stable across runtimes.

Open tools and skills
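What a `@tool` decorator typically captures can be shown with stdlib introspection alone. This is a simplified stand-in, not Cognitia's decorator, which the docs say also wires in policy and middleware:

```python
import inspect

# Illustrative registry: the decorator records each function plus metadata
# a runtime could use to describe the tool to a model.
TOOL_REGISTRY: dict[str, dict] = {}


def tool(fn):
    sig = inspect.signature(fn)
    TOOL_REGISTRY[fn.__name__] = {
        "fn": fn,
        "params": list(sig.parameters),
        "doc": inspect.getdoc(fn) or "",
    }
    return fn  # the function itself stays directly callable


@tool
def get_ticket_status(ticket_id: str) -> str:
    """Look up the review state of a ticket."""
    return f"{ticket_id}: in review"


entry = TOOL_REGISTRY["get_ticket_status"]
print(entry["params"])  # ['ticket_id']
```

Keeping the registered contract (name, parameters, docstring) separate from any one runtime is what lets the same tool travel across runtimes.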

Structured output

Return validated Pydantic or JSON Schema output without abandoning your runtime portability.

Open structured output
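The docs name Pydantic and JSON Schema; this stdlib stand-in shows the same flow with a plain dataclass: parse the model's text as JSON, validate the types, and only then hand a typed object to the rest of the app.

```python
import json
from dataclasses import dataclass


@dataclass
class ReleaseSummary:
    version: str
    risky: bool


def parse_structured(raw: str) -> ReleaseSummary:
    """Parse model output into a validated ReleaseSummary (illustrative)."""
    data = json.loads(raw)
    summary = ReleaseSummary(version=data["version"], risky=data["risky"])
    if not isinstance(summary.version, str) or not isinstance(summary.risky, bool):
        raise TypeError("model output did not match the expected schema")
    return summary


print(parse_structured('{"version": "1.4.2", "risky": false}'))
```

With Pydantic the dataclass becomes a `BaseModel` and the type check comes for free, but the application-facing contract (typed object in, typed object out) is the same.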

Memory providers

Promote the same app from in-memory prototypes to SQLite or PostgreSQL without replacing your agent surface.

Open memory providers
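The "same surface, different store" idea can be sketched with `sqlite3` from the standard library. Method names here are assumptions, not Cognitia's memory protocol; the point is that an in-memory, SQLite, or PostgreSQL backend can sit behind one small interface.

```python
import sqlite3


class SqliteMemory:
    """Illustrative memory provider backed by SQLite."""

    def __init__(self, path: str = ":memory:") -> None:
        self._db = sqlite3.connect(path)
        self._db.execute(
            "CREATE TABLE IF NOT EXISTS facts (session TEXT, fact TEXT)"
        )

    def remember(self, session: str, fact: str) -> None:
        self._db.execute("INSERT INTO facts VALUES (?, ?)", (session, fact))
        self._db.commit()

    def recall(self, session: str) -> list[str]:
        rows = self._db.execute(
            "SELECT fact FROM facts WHERE session = ?", (session,)
        )
        return [fact for (fact,) in rows]


mem = SqliteMemory()
mem.remember("s1", "team A owns checkout")
print(mem.recall("s1"))  # ['team A owns checkout']
```

Swapping the constructor for a PostgreSQL-backed class with the same two methods would leave every caller untouched, which is the promotion path the docs describe.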

Sessions and history

Keep multi-turn context, runtime history, and rehydration paths explicit instead of hiding state in framework internals.

Open sessions
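"Explicit rehydration" means history is plain data you can dump and reload, not state hidden in framework internals. A minimal sketch with assumed names:

```python
import json


class Session:
    """Illustrative session whose history is plain, serializable data."""

    def __init__(self, history=None):
        self.history = list(history or [])

    def add(self, role: str, text: str) -> None:
        self.history.append({"role": role, "text": text})

    def dump(self) -> str:
        return json.dumps(self.history)

    @classmethod
    def rehydrate(cls, raw: str) -> "Session":
        # Rebuild the same session from stored JSON, e.g. after a restart.
        return cls(json.loads(raw))


s = Session()
s.add("user", "My team owns checkout.")
saved = s.dump()
restored = Session.rehydrate(saved)
print(len(restored.history))  # 1
```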

Workflows and multi-agent

Coordinate teams, queues, and workflow graphs without tying application code to one orchestration stack.

Open multi-agent docs

Observability and hooks

Track event streams, middleware, and lifecycle hooks so the agent system stays explainable in production.

Open observability

Use cases

Useful when your agent has to survive real product constraints.

Cognitia is strongest when you need runtime portability and long-lived agent behavior, not just a single provider call hidden inside one helper function.

Internal copilots

Build assistants for operations, support, or internal tooling with tools, memory, and guardrails already wired in.

Tools · Sessions · Policy

Research and analysis

Combine web access, memory, and structured output for reports, briefs, research copilots, and analysis workflows.

Web · RAG · Structured output

Provider-agnostic backends

Keep product APIs stable while swapping models or runtimes for cost, latency, compliance, or operational reasons.

Thin · OpenRouter · Runtime portability

Multi-agent systems

Coordinate teams, queues, and workflow graphs without forcing the whole application into a graph-native mental model.

Task queue · Teams · Workflow graph

Quick win

Start small, then scale the system around it.

The fastest path is still simple: create an agent, ask a question, then add tools, sessions, or a different runtime when the product actually needs them.

1. Ask one useful question

from cognitia import Agent, AgentConfig

agent = Agent(
    AgentConfig(
        system_prompt="You summarize release notes for engineers.",
        runtime="thin",
    )
)

# `query` is async, so run this inside an async function (or an asyncio REPL)
result = await agent.query("Summarize the last deployment in 5 bullets.")
print(result.text)

2. Add one tool

from cognitia import tool

@tool
async def get_ticket_status(ticket_id: str) -> str:
    return f"{ticket_id}: in review"

agent = Agent(
    AgentConfig(
        system_prompt="You are a release assistant.",
        runtime="thin",
        tools=(get_ticket_status,),
    )
)

3. Keep the conversation alive

async with agent.conversation() as conv:
    await conv.say("My team owns checkout.")
    result = await conv.say("Which team owns checkout?")
    print(result.text)

Learn fast

Use the docs by intent, not as a file dump.

If you are evaluating the library, these are the shortest paths to understanding what it does, how to wire it, and where it fits in a real product stack.

Getting Started

Install, set credentials, create your first agent, and understand the default happy path.

Read the quick start

Runtimes

See which runtime to use first, what changes across them, and where each one shines.

Read the runtime guide

Use Cases

Map your product idea to the right runtime, storage, and capability set before you overbuild.

Open use cases

Cookbook

Copy-paste recipes for tools, structured output, streaming, and common application patterns.

Open recipes

Architecture

Understand the protocol-first layering and how Cognitia keeps domain and infrastructure separated.

Open architecture

Credentials & Providers

Wire env vars and provider settings correctly for Thin, Claude SDK, CLI, and DeepAgents paths.

Open provider setup

Good fit if you want agent infrastructure without framework lock-in.

Cognitia is strongest when you care about runtime portability, session state, tools, storage, and gradual adoption. Start with the default facade now, then layer in the rest only when your product actually demands it.