{
  "access": "public",
  "type": "reference",
  "format": "markdown",
  "title": "Core Concepts",
  "chunked": true,
  "url": "https://library.datagrout.ai/core-concepts",
  "summary": "The key abstractions in DataGrout and how they fit together.",
  "content_markdown": "# Core Concepts\n\nThe key abstractions in DataGrout and how they fit together.\n\n---\n\n## Servers\n\nA **server** is your multiplexed endpoint. It aggregates all your integrations behind a single URL. Your agents connect to the server, and the server handles routing to the right integration.\n\nEach server has:\n- A unique UUID and MCP endpoint URL\n- Its own set of integrations\n- Its own policy configuration (side effects, redaction, access controls)\n- Its own access tokens\n\nYou can create multiple servers for different purposes: one for production, one for staging, one per team.\n\n---\n\n## Integrations\n\nAn **integration** connects an external system to your server. This can be:\n- A cloud service (Salesforce, QuickBooks, HubSpot) via OAuth\n- Any MCP-compatible server via URL\n- An on-premise system via a Private Connector\n\nEach integration exposes **tools** that your agents can call. When you add Salesforce, you get tools like `salesforce@1/get_lead@1`, `salesforce@1/create_opportunity@1`, etc.\n\n---\n\n## Tools\n\nA **tool** is a callable operation with a typed schema. Every tool has:\n- A **name** following the pattern `{integration}@{version}/{action}@{version}`\n- A **description** in natural language\n- An **input schema** (JSON Schema) defining required and optional parameters\n- **Annotations** including Semio types, side effect classifications, and cost estimates\n\nDataGrout also provides its own tools (discovery, prism, logic, flow, inspect) that are always available on every server.\n\n---\n\n## Discovery\n\nThe **Discovery Engine** is how agents find and plan with tools. Instead of requiring agents to know which tools exist and how to chain them, Discovery provides:\n\n- **Semantic search**: Find tools by describing what you want to accomplish\n- **Planning**: Build multi-step workflows automatically and verify them before execution\n- **Guided exploration**: Navigate the tool mesh step-by-step with decision trees\n\nDiscovery uses a symbolic planning engine that searches exhaustively over your tool graph. Plans are verified for safety (no cycles, type compatibility, policy compliance, budget constraints) before any tool is called.\n\n---\n\n## Policies\n\n**Policies** control what agents can do. Every server has a policy configuration that includes:\n\n- **Side effect controls**: Restrict tools to none (read-only), read, write, or delete operations\n- **Destructive operation blocking**: Prevent delete/drop/purge operations\n- **PII controls**: Enable automatic detection and redaction of personally identifiable information\n- **Field-level redaction**: Mask specific fields (emails, phone numbers, SSNs) using configurable strategies\n\nPolicies cascade through a hierarchy: server-level defaults, integration-level overrides. A child policy can tighten restrictions but never loosen them.\n\n---\n\n## Cognitive Trust Certificates\n\nA **CTC** is a cryptographic proof that a workflow plan has been verified. 
When the planning engine validates a multi-step workflow, it issues a CTC containing:\n\n- **Compile-time assurances**: The plan is cycle-free, type-safe, policy-compliant, has available credentials, consumes all required inputs, and executes deterministically\n- **Runtime assurances**: Added after execution to confirm the plan ran as expected\n\nCTCs can be saved as reusable **skills**: verified workflows that can be executed again without re-verification.\n\n---\n\n## Credits\n\nEvery tool call through DataGrout has an associated cost in **credits**. The system provides:\n\n- **Estimates** before execution so your agent knows what a call will cost\n- **Receipts** after execution with the actual cost, broken down by component\n- **Budget controls** to prevent agents from exceeding spending limits\n\nCredits cover the cost of DataGrout's intelligence layer (discovery, planning, verification, redaction). Bring Your Own Key (BYOK) discounts apply when you use your own API keys for upstream services.\n\n---\n\n## Transports\n\nDataGrout supports two transport protocols:\n\n- **MCP (Model Context Protocol)**: The standard protocol for AI tool access. JSON-RPC 2.0 over HTTP with Server-Sent Events for streaming. Use this if your agent framework supports MCP.\n- **JSONRPC**: Plain HTTP POST with JSON-RPC 2.0 payloads. No SSE required. Use this for legacy applications or any system that can make HTTP requests.\n\nBoth transports provide identical access to the full intelligence layer. The Conduit SDK supports both.\n"
}