The SuperDoc SDK ships tool definitions that plug directly into OpenAI, Anthropic, Vercel AI, or any custom LLM integration. Pick tools, send them with your prompt, and dispatch the model’s tool calls; the SDK handles schema formatting, argument validation, and execution.
Documentation Index
Fetch the complete documentation index at: https://superdoc-nick-sd-2070-add-content-controls-namespace-to-doc.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
LLM tools are in alpha. Tool names and schemas may change between releases.
Quick start
Install the SDK, create a client, and wire up an agentic loop. Examples are available for:
- Node.js
- Python
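The quick-start loop can be sketched as follows. This is a minimal sketch, not the SDK's shipped code: choose_tools, dispatch, and fake_model are illustrative stand-ins for the SDK helpers and your LLM provider call.

```python
# Minimal agentic loop sketch. choose_tools, dispatch, and fake_model are
# stubs: in a real integration the first two come from the SuperDoc SDK
# and the reply comes from your LLM provider.

def choose_tools():
    return [{"name": "get_document_text"}, {"name": "apply_mutations"}]

def dispatch(client, name, args):
    if name == "get_document_text":
        return "Hello, world."
    raise ValueError(f"UNKNOWN_TOOL: {name}")

def fake_model(messages, tools):
    # Pretend the model reads the document once, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_document_text", "args": {}}}
    return {"content": "Done."}

def run_loop(client, prompt):
    tools = choose_tools()
    messages = [{"role": "user", "content": prompt}]
    while True:
        reply = fake_model(messages, tools)
        call = reply.get("tool_call")
        if call is None:
            return reply["content"]
        # Execute the tool call and feed the result back to the model.
        result = dispatch(client, call["name"], call["args"])
        messages.append({"role": "tool", "content": str(result)})

print(run_loop(None, "Summarize the document."))  # → Done.
```

The shape is the same regardless of provider: send tools with the prompt, execute any tool call, append the result, and repeat until the model answers in plain text.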
Tool selection
chooseTools() returns provider-formatted tool definitions ready to pass to your LLM. Examples are available for:
- Node.js
- Python
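A sketch of what selection involves, assuming a mode parameter with the two modes documented below and a groups filter; the parameter spellings and the group contents here are made up for illustration, not the SDK's real API.

```python
# Hypothetical sketch of tool selection; parameter names and the tool
# names inside each group are illustrative.

ESSENTIAL = ["get_document_text", "query_match", "apply_mutations",
             "get_node_by_id", "undo"]
GROUPS = {"format": ["toggle_bold", "set_alignment"],   # illustrative
          "tables": ["insert_row", "merge_cells"]}      # illustrative

def choose_tools(mode="essential", groups=()):
    names = list(ESSENTIAL)
    if mode == "essential":
        names.append("discover_tools")   # meta-tool: load groups on demand
    for g in (groups or (GROUPS if mode == "all" else ())):
        names += GROUPS[g]
    return [{"name": n} for n in names]

print(len(choose_tools()))  # 5 essentials + discover_tools
print([t["name"] for t in choose_tools(mode="all", groups=["tables"])])
```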
Essential mode (default)
Returns 5 essential tools plus discover_tools — a meta-tool that lets the LLM load more groups on demand. This keeps the initial context small while giving the model access to the full toolkit when needed.
The 5 essential tools:
| Tool | What it does |
|---|---|
| get_document_text | Returns the full plain-text content of the document |
| query_match | Searches by node type, text pattern, or both — returns matches with addresses |
| apply_mutations | Batch edit: rewrite, insert, delete text and apply formatting in one call |
| get_node_by_id | Get details about a specific node by its address |
| undo | Undo the last operation |
If you pass groups, those groups are loaded in addition to the essential set.
All mode
Returns every tool from the requested groups (or all groups if groups is omitted). The core group is always included.
Dispatching tool calls
dispatchSuperDocTool() resolves a tool name to the correct SDK method, validates arguments, and executes the call. Examples are available for:
- Node.js
- Python (sync)
- Python (async)
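In miniature, dispatching looks like this. The registry, handler, and address format are illustrative; the real SDK validates arguments against each tool's JSON schema before executing.

```python
# What dispatching does, in miniature: resolve the tool name, validate
# the arguments, execute. Registry contents and the "p:3" address are
# illustrative, not the SDK's real internals.

def dispatch_superdoc_tool(client, name, args):
    registry = {
        "query_match": ({"pattern"},
                        lambda c, a: [{"address": "p:3", "text": a["pattern"]}]),
    }
    if name not in registry:
        raise ValueError(f"UNKNOWN_TOOL: {name}")
    required, handler = registry[name]
    missing = required - set(args)
    if missing:
        raise ValueError(f"INVALID_ARGUMENT: missing {sorted(missing)}")
    return handler(client, args)

print(dispatch_superdoc_tool(None, "query_match", {"pattern": "Section 2"}))
```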
Tool groups
Tools are organized into 11 groups. In essential mode, the LLM can load any group dynamically via discover_tools.
| Group | Description |
|---|---|
| core | Read nodes, get text, find/replace, insert, delete, batch mutations |
| format | Bold, italic, underline, strikethrough, alignment, spacing, borders, shading |
| create | Create headings, paragraphs, tables, sections, table of contents |
| tables | Row/column operations, cell merging, table formatting, borders |
| sections | Page layout, margins, columns, headers/footers, page numbering |
| lists | Bullet and numbered lists, indentation, list type conversion |
| comments | Create, edit, delete, resolve, and list comment threads |
| trackChanges | List, inspect, accept, and reject tracked changes |
| toc | Table of contents — create, configure, refresh |
| history | Undo and redo |
| session | Open, save, close, and manage document sessions |
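The group list above can also be enumerated in code; this stub mirrors the purpose of the getAvailableGroups utility, with the 11 names taken from the table.

```python
# The 11 groups from the table above, as a stub mirroring the purpose of
# the SDK's getAvailableGroups helper.
def get_available_groups():
    return ["core", "format", "create", "tables", "sections", "lists",
            "comments", "trackChanges", "toc", "history", "session"]

print(len(get_available_groups()))  # 11
```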
The discover_tools pattern
When the LLM needs tools beyond the essential set, it calls discover_tools with the groups it wants. Your agentic loop handles this like any other tool call — dispatchSuperDocTool returns the new tool definitions, and you merge them into the next request.
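The merge step can be sketched like this; the returned tool definitions are stubbed, since in a real loop they come back from dispatchSuperDocTool.

```python
# The discover_tools pattern in miniature: when the model requests more
# groups, merge the returned definitions into the tool set sent on the
# next request. The definitions here are stubs.

def handle_tool_call(tools, call):
    if call["name"] == "discover_tools":
        new_tools = [{"name": f"{g}_example_tool"}      # stubbed definitions
                     for g in call["args"]["groups"]]
        merged = {t["name"]: t for t in tools + new_tools}  # dedupe by name
        return list(merged.values()), f"loaded: {call['args']['groups']}"
    return tools, "handled elsewhere"

tools = [{"name": "discover_tools"}, {"name": "apply_mutations"}]
tools, result = handle_tool_call(
    tools, {"name": "discover_tools", "args": {"groups": ["tables"]}})
print([t["name"] for t in tools])
```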
Providers
Each provider gets tool definitions in its native format:
- OpenAI
- Anthropic
- Vercel AI
- Generic
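To make "native format" concrete, here is the same tool rendered for two providers. The envelope shapes follow the public OpenAI and Anthropic tool-calling APIs; the schema content is illustrative, and the SDK does this conversion for you.

```python
# One tool, two provider-native envelopes. OpenAI wraps the JSON Schema
# under function.parameters; Anthropic uses a flat input_schema field.

schema = {"type": "object",
          "properties": {"pattern": {"type": "string"}},
          "required": ["pattern"]}

def to_openai(name, description, parameters):
    return {"type": "function",
            "function": {"name": name, "description": description,
                         "parameters": parameters}}

def to_anthropic(name, description, parameters):
    return {"name": name, "description": description,
            "input_schema": parameters}

print(to_openai("query_match", "Search the document", schema)["type"])
print("input_schema" in to_anthropic("query_match", "Search the document", schema))
```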
Best practices
Start with essential mode
Load only the 5 essential tools plus discover_tools. This keeps the context window small and gives the model room to reason. Let it call discover_tools when it needs more — don’t front-load every group.
Minimize tool calls
A typical edit should take 3–5 tool calls: query, mutate, done. Instruct the LLM to plan all edits before calling tools, and to batch multiple changes into a single apply_mutations call when possible.
Use apply_mutations for text edits
apply_mutations can rewrite, insert, delete, and format text in one call. It supports multiple steps, so the LLM can edit several paragraphs at once. Use it for any operation on existing text.
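A hypothetical shape for such a batched call; the real apply_mutations step schema is not reproduced here, and the stub only illustrates the point of packing several edits into one call.

```python
# Hypothetical shape of a batched apply_mutations call. The real step
# schema may differ; the point is batching several edits into ONE tool
# call rather than one call per paragraph.

steps = [
    {"op": "rewrite", "address": "p:2", "text": "Revised intro."},
    {"op": "delete",  "address": "p:7"},
]

def apply_mutations(doc, steps):   # stub acting on a dict "document"
    for step in steps:
        if step["op"] == "rewrite":
            doc[step["address"]] = step["text"]
        elif step["op"] == "delete":
            doc.pop(step["address"], None)
    return doc

doc = {"p:2": "Old intro.", "p:7": "Scratch text."}
print(apply_mutations(doc, steps))  # one call, two edits
```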
Feed errors back to the model
dispatchSuperDocTool throws descriptive errors with codes like MATCH_NOT_FOUND or INVALID_ARGUMENT. Pass these back as tool results — most models self-correct on the next turn.
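One way to wire that up: catch the exception and return its text as the tool result instead of crashing the loop. The failing dispatch is stubbed here with one of the documented error codes.

```python
# Feeding a dispatch error back as a tool result. The dispatch stub
# always fails with MATCH_NOT_FOUND, one of the documented codes.

def dispatch(client, name, args):
    raise ValueError("MATCH_NOT_FOUND: no node matches 'Section 9'")

def call_tool(client, call):
    try:
        return {"ok": True,
                "result": dispatch(client, call["name"], call["args"])}
    except ValueError as err:
        # Most models adjust their next call once they see the code.
        return {"ok": False, "error": str(err)}

outcome = call_tool(None, {"name": "query_match",
                           "args": {"pattern": "Section 9"}})
print(outcome["error"])
```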
Add tool call examples for repeatable actions
If your workflow involves the same kind of edit across many documents (e.g., always rewriting a specific clause, always adding a comment to a section), include a concrete tool call example in your system prompt. Models that see a working example of the exact tool invocation produce correct calls more reliably than models that only see the schema.
Include a system prompt
Tell the model what it can do and how to approach edits: which tools are available, to read before editing, and to plan and batch changes.
Utility functions
| Function | Description |
|---|---|
| chooseTools(input) | Select tools for a provider, filtered by mode and groups |
| dispatchSuperDocTool(client, name, args) | Execute a tool call against a connected client |
| listTools(provider) | List all tool definitions for a provider |
| resolveToolOperation(toolName) | Map a tool name to its operation ID |
| getToolCatalog() | Load the full tool catalog with metadata |
| getAvailableGroups() | List all available tool groups |
Related
- MCP Server — connect AI agents via the Model Context Protocol
- Skills — reusable prompt templates for LLM document editing
- SDKs — typed Node.js and Python wrappers
- Document API — the operation set behind the tools
- AI Agents — headless mode for server-side AI workflows

