# Documentation Index

Fetch the complete documentation index at: https://docs.snorbe.deskrex.ai/llms.txt
Use this file to discover all available pages before exploring further.

# Agent Run Workflows
The agent analyzes user input and automatically selects tools for execution. Some tools (plan / report / matrix) include Human-in-the-Loop (HITL) checkpoints that require human confirmation before proceeding.
This page describes all execution patterns, their event flows, and the API operations needed at each state.
## Common Flow

All executions start with the same pattern:

POST /api/v1/agent/run/stream

```
config → delta (multiple) → [tool branching] → complete
```

| SSE Event | When |
|---|---|
| config | Once at stream start |
| delta | Each time the agent generates text |
| step | On each execution step completion |
| complete | On execution finish (last event in stream) |
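Each event arrives as a standard SSE `data:` line. A minimal decoder for one line might look like the sketch below; the only assumption is the wire format shown in the streams on this page:

```python
import json

def parse_sse_line(line: str):
    """Decode one SSE line into an event dict; returns None for non-data lines."""
    if not line.startswith("data: "):
        return None
    return json.loads(line[len("data: "):])

event = parse_sse_line('data: {"type": "delta", "payload": {"deltaText": "Hi"}}')
```

The full client examples later on this page apply the same `data: ` prefix check inline.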
The agent analyzes input via chat-routing and automatically selects from these tools:
| Tool | Description | HITL |
|---|---|---|
| Direct answer | Responds with text without tools | No |
| search | Web search (SERP + scraping + summarization) | No |
| x_search | X (Twitter) post search | No |
| browse | Browser automation for research | Conditional |
| source_summary | URL scraping / file summarization | No |
| recall | Search past research and memory | No |
| plan | Multi-step research plan | Yes |
| report | Report / long-form document generation | Yes |
| matrix | Structured comparison matrix creation | Yes |
| skill | Code execution in sandbox | No |
| extract_related_urls | Discover links within a website | No |
## Pattern 1: Direct Answer

The agent responds with text without using any tools.

```
config → delta → step → complete
```

API operation: None needed
## Pattern 2: search (Web Search)

```
config
→ delta (tool selection)
→ step (tool-calls)
→ search-query-generation-start
→ search-query-generated (multiple queries)
→ search-results → search-scraping
→ search-summary-start → search-summary-delta → search-summary-complete
→ delta (answer based on search results)
→ step → complete
```

API operation: None (fully automatic)
## Pattern 3: browse (Browser Automation)

```
config
→ browse-start (includes VNC connection info)
→ browse-step (multiple: screenshots, actions)
→ browse-final (result)
→ browse-end
→ delta → complete
```

API operation: None. Use the maxBrowsingSteps parameter to control the step limit.
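For example, a run request that caps browser automation at 15 steps might look like the sketch below. Placing maxBrowsingSteps at the top level of the request body is an assumption; check the API reference for the exact field location.

```python
# Hypothetical request body for POST /api/v1/agent/run/stream;
# the placement of maxBrowsingSteps is an assumption.
body = {
    "modelName": "gpt-5-mini-2025-08-07",
    "promptKey": "chat-routing",
    "inputText": "Find the pricing page and summarize the tiers",
    "locale": "en",
    "maxBrowsingSteps": 15,
}
```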
Browse is usually fully automatic, but it can ask a human for help on login pages, cookie prompts, or pages where a decision is needed. This is different from the plan/report/matrix HITL flow.
```
browse-start (store websocketInfo.session_id)
→ browse-step
→ browse-ask-human (includes question)
→ ★ browser automation waits for an answer ★
```
Do not use /agent/run/{runId}/plan/answer or the other run HITL endpoints for this state. Use the websocketInfo.session_id from the browse-start event and answer through the browser control API.
```bash
curl -X POST "https://app.snorbe.deskrex.ai/api/v1/browser/answer-question" \
  -H "Authorization: Bearer snorbe_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "runId": "cmo...",
    "sessionId": "browser-session-id",
    "input": "Open the Documents menu on the left"
  }'
```
Use /browser/answer-question-with-files when the answer needs files.
```json
{
  "runId": "cmo...",
  "sessionId": "browser-session-id",
  "modelName": "gpt-5-mini-2025-08-07",
  "input": "Use this PDF as context for the answer",
  "fileUrls": ["https://example.com/file.pdf"]
}
```
If the browser is active but not waiting for a question, use /browser/spontaneous-input to steer it. Examples: “Open the pricing page next” or “Do not submit that form.”
GET /agent/run/{runId}/status can show browseState.askHumanQuestion, so you can detect that the browser is waiting. The answer still requires sessionId; API clients should store browse-start.payload.websocketInfo.session_id when it appears in the SSE stream.
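Since the same stored sessionId is needed for both answer endpoints, a small body builder can keep the two variants consistent. The fields mirror the curl and JSON examples above; adding modelName only for the with-files variant is an assumption based on those examples:

```python
def browser_answer_body(run_id: str, session_id: str, text: str, file_urls=None) -> dict:
    """Body for /browser/answer-question, or /browser/answer-question-with-files
    when file_urls is given (modelName added per the with-files example above)."""
    body = {"runId": run_id, "sessionId": session_id, "input": text}
    if file_urls:
        body["fileUrls"] = list(file_urls)
        body["modelName"] = "gpt-5-mini-2025-08-07"
    return body
```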
## Pattern 4: skill (Sandbox Execution)

```
config
→ skill-session-start
→ skill-delta (multiple: stdout streaming)
→ skill-complete (includes outputFiles)
→ delta → complete
```

API operation: None (fully automatic)
## Pattern 5: plan (Research Plan) — HITL

```
config
→ delta
→ first_plan (includes goal, steps, question)
→ ★ stream ends ★
```
Step 1: Check status

GET /api/v1/agent/run/{runId}/status

```json
{
  "status": "pending",
  "pendingPlanDraft": true
}
```
Step 2: Confirmation action (choose one)
Answer questions to revise the draft:
POST /api/v1/agent/run/{runId}/plan/answer
```json
{
  "runId": "cmo...",
  "answer": "Also research X please",
  "modelName": "gpt-5-mini-2025-08-07"
}
```
→ A regenerated_plan event is returned, and the draft is pending review again.
Confirm the plan:
POST /api/v1/agent/run/{runId}/plan/confirm
→ plan_confirmed is returned.
Skip questions and confirm:
POST /api/v1/agent/run/{runId}/plan/skip
Step 3: Resume execution

POST /api/v1/agent/run/stream/{runId}

```json
{ "modelName": "gpt-5-mini-2025-08-07" }
```

→ Research begins with search/browse/etc. tool events streaming.
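Steps 2 and 3 can be combined into one helper. The sketch below auto-confirms without feedback; the deferred requests import keeps the URL helper dependency-free:

```python
BASE = "https://app.snorbe.deskrex.ai/api/v1"

def plan_url(run_id: str, action: str) -> str:
    """Endpoint for a plan HITL action: 'answer', 'confirm', or 'skip'."""
    return f"{BASE}/agent/run/{run_id}/plan/{action}"

def confirm_and_resume(run_id: str, api_key: str):
    """Confirm the pending plan draft, then reopen the run stream to resume."""
    import requests  # deferred so the URL helper above stays dependency-free

    headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    requests.post(plan_url(run_id, "confirm"), headers=headers,
                  json={"runId": run_id}).raise_for_status()
    return requests.post(
        f"{BASE}/agent/run/stream/{run_id}",
        headers={**headers, "Accept": "text/event-stream"},
        json={"modelName": "gpt-5-mini-2025-08-07"},
        stream=True,
    )
```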
## Pattern 6: report (Report Generation) — HITL

```
config
→ delta
→ first_report_structure (includes title, sections, question)
→ ★ stream ends ★
```
Step 1: Check status
GET /api/v1/agent/run/{runId}/status
```json
{
  "status": "pending",
  "pendingReportDraft": true
}
```
Step 2: Confirmation action
Answer questions:
POST /api/v1/agent/run/{runId}/report/answer
```json
{
  "runId": "cmo...",
  "answer": "Add a section about X",
  "modelName": "gpt-5-mini-2025-08-07"
}
```
Confirm:
POST /api/v1/agent/run/{runId}/report/confirm
Step 3: Resume → Automatic section generation
POST /api/v1/agent/run/stream/{runId}
```
→ report_section_start
→ report_section_delta (multiple) → report_section_complete
→ report_section_delta → report_section_complete (repeated per section)
→ report_complete (full text)
→ complete
```
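If you buffer the resumed stream's events, the final report can be assembled as below. The payload field names (deltaText on section deltas, text on report_complete) follow the conventions used elsewhere on this page but are assumptions here:

```python
def collect_report(events: list) -> str:
    """Prefer the full text from report_complete; otherwise join section deltas."""
    for e in events:
        if e["type"] == "report_complete":
            return e["payload"].get("text", "")
    return "".join(
        e["payload"].get("deltaText", "")
        for e in events
        if e["type"] == "report_section_delta"
    )
```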
## Pattern 7: matrix (Matrix Generation) — HITL

```
config
→ delta
→ first_matrix_structure (includes title, columns, question)
→ ★ stream ends ★
```
Step 1: Check status
GET /api/v1/agent/run/{runId}/status
```json
{
  "status": "pending",
  "pendingMatrixDraft": true
}
```
Step 2: Confirmation action
Answer questions:
POST /api/v1/agent/run/{runId}/matrix/answer
```json
{
  "runId": "cmo...",
  "answer": "Add a price column",
  "modelName": "gpt-5-mini-2025-08-07"
}
```
Confirm:
POST /api/v1/agent/run/{runId}/matrix/confirm
POST /api/v1/agent/run/stream/{runId}
```
→ matrix-structure-draft-delta → matrix-structure-draft-complete
→ matrix-data-preview (real-time)
→ matrix-data-completed
→ complete
```
## Compound Pattern: plan → report

A single execution may trigger multiple HITL checkpoints:

```
first_plan → [confirm] → resume
→ [search/browse for research]
→ first_report_structure → ★ paused ★
→ [confirm] → resume
→ report_section_delta (multiple) → report_complete → complete
```
In this case, follow the resume → check pendingReportDraft → confirm → resume loop.
## Status Quick Reference

| getStatus response | Meaning | Next API operation |
|---|---|---|
| pendingPlanDraft: true | Plan review pending | plan/answer or plan/confirm or plan/skip → resume |
| pendingReportDraft: true | Report structure review pending | report/answer or report/confirm → resume |
| pendingMatrixDraft: true | Matrix structure review pending | matrix/answer or matrix/confirm → resume |
| browseState.askHumanQuestion present | Browser automation is waiting for an answer | browser/answer-question. Do not call /agent/run/stream/{runId} |
| browseState.isBrowsing: true | Browser automation is active | Use browser/spontaneous-input only if you need to steer it; otherwise wait |
| skillState.pendingSecretKeys present | Skill is waiting for secrets | Register missing keys through /secret. Usually no resume call is needed |
| All false + status: "completed" | Done | None |
| All false + status: "running" | In progress | Poll and wait |
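The table above can be folded into a single dispatcher. This sketch returns a label for the next operation, checking the pending states in the same priority order a client would act on them:

```python
def next_action(status: dict) -> str:
    """Map a GET /agent/run/{runId}/status payload to the next API operation."""
    browse = status.get("browseState") or {}
    skill = status.get("skillState") or {}
    if status.get("pendingPlanDraft"):
        return "plan/answer|confirm|skip, then resume"
    if status.get("pendingReportDraft"):
        return "report/answer|confirm, then resume"
    if status.get("pendingMatrixDraft"):
        return "matrix/answer|confirm, then resume"
    if browse.get("askHumanQuestion"):
        return "browser/answer-question"
    if skill.get("pendingSecretKeys"):
        return "register keys via /secret"
    if browse.get("isBrowsing"):
        return "wait (optionally browser/spontaneous-input)"
    if status.get("status") == "completed":
        return "done"
    return "poll"
```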
## Skill secret requests

When a skill needs external API keys or other secrets, the SSE stream emits skill-ask-secret, and GET /agent/run/{runId}/status exposes the missing keys in skillState.pendingSecretKeys.

```json
{
  "skillState": {
    "isRunningSkill": true,
    "skillName": "patent-search",
    "pendingSecretKeys": ["PATENT_API_KEY"]
  }
}
```
Register each missing key through /secret.
```bash
curl -X POST "https://app.snorbe.deskrex.ai/api/v1/secret" \
  -H "Authorization: Bearer snorbe_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "key": "PATENT_API_KEY",
    "value": "your-secret-value"
  }'
```
Secret registration notifies the waiting skill, so you usually do not call /agent/run/stream/{runId} again. Keep reading the same SSE stream, or poll GET /agent/run/{runId}/status until pendingSecretKeys is empty.
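A client that registers keys one at a time can diff the pending list against what it has already sent. The field names follow the status example above:

```python
def keys_to_register(status: dict, already_registered) -> list:
    """Pending secret keys from a status payload not yet sent to /secret."""
    pending = (status.get("skillState") or {}).get("pendingSecretKeys") or []
    return [k for k in pending if k not in set(already_registered)]
```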
## Answering plan / report / matrix drafts

pendingPlanDraft, pendingReportDraft, and pendingMatrixDraft all mean that a draft is waiting for review. If you want changes, send feedback to /answer and review the regenerated draft. When the draft is acceptable, call /confirm, then resume execution with /agent/run/stream/{runId}.

| State | Revise / answer | Confirm | Resume |
|---|---|---|---|
| pendingPlanDraft: true | POST /agent/run/{runId}/plan/answer | POST /agent/run/{runId}/plan/confirm or POST /agent/run/{runId}/plan/skip | POST /agent/run/stream/{runId} |
| pendingReportDraft: true | POST /agent/run/{runId}/report/answer | POST /agent/run/{runId}/report/confirm | POST /agent/run/stream/{runId} |
| pendingMatrixDraft: true | POST /agent/run/{runId}/matrix/answer | POST /agent/run/{runId}/matrix/confirm | POST /agent/run/stream/{runId} |
## /answer request body

Plan, Report, and Matrix /answer endpoints use the same body shape.

```json
{
  "runId": "cmo...",
  "answer": "Describe what to add, remove, narrow, or revise in the draft",
  "modelName": "gpt-5-mini-2025-08-07",
  "fileUrls": ["https://example.com/reference.pdf"]
}
```
| Field | Required | Description |
|---|---|---|
| runId | Yes | Target AgentRun ID. Same value as {runId} in the path |
| answer | Yes | Feedback or revision instruction for the draft |
| modelName | Yes | Model used to regenerate the draft |
| fileUrls | No | File URLs to use as additional context |
Example:
```bash
curl -X POST "https://app.snorbe.deskrex.ai/api/v1/agent/run/{runId}/matrix/answer" \
  -H "Authorization: Bearer snorbe_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "runId": "{runId}",
    "answer": "Add columns for pricing, customer examples, and primary use cases",
    "modelName": "gpt-5-mini-2025-08-07"
  }'
```
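Because all three /answer endpoints accept the same shape, one builder covers them. Omitting fileUrls when it is empty matches the field table above, where it is optional:

```python
def answer_body(run_id: str, answer: str, model_name: str, file_urls=None) -> dict:
    """Shared request body for plan/report/matrix /answer endpoints."""
    body = {"runId": run_id, "answer": answer, "modelName": model_name}
    if file_urls:
        body["fileUrls"] = list(file_urls)
    return body
```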
## /confirm and /skip request body
Confirmation endpoints only need runId.
plan/skip means “confirm the plan without additional feedback.” Report and Matrix do not have skip endpoints.
## Basic API Loop
1. POST /agent/run/stream → Receive SSE events
2. On browse-start, store session_id
3. On browse-ask-human, answer with /browser/answer-question
4. Stream ends on complete event
5. GET /agent/run/{runId}/status → Check pending*Draft, browseState, and skillState
6. If plan/report/matrix is pending → answer or confirm → POST /agent/run/stream/{runId} to resume
7. If skillState.pendingSecretKeys exists → register missing keys through /secret
8. Repeat 4-7 until complete
plan, report, and matrix pause the whole agent run, so you resume with /agent/run/stream/{runId} after confirming. A browse question pauses the browser session, so answer with /browser/answer-question and let the same run continue.
## Implementation Examples

Below are complete client implementations showing how to combine the API endpoints for the full agent lifecycle.

### Simple Execution (No HITL)

For cases without HITL: direct answers, search, browse, or skill tools.
```python
import requests
import json

API_KEY = "snorbe_your_api_key_here"
BASE = "https://app.snorbe.deskrex.ai/api/v1"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Accept": "text/event-stream",
    "Content-Type": "application/json",
}

def run_agent(prompt: str) -> str:
    """Execute agent and return final text (no HITL)."""
    resp = requests.post(
        f"{BASE}/agent/run/stream",
        headers=HEADERS,
        json={
            "modelName": "gpt-5-mini-2025-08-07",
            "promptKey": "chat-routing",
            "inputText": prompt,
            "locale": "en",
        },
        stream=True,
        timeout=300,
    )
    result = {}
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue
        event = json.loads(line[6:])
        if event["type"] == "delta":
            print(event["payload"]["deltaText"], end="", flush=True)
        elif event["type"] == "complete":
            result = event["payload"]
            print()
    return result.get("text", "")

# Usage
text = run_agent("What are the top 3 AI news this week?")
```
### Full HITL Loop

Handles plan → report → matrix with multiple HITL confirmations.
```python
import requests
import json
import time

API_KEY = "snorbe_your_api_key_here"
BASE = "https://app.snorbe.deskrex.ai/api/v1"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Accept": "text/event-stream",
    "Content-Type": "application/json",
}

def stream_until_complete(run_url: str, body: dict) -> dict:
    """Read SSE stream and return the complete event payload."""
    resp = requests.post(run_url, headers=HEADERS, json=body, stream=True, timeout=600)
    result = {}
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue
        event = json.loads(line[6:])
        if event["type"] == "delta":
            print(event["payload"].get("deltaText", ""), end="", flush=True)
        elif event["type"] == "first_plan":
            print(f"\n[Plan] {event['payload']['plan']['goal'][:80]}...")
        elif event["type"] == "first_report_structure":
            print(f"\n[Report] {event['payload']['title']}")
        elif event["type"] == "first_matrix_structure":
            print(f"\n[Matrix] {event['payload']['title']}")
        elif event["type"] == "complete":
            result = event["payload"]
            print()
        elif event["type"] == "error":
            raise RuntimeError(event["payload"]["message"])
    return result

def get_status(run_id: str) -> dict:
    """Get execution status."""
    resp = requests.get(
        f"{BASE}/agent/run/{run_id}/status",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    return resp.json()

def confirm_draft(run_id: str, draft_type: str) -> None:
    """Confirm the pending HITL draft without extra feedback.

    plan supports /skip (skip questions and confirm); report and matrix
    only have /confirm.
    """
    action = "skip" if draft_type == "plan" else "confirm"
    resp = requests.post(
        f"{BASE}/agent/run/{run_id}/{draft_type}/{action}",
        headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
        json={"runId": run_id},
    )
    data = resp.json()
    if data.get("status") != "confirmed":
        raise RuntimeError(f"Confirm failed: {data}")

def run_agent_with_hitl(prompt: str) -> str:
    """Full agent execution with HITL handling."""
    # Step 1: Initial execution
    result = stream_until_complete(
        f"{BASE}/agent/run/stream",
        {
            "modelName": "gpt-5-mini-2025-08-07",
            "promptKey": "chat-routing",
            "inputText": prompt,
            "locale": "en",
        },
    )
    run_id = result.get("runId")
    if not run_id:
        return result.get("text", "")

    # Step 2: HITL loop
    for _ in range(10):
        status = get_status(run_id)
        if status["status"] == "completed":
            break
        hitl_type = None
        if status.get("pendingPlanDraft"):
            hitl_type = "plan"
        elif status.get("pendingReportDraft"):
            hitl_type = "report"
        elif status.get("pendingMatrixDraft"):
            hitl_type = "matrix"
        if not hitl_type:
            time.sleep(5)
            continue

        # Step 3: Confirm → Resume
        print(f"\n[HITL] {hitl_type} confirmation required, auto-confirming...")
        confirm_draft(run_id, hitl_type)
        result = stream_until_complete(
            f"{BASE}/agent/run/stream/{run_id}",
            {"modelName": "gpt-5-mini-2025-08-07"},
        )
    return result.get("text", "")

# Usage: plan → research → report, fully automated
text = run_agent_with_hitl(
    "Research the latest AI agent trends and create a report"
)
print(f"\nFinal result: {text[:200]}...")
```
### Human-in-the-Loop Confirmation

Instead of auto-confirming, show questions to a human for decision.
```python
def confirm_draft_with_human(run_id: str, draft_type: str) -> None:
    """Show question and let human decide: answer or confirm."""
    answer = input(f"\n[{draft_type}] Answer (Enter to skip): ").strip()
    if answer:
        resp = requests.post(
            f"{BASE}/agent/run/{run_id}/{draft_type}/answer",
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "application/json",
            },
            json={
                "runId": run_id,
                "answer": answer,
                "modelName": "gpt-5-mini-2025-08-07",
            },
        )
        print(f"Answer sent → {resp.json().get('status')}")
    else:
        confirm_draft(run_id, draft_type)
```
### Non-Streaming Polling Approach

Use the non-streaming API with polling instead of SSE.
```python
def run_agent_polling(prompt: str) -> str:
    """Non-streaming execution with polling."""
    # Step 1: Start execution
    resp = requests.post(
        f"{BASE}/agent/run",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "modelName": "gpt-5-mini-2025-08-07",
            "promptKey": "chat-routing",
            "inputText": prompt,
            "locale": "en",
        },
        timeout=300,
    )
    data = resp.json()
    run_id = data["runId"]

    # Step 2: Poll for completion or HITL
    while True:
        status = get_status(run_id)
        if status["status"] == "completed":
            return data.get("text", "")
        for draft_type in ["plan", "report", "matrix"]:
            if status.get(f"pending{draft_type.capitalize()}Draft"):
                print(f"[HITL] {draft_type} pending, auto-skipping...")
                confirm_draft(run_id, draft_type)
                resp = requests.post(
                    f"{BASE}/agent/run/{run_id}/resume",
                    headers={
                        "Authorization": f"Bearer {API_KEY}",
                        "Content-Type": "application/json",
                    },
                    json={"runId": run_id, "modelName": "gpt-5-mini-2025-08-07"},
                    timeout=300,
                )
                data = resp.json()
                break
        else:
            time.sleep(5)
```
### TypeScript Example
```typescript
async function* consumeSSE(
  url: string,
  body: Record<string, unknown>,
  apiKey: string,
): AsyncGenerator<{ type: string; payload: Record<string, unknown> }> {
  const resp = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      Accept: "text/event-stream",
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  const reader = resp.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    const parts = buffer.split("\n\n");
    buffer = parts.pop() ?? "";
    for (const part of parts) {
      for (const line of part.split("\n")) {
        if (line.startsWith("data: ")) {
          yield JSON.parse(line.slice(6));
        }
      }
    }
  }
}

async function runAgent(prompt: string, apiKey: string): Promise<string> {
  const BASE = "https://app.snorbe.deskrex.ai/api/v1";
  let result: Record<string, unknown> = {};

  // Step 1: Initial streaming
  let runId = "";
  for await (const event of consumeSSE(
    `${BASE}/agent/run/stream`,
    { modelName: "gpt-5-mini-2025-08-07", promptKey: "chat-routing", inputText: prompt, locale: "en" },
    apiKey,
  )) {
    if (event.type === "delta") process.stdout.write((event.payload.deltaText as string) ?? "");
    else if (event.type === "complete") { result = event.payload; runId = event.payload.runId as string; }
  }

  // Step 2: HITL loop
  for (let i = 0; i < 10; i++) {
    const statusResp = await fetch(`${BASE}/agent/run/${runId}/status`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const status = await statusResp.json();
    if (status.status === "completed") break;
    const draftType = status.pendingPlanDraft ? "plan"
      : status.pendingReportDraft ? "report"
      : status.pendingMatrixDraft ? "matrix" : null;
    if (!draftType) { await new Promise(r => setTimeout(r, 5000)); continue; }

    // Confirm → Resume (plan supports /skip; report and matrix only have /confirm)
    const action = draftType === "plan" ? "skip" : "confirm";
    await fetch(`${BASE}/agent/run/${runId}/${draftType}/${action}`, {
      method: "POST",
      headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
      body: JSON.stringify({ runId }),
    });
    for await (const event of consumeSSE(
      `${BASE}/agent/run/stream/${runId}`,
      { modelName: "gpt-5-mini-2025-08-07" },
      apiKey,
    )) {
      if (event.type === "delta") process.stdout.write((event.payload.deltaText as string) ?? "");
      else if (event.type === "complete") result = event.payload;
    }
  }
  console.log();
  return (result.text as string) ?? "";
}
```