# Inference CLI

Infer-first CLI for provider-backed model, image, audio, TTS, video, web, and embedding workflows.
`openclaw infer` is the canonical headless surface for provider-backed inference workflows.
It intentionally exposes capability families, not raw gateway RPC names and not raw agent tool IDs.
## Turn infer into a skill
Copy and paste this to an agent:
```
Read https://docs.openclaw.ai/cli/infer, then create a skill that routes my common workflows to `openclaw infer`.
Focus on model runs, image generation, video generation, audio transcription, TTS, web search, and embeddings.
```
A good infer-based skill should:
- map common user intents to the correct infer subcommand
- include a few canonical infer examples for the workflows it covers
- prefer `openclaw infer ...` in examples and suggestions
- avoid re-documenting the entire infer surface inside the skill body
Typical infer-focused skill coverage:
- `openclaw infer model run`
- `openclaw infer image generate`
- `openclaw infer audio transcribe`
- `openclaw infer tts convert`
- `openclaw infer web search`
- `openclaw infer embedding create`
## Why use infer
`openclaw infer` provides one consistent CLI for provider-backed inference tasks inside OpenClaw.
Benefits:
- Use the providers and models already configured in OpenClaw instead of wiring up one-off wrappers for each backend.
- Keep model, image, audio transcription, TTS, video, web, and embedding workflows under one command tree.
- Use a stable `--json` output shape for scripts, automation, and agent-driven workflows.
- Prefer a first-party OpenClaw surface when the task is fundamentally "run inference."
- Use the normal local path without requiring the gateway for most infer commands.
For end-to-end provider checks, prefer `openclaw infer ...` once lower-level
provider tests are green. It exercises the shipped CLI, config loading,
default-agent resolution, bundled plugin activation, runtime-dependency repair,
and the shared capability runtime before the provider request is made.
## Command tree
```
openclaw infer
  list
  inspect
  model
    run
    list
    inspect
    providers
    auth login
    auth logout
    auth status
  image
    generate
    edit
    describe
    describe-many
    providers
  audio
    transcribe
    providers
  tts
    convert
    voices
    providers
    status
    enable
    disable
    set-provider
  video
    generate
    describe
    providers
  web
    search
    fetch
    providers
  embedding
    create
    providers
```
## Common tasks
This table maps common inference tasks to the corresponding infer command.
| Task | Command | Notes |
|---|---|---|
| Run a text/model prompt | `openclaw infer model run --prompt "..." --json` | Uses the normal local path by default |
| Run a model prompt on images | `openclaw infer model run --prompt "Describe this" --file ./image.png --model provider/model` | Repeat `--file` for multiple image inputs |
| Generate an image | `openclaw infer image generate --prompt "..." --json` | Use `image edit` when starting from an existing file |
| Describe an image file | `openclaw infer image describe --file ./image.png --prompt "..." --json` | `--model` must be an image-capable `<provider/model>` |
| Transcribe audio | `openclaw infer audio transcribe --file ./memo.m4a --json` | `--model` must be `<provider/model>` |
| Synthesize speech | `openclaw infer tts convert --text "..." --output ./speech.mp3 --json` | `tts status` is gateway-oriented |
| Generate a video | `openclaw infer video generate --prompt "..." --json` | Supports provider hints such as `--resolution` |
| Describe a video file | `openclaw infer video describe --file ./clip.mp4 --json` | `--model` must be `<provider/model>` |
| Search the web | `openclaw infer web search --query "..." --json` | |
| Fetch a web page | `openclaw infer web fetch --url https://example.com --json` | |
| Create embeddings | `openclaw infer embedding create --text "..." --json` | |
## Behavior
- `openclaw infer ...` is the primary CLI surface for these workflows.
- Use `--json` when the output will be consumed by another command or script.
- Use `--provider` or `--model provider/model` when a specific backend is required.
- For `image describe`, `audio transcribe`, and `video describe`, `--model` must use the form `<provider/model>`.
- For `image describe`, an explicit `--model` runs that provider/model directly. The model must be image-capable in the model catalog or provider config. `codex/<model>` runs a bounded Codex app-server image-understanding turn; `openai-codex/<model>` uses the OpenAI Codex OAuth provider path.
- Stateless execution commands default to local.
- Gateway-managed state commands default to gateway (the sketch after this list contrasts the two paths).
- The normal local path does not require the gateway to be running.
- Local `model run` is a lean one-shot provider completion. It resolves the configured agent model and auth, but does not start a chat-agent turn, load tools, or open bundled MCP servers.
- `model run --file` accepts image files, detects their MIME type, and sends them with the supplied prompt to the selected model. Repeat `--file` for multiple images.
- `model run --file` rejects non-image inputs. Use `infer audio transcribe` for audio files and `infer video describe` for video files.
- `model run --gateway` exercises Gateway routing, saved auth, provider selection, and the embedded runtime, but still runs as a raw model probe: it sends the supplied prompt and any image attachments without prior session transcript, bootstrap/AGENTS context, context-engine assembly, tools, or bundled MCP servers.
- `model run --gateway --model <provider/model>` requires a trusted operator gateway credential because the request asks the Gateway to run a one-off provider/model override.
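As a quick illustration of those defaults, the sketch below contrasts the local path with a Gateway-routed probe. Both commands use only flags documented on this page; the prompt text is an arbitrary probe string.

```bash
# Local one-shot probe (default): no gateway required, no tools, no session context.
openclaw infer model run --prompt "Reply with exactly: smoke-ok" --json

# Same raw probe routed through the Gateway: exercises Gateway routing and saved
# auth, but still sends only the supplied prompt.
openclaw infer model run --gateway --prompt "Reply with exactly: smoke-ok" --json
```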
## Model
Use `model` for provider-backed text inference and model/provider inspection.
```bash
openclaw infer model run --prompt "Reply with exactly: smoke-ok" --json
openclaw infer model run --prompt "Summarize this changelog entry" --model openai/gpt-5.4 --json
openclaw infer model run --prompt "Describe this image in one sentence" --file ./photo.jpg --model google/gemini-2.5-flash --json
openclaw infer model providers --json
openclaw infer model inspect --name gpt-5.5 --json
```
Use full `<provider/model>` refs to smoke-test a specific provider without
starting the Gateway or loading the full agent tool surface:
```bash
openclaw infer model run --local --model anthropic/claude-sonnet-4-6 --prompt "Reply with exactly: pong" --json
openclaw infer model run --local --model cerebras/zai-glm-4.7 --prompt "Reply with exactly: pong" --json
openclaw infer model run --local --model google/gemini-2.5-flash --prompt "Reply with exactly: pong" --json
openclaw infer model run --local --model groq/llama-3.1-8b-instant --prompt "Reply with exactly: pong" --json
openclaw infer model run --local --model mistral/mistral-small-latest --prompt "Reply with exactly: pong" --json
openclaw infer model run --local --model openai/gpt-4.1 --prompt "Reply with exactly: pong" --json
openclaw infer model run --local --model ollama/qwen2.5vl:7b --prompt "Describe this image." --file ./photo.jpg --json
```
Notes:

- Local `model run` is the narrowest CLI smoke for provider/model/auth health because it sends only the supplied prompt to the selected model.
- Local `model run --file` keeps that lean path and attaches image content directly to the single user message. Common image files such as PNG, JPEG, and WebP work when their MIME type is detected as `image/*`; unsupported or unrecognized files fail before the provider is called.
- `model run --file` is best when you want to test the selected multimodal text model directly. Use `infer image describe` when you want OpenClaw's image-understanding provider selection and default image-model routing.
- The selected model must support image input; text-only models may reject the request at the provider layer.
- `model run --prompt` must contain non-whitespace text; empty prompts are rejected before local providers or the Gateway are called.
- Local `model run` exits non-zero when the provider returns no text output, so unreachable local providers and empty completions do not look like successful probes (the loop sketch after these notes relies on this).
- Use `model run --gateway` when you need to test Gateway routing, agent-runtime setup, or Gateway-managed provider state while keeping the model input raw. Use `openclaw agent` or chat surfaces when you want the full agent context, tools, memory, and session transcript.
- `model auth login`, `model auth logout`, and `model auth status` manage saved provider auth state.
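Building on the non-zero-exit behavior above, a small shell loop can smoke-test several backends in one pass. This is a sketch: the model refs are the examples from this page, not required entries in your config.

```bash
# Probe a few provider/model pairs; a non-zero exit marks unreachable providers
# and empty completions as failures, per the notes above.
for ref in anthropic/claude-sonnet-4-6 google/gemini-2.5-flash openai/gpt-4.1; do
  if openclaw infer model run --local --model "$ref" \
      --prompt "Reply with exactly: pong" --json >/dev/null 2>&1; then
    echo "ok   $ref"
  else
    echo "FAIL $ref"
  fi
done
```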
## Image
Use `image` for generation, edit, and description.
```bash
openclaw infer image generate --prompt "friendly lobster illustration" --json
openclaw infer image generate --prompt "cinematic product photo of headphones" --json
openclaw infer image generate --model openai/gpt-image-1.5 --output-format png --background transparent --prompt "simple red circle sticker on a transparent background" --json
openclaw infer image generate --prompt "slow image backend" --timeout-ms 180000 --json
openclaw infer image edit --file ./logo.png --model openai/gpt-image-1.5 --output-format png --background transparent --prompt "keep the logo, remove the background" --json
openclaw infer image edit --file ./poster.png --prompt "make this a vertical story ad" --size 2160x3840 --aspect-ratio 9:16 --resolution 4K --json
openclaw infer image describe --file ./photo.jpg --json
openclaw infer image describe --file ./receipt.jpg --prompt "Extract the merchant, date, and total" --json
openclaw infer image describe-many --file ./before.png --file ./after.png --prompt "Compare the screenshots and list visible UI changes" --json
openclaw infer image describe --file ./ui-screenshot.png --model openai/gpt-4.1-mini --json
openclaw infer image describe --file ./photo.jpg --model ollama/qwen2.5vl:7b --prompt "Describe the image in one sentence" --timeout-ms 300000 --json
```
Notes:

- Use `image edit` when starting from existing input files.
- Use `--size`, `--aspect-ratio`, or `--resolution` with `image edit` for providers/models that support geometry hints on reference-image edits.
- Use `--output-format png --background transparent` with `--model openai/gpt-image-1.5` for transparent-background OpenAI PNG output; `--openai-background` remains available as an OpenAI-specific alias. Providers that do not declare background support report the hint as an ignored override.
- Use `image providers --json` to verify which bundled image providers are discoverable, configured, and selected, and which generation/edit capabilities each provider exposes.
- Use `image generate --model <provider/model> --json` as the narrowest live CLI smoke for image generation changes. Example:

  ```bash
  openclaw infer image providers --json
  openclaw infer image generate \
    --model google/gemini-3.1-flash-image-preview \
    --prompt "Minimal flat test image: one blue square on a white background, no text." \
    --output ./openclaw-infer-image-smoke.png \
    --json
  ```

  The JSON response reports `ok`, `provider`, `model`, `attempts`, and written output paths. When `--output` is set, the final extension may follow the provider's returned MIME type.

- For `image describe` and `image describe-many`, use `--prompt` to give the vision model a task-specific instruction such as OCR, comparison, UI inspection, or concise captioning.
- Use `--timeout-ms` with slow local vision models or cold Ollama starts.
- For `image describe`, `--model` must be an image-capable `<provider/model>`.
- For local Ollama vision models, pull the model first and set `OLLAMA_API_KEY` to any placeholder value, for example `ollama-local`. See Ollama; a setup sketch follows these notes.
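Following the Ollama note above, a minimal local-vision setup could look like the sketch below. The model tag matches the example used elsewhere on this page; any image-capable Ollama model should behave the same.

```bash
# Pull an image-capable model locally, then point infer at it.
ollama pull qwen2.5vl:7b
export OLLAMA_API_KEY=ollama-local   # placeholder value, per the note above
openclaw infer image describe --file ./photo.jpg --model ollama/qwen2.5vl:7b \
  --prompt "Describe the image in one sentence" --timeout-ms 300000 --json
```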
## Audio
Use `audio` for file transcription.
```bash
openclaw infer audio transcribe --file ./memo.m4a --json
openclaw infer audio transcribe --file ./team-sync.m4a --language en --prompt "Focus on names and action items" --json
openclaw infer audio transcribe --file ./memo.m4a --model openai/whisper-1 --json
```
Notes:

- `audio transcribe` is for file transcription, not realtime session management.
- `--model` must be `<provider/model>`.
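Because `audio transcribe` is file-oriented, it composes naturally with shell loops. A minimal batch sketch, assuming a `./recordings` directory of `.m4a` files and using only the flags shown above:

```bash
# Transcribe every .m4a in ./recordings, saving one JSON envelope per file.
for f in ./recordings/*.m4a; do
  openclaw infer audio transcribe --file "$f" --json > "${f%.m4a}.json"
done
```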
## TTS
Use `tts` for speech synthesis and TTS provider state.
```bash
openclaw infer tts convert --text "hello from openclaw" --output ./hello.mp3 --json
openclaw infer tts convert --text "Your build is complete" --output ./build-complete.mp3 --json
openclaw infer tts providers --json
openclaw infer tts status --json
```
Notes:

- `tts status` defaults to gateway because it reflects gateway-managed TTS state.
- Use `tts providers`, `tts voices`, and `tts set-provider` to inspect and configure TTS behavior.
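A common automation fit for `tts convert` is end-of-task notification. A sketch reusing the documented flags; the `make build` step is a stand-in for any long-running command:

```bash
# Synthesize a notification clip when a long-running command finishes successfully.
make build && openclaw infer tts convert \
  --text "Your build is complete" --output ./build-complete.mp3 --json
```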
## Video
Use `video` for generation and description.
```bash
openclaw infer video generate --prompt "cinematic sunset over the ocean" --json
openclaw infer video generate --prompt "slow drone shot over a forest lake" --resolution 768P --duration 6 --json
openclaw infer video describe --file ./clip.mp4 --json
openclaw infer video describe --file ./clip.mp4 --model openai/gpt-4.1-mini --json
```
Notes:

- `video generate` accepts `--size`, `--aspect-ratio`, `--resolution`, `--duration`, `--audio`, `--watermark`, and `--timeout-ms` and forwards them to the video-generation runtime.
- `--model` must be `<provider/model>` for `video describe`.
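Because video jobs report written files through the shared JSON envelope (see "JSON output" below), the output path can be captured directly. A sketch assuming `jq` is installed and the clip lands in the first `outputs` entry:

```bash
# Generate a short clip and print the path of the written file.
openclaw infer video generate --prompt "slow drone shot over a forest lake" \
  --resolution 768P --duration 6 --json | jq -r '.outputs[0].path'
```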
## Web
Use `web` for search and fetch workflows.
```bash
openclaw infer web search --query "OpenClaw docs" --json
openclaw infer web search --query "OpenClaw infer web providers" --json
openclaw infer web fetch --url https://docs.openclaw.ai/cli/infer --json
openclaw infer web providers --json
```
Notes:

- Use `web providers` to inspect available, configured, and selected providers.
## Embedding
Use `embedding` for vector creation and embedding provider inspection.
```bash
openclaw infer embedding create --text "friendly lobster" --json
openclaw infer embedding create --text "customer support ticket: delayed shipment" --model openai/text-embedding-3-large --json
openclaw infer embedding providers --json
```
## JSON output
Infer commands normalize JSON output under a shared envelope:
```json
{
  "ok": true,
  "capability": "image.generate",
  "transport": "local",
  "provider": "openai",
  "model": "gpt-image-2",
  "attempts": [],
  "outputs": []
}
```
Top-level fields are stable: `ok`, `capability`, `transport`, `provider`, `model`, `attempts`, `outputs`, and `error`.
For generated media commands, `outputs` contains files written by OpenClaw. Use
the `path`, `mimeType`, `size`, and any media-specific dimensions in that array
for automation instead of parsing human-readable stdout.
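For example, a script can branch on the envelope instead of scraping stdout. A minimal sketch, assuming `jq` is installed and the envelope shape shown above:

```bash
# Generate an image, then read the result from the JSON envelope.
result="$(openclaw infer image generate --prompt "friendly lobster illustration" --json)"

if [ "$(printf '%s' "$result" | jq -r '.ok')" = "true" ]; then
  # outputs[] lists files written by OpenClaw; print the first written path.
  printf '%s' "$result" | jq -r '.outputs[0].path'
else
  # error is part of the stable top-level envelope.
  printf '%s' "$result" | jq -r '.error' >&2
  exit 1
fi
```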
## Common pitfalls
```bash
# Bad
openclaw infer media image generate --prompt "friendly lobster"
# Good
openclaw infer image generate --prompt "friendly lobster"

# Bad
openclaw infer audio transcribe --file ./memo.m4a --model whisper-1 --json
# Good
openclaw infer audio transcribe --file ./memo.m4a --model openai/whisper-1 --json
```
## Notes

- `openclaw capability ...` is an alias for `openclaw infer ...`.