API Reference

HTTP endpoints, WebSocket connections, authentication, and the audio pipeline.

Base URL

Default: http://127.0.0.1:9999. The server binds to 0.0.0.0:9999 by default, so it is reachable from other machines on the network as well as locally.

Authentication

Set the FACE_API_KEY environment variable to require authentication. Clients authenticate via either:

Authorization: Bearer <key>
// or query param
/api/state?token=<key>
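
Both schemes can be wrapped in a small client helper. A minimal sketch in TypeScript; the helper names are illustrative, not part of the API:

```typescript
// Build an Authorization header when a key is configured, or nothing when
// auth is disabled (FACE_API_KEY unset).
function authHeaders(apiKey?: string): Record<string, string> {
  return apiKey ? { Authorization: `Bearer ${apiKey}` } : {};
}

// Alternatively, append the key as the documented ?token= query parameter.
function withToken(url: string, apiKey?: string): string {
  if (!apiKey) return url;
  const u = new URL(url);
  u.searchParams.set("token", apiKey);
  return u.toString();
}
```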

HTTP Endpoints

GET /

Face viewer — full-screen animated face, auto-connects to WebSocket.

GET /dashboard

4-panel dashboard with chat, activity feed, timeline, and controls.

GET /health

Health check. Returns { "status": "ok" }.

POST /api/state

Push a state update. Body is a JSON state message (see Protocol Reference).

curl -X POST http://127.0.0.1:9999/api/state \
  -H "Content-Type: application/json" \
  -d '{"state":"speaking","emotion":"happy","text":"Hello!"}'

GET /api/state

Read current state. Returns the latest merged state object.

POST /api/speak

Atomically sets the state to speaking and returns a new audio sequence number.

curl -X POST http://127.0.0.1:9999/api/speak
# Response: { "seq": 1 }

POST /api/audio

Push audio chunks for lip sync. Accepts raw binary WAV (audio/wav) or JSON base64 (application/json). Chunks are broadcast to all viewers over the WebSocket.
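
The two accepted bodies can be built like this. A sketch assuming the JSON variant carries the base64 payload in an `audio` field (the actual field name may differ, check the server's schema):

```typescript
// Raw binary variant: send the WAV bytes as-is with Content-Type: audio/wav.
function rawWavRequest(wav: Uint8Array): { headers: Record<string, string>; body: Uint8Array } {
  return { headers: { "Content-Type": "audio/wav" }, body: wav };
}

// JSON variant: base64-encode the same bytes. The "audio" field name is an
// assumption for illustration.
function base64WavRequest(wav: Uint8Array): { headers: Record<string, string>; body: string } {
  const b64 = Buffer.from(wav).toString("base64");
  return { headers: { "Content-Type": "application/json" }, body: JSON.stringify({ audio: b64 }) };
}
```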

POST /api/audio-done

Signal end of audio sequence. Viewers flush remaining audio buffer.

POST /api/chat

Proxy chat message to OpenClaw gateway. Requires OPENCLAW_GATEWAY_URL env var.

curl -X POST http://127.0.0.1:9999/api/chat \
  -H "Content-Type: application/json" \
  -d '{"message":"What needs to change?"}'

GET /api/history

Returns recent state history (ring buffer, up to 200 entries). Useful for late-joining viewers to catch up.
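
The ring-buffer behavior can be sketched as follows; this illustrates the documented 200-entry cap, not the server's actual implementation:

```typescript
// Bounded history: once the cap is reached, each push drops the oldest entry,
// so late joiners replay at most the last 200 states.
class StateHistory {
  private entries: object[] = [];
  constructor(private capacity = 200) {}

  push(state: object): void {
    this.entries.push(state);
    if (this.entries.length > this.capacity) this.entries.shift(); // drop oldest
  }

  recent(): object[] {
    return [...this.entries]; // copy, oldest first
  }
}
```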

WebSocket

| Path | Direction | Purpose |
|------|-----------|---------|
| /ws/viewer | Server → Client | Receive state updates, audio, text. For face displays. |
| /ws/agent | Client → Server | Push state updates. For AI agents and controllers. |

Messages are JSON state objects. Viewers also receive type: "audio" (base64 WAV), type: "audio-seq" (new sequence), type: "audio-done", and type: "history" (replay on connect).
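
A viewer can branch on the optional `type` field to tell these control messages apart from plain state objects. A sketch; the payload fields beyond `type` are assumptions for illustration:

```typescript
// Message shapes a viewer may receive. Plain state objects carry no "type".
type ViewerMessage =
  | { type: "audio"; data: string }         // base64 WAV chunk (field name assumed)
  | { type: "audio-seq"; seq: number }      // a new audio sequence begins
  | { type: "audio-done" }                  // flush any remaining audio buffer
  | { type: "history"; entries: object[] }  // replay for late joiners
  | Record<string, unknown>;                // plain state object

// Return the message kind so a dispatcher can route it.
function classify(raw: string): string {
  const msg = JSON.parse(raw) as ViewerMessage;
  return typeof (msg as any).type === "string" ? (msg as any).type : "state";
}
```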

Audio Pipeline

Single authority model — no race conditions:

| Authority | Owns | Transport |
|-----------|------|-----------|
| Plugin / Agent | State transitions | /api/state + /api/speak |
| TTS Server | Audio delivery | /api/audio + /api/audio-done |
| Viewer (browser) | Amplitude extraction | Web Audio API AnalyserNode → RMS |

The viewer computes mouth amplitude from actual waveform data, so lip sync follows real audio rather than guessed network values.
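
The amplitude computation amounts to a root-mean-square over a window of time-domain samples. A sketch of the math the viewer would run on each frame of AnalyserNode data:

```typescript
// RMS of a window of PCM samples (as returned by
// AnalyserNode.getFloatTimeDomainData). Result is 0 for silence and
// approaches 1 for a full-scale signal; it drives the mouth opening.
function rmsAmplitude(samples: Float32Array): number {
  let sum = 0;
  for (const s of samples) sum += s * s;
  return Math.sqrt(sum / samples.length);
}
```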

oface.io Product API

All local server endpoints also work per-face on oface.io. Each claimed username gets its own isolated Durable Object with persistent state and WebSocket connections.

Claim a Face

POST https://oface.io/api/claim

Claim a username and get an API key. Returns the face URL and WebSocket endpoint.

curl -X POST https://oface.io/api/claim \
  -H "Content-Type: application/json" \
  -d '{"username":"alice","face":"zen"}'

# Response:
{
  "ok": true,
  "apiKey": "oface_ak_xxxxxxxxxxxx",
  "url": "https://oface.io/alice",
  "wsViewer": "wss://oface.io/alice/ws/viewer",
  "wsAgent": "wss://oface.io/alice/ws/agent"
}

Check Availability

GET https://oface.io/api/check/:username

Check if a username is available before claiming.

curl https://oface.io/api/check/alice
# { "available": true }

curl https://oface.io/api/check/admin
# { "available": false, "reason": "reserved" }

Per-Face State

POST https://oface.io/:username/api/state

Push state to a claimed face. Requires the face's API key.

curl -X POST https://oface.io/alice/api/state \
  -H "Authorization: Bearer oface_ak_xxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{"state":"speaking","emotion":"happy","text":"Hello!"}'

Face Config

GET https://oface.io/:username/api/config

Read persistent face configuration (pack, head, body settings).

PUT https://oface.io/:username/api/config

Update face configuration. Requires the face's API key.

curl -X PUT https://oface.io/alice/api/config \
  -H "Authorization: Bearer oface_ak_xxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{"face":"cyberpunk","head":{"enabled":true}}'

Per-Face WebSocket

| Path | Auth | Purpose |
|------|------|---------|
| wss://oface.io/:username/ws/viewer | None (public) | Watch a face in real time |
| wss://oface.io/:username/ws/agent | API key required | Push state updates from an agent |

Per-Face Audio

The same audio pipeline works per-face on oface.io:

| Endpoint | Description |
|----------|-------------|
| POST /:username/api/speak | Start speaking sequence |
| POST /:username/api/audio | Push audio chunks |
| POST /:username/api/audio-done | End audio sequence |

Per-Face Viewer and Dashboard

| URL | Description |
|-----|-------------|
| https://oface.io/:username | Full-screen face viewer |
| https://oface.io/:username/dashboard | Dashboard with controls (redirects with server param) |
| https://oface.io/unclaimed-name | "Available" landing page for unclaimed usernames |

Rate Limiting

Token bucket per IP. Default: 60 messages/second per agent. Configurable via environment.
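
A token bucket for the documented 60 messages/second default could look like this; the server's real implementation and tuning knobs may differ:

```typescript
// Token bucket: tokens refill continuously at `rate` per second up to `burst`;
// each allowed message spends one token.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(private rate = 60, private burst = 60, now = 0) {
    this.tokens = burst;
    this.last = now;
  }

  // `now` is a timestamp in seconds. Returns true if the message is allowed.
  allow(now: number): boolean {
    this.tokens = Math.min(this.burst, this.tokens + (now - this.last) * this.rate);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```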

MCP Tools

8 tools for Claude-compatible clients:

| Tool | Description |
|------|-------------|
| set_face_state | Set state + emotion + text |
| set_face_look | Set gaze direction |
| face_speak | Start speaking with text |
| face_wink | Wink left or right eye |
| set_face_progress | Set progress bar for working/loading |
| face_emote | Compound emotion with intensity |
| get_face_state | Read current face state |
| face_reset | Reset to idle/neutral |

Run the MCP server against a local face server:

FACE_URL=http://127.0.0.1:9999 bun packages/mcp/src/index.ts

Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| PORT | 9999 | Server port |
| FACE_API_KEY | (none) | API key for auth (optional) |
| OPENCLAW_GATEWAY_URL | (none) | OpenClaw gateway for /api/chat proxy |
| OPENCLAW_GATEWAY_TOKEN | (none) | Gateway auth token |
| OPENCLAW_SESSION_KEY | agent:main | Session key for gateway |
| FACE_URL | (none) | Server URL for MCP bridge |
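
Reading these variables with their documented defaults might look like this; the config object shape is illustrative, not the server's actual code:

```typescript
// Resolve configuration from an environment map, applying the documented
// defaults. Unset optional values stay undefined.
function loadConfig(env: Record<string, string | undefined>) {
  return {
    port: Number(env.PORT ?? 9999),
    apiKey: env.FACE_API_KEY,                         // optional; auth off when unset
    gatewayUrl: env.OPENCLAW_GATEWAY_URL,             // required only for /api/chat
    gatewayToken: env.OPENCLAW_GATEWAY_TOKEN,
    sessionKey: env.OPENCLAW_SESSION_KEY ?? "agent:main",
    faceUrl: env.FACE_URL,                            // used by the MCP bridge
  };
}
```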