
API Reference

The daemon exposes an HTTP API over its Unix socket. When TCP is enabled, the same API is served over TLS, with optional bearer-token authentication. All responses are JSON.

Endpoints

Status & Config

| Method | Path | Description |
| --- | --- | --- |
| GET | /api/status | Daemon status and uptime |
| GET | /api/config | Full configuration |

Live Metrics

| Method | Path | Description |
| --- | --- | --- |
| GET | /api/metrics/cpu | CPU per-core metrics |
| GET | /api/metrics/memory | Memory metrics |
| GET | /api/metrics/disk | Disk space, I/O, SMART health |
| GET | /api/metrics/network | Network per-interface metrics |
| GET | /api/metrics/temperature | Temperature sensor readings |
| GET | /api/metrics/power | Power consumption per zone |
| GET | /api/metrics/process | All processes (live snapshot) |
| GET | /api/metrics/dashboard | Combined dashboard data |

History

All history endpoints accept `?start=&end=` query parameters (Unix seconds). Bucket size auto-scales with the requested range, from 1 minute (1-hour range) up to 6 hours (30-day range).

| Method | Path |
| --- | --- |
| GET | /api/history/cpu |
| GET | /api/history/memory |
| GET | /api/history/disk |
| GET | /api/history/temperature |
| GET | /api/history/power |
| GET | /api/history/process |

Alerts

| Method | Path | Description |
| --- | --- | --- |
| GET | /api/alerts | List alerts (`?ack=false` for unacknowledged) |
| POST | /api/alerts/{id}/ack | Acknowledge an alert |
| GET | /api/alert-rules | List all rules |
| POST | /api/alert-rules | Create a rule |
| DELETE | /api/alert-rules/{id} | Delete a rule |
| PUT | /api/alert-rules/{id}/toggle | Toggle rule enabled/disabled |
| POST | /api/test-notifications | Test all notification channels |

Query & Export

| Method | Path | Description |
| --- | --- | --- |
| POST | /api/query | Execute read-only SQL |
| POST | /api/export | Export query results to file |

Data Management

| Method | Path | Description |
| --- | --- | --- |
| POST | /api/compact | Trigger database compaction |
| POST | /api/snapshot | Create standalone DuckDB snapshot |
| POST | /api/archive | Trigger Parquet archival |
| POST | /api/unarchive | Reload Parquet data into DuckDB |
| GET | /api/archive/status | Archive state and directory stats |
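
For example, triggering maintenance and then checking the result might look like this:

```shell
# Trigger a database compaction
curl -X POST --unix-socket /run/bewitch/bewitch.sock \
  http://localhost/api/compact

# Inspect archive state and directory stats afterwards
curl --unix-socket /run/bewitch/bewitch.sock \
  http://localhost/api/archive/status
```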

Preferences

| Method | Path | Description |
| --- | --- | --- |
| GET | /api/preferences | Get all saved preferences |
| POST | /api/preferences | Set a preference (key/value) |
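
A minimal sketch of saving and reading back a preference; the `{"key": ..., "value": ...}` payload shape is an assumption based on the key/value description above, and `theme`/`dark` are illustrative values:

```shell
# Save a preference (payload shape assumed from the key/value description)
curl -X POST --unix-socket /run/bewitch/bewitch.sock \
  -H 'Content-Type: application/json' \
  -d '{"key": "theme", "value": "dark"}' \
  http://localhost/api/preferences

# Read back all saved preferences
curl --unix-socket /run/bewitch/bewitch.sock \
  http://localhost/api/preferences
```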

Examples

get daemon status

```shell
curl --unix-socket /run/bewitch/bewitch.sock \
  http://localhost/api/status
```

get CPU metrics

```shell
curl --unix-socket /run/bewitch/bewitch.sock \
  http://localhost/api/metrics/cpu
```

get history with time range

```shell
curl --unix-socket /run/bewitch/bewitch.sock \
  "http://localhost/api/history/cpu?start=$(date -d '1 hour ago' +%s)&end=$(date +%s)"
```

create alert rule

```shell
curl --unix-socket /run/bewitch/bewitch.sock \
  -H 'Content-Type: application/json' \
  -d '{
    "name": "high-cpu",
    "type": "threshold",
    "severity": "warning",
    "metric": "cpu.aggregate",
    "operator": ">",
    "value": 90,
    "duration": "5m"
  }' \
  http://localhost/api/alert-rules
```

execute SQL query

```shell
curl --unix-socket /run/bewitch/bewitch.sock \
  -H 'Content-Type: application/json' \
  -d '{"sql": "SELECT COUNT(*) as n FROM cpu_metrics"}' \
  http://localhost/api/query
```

remote access (TCP + TLS + auth)

```shell
curl -k -H "Authorization: Bearer my-secret-token" \
  https://myserver:9119/api/status
```

ETag Caching

Metric and process endpoints include ETag headers (generation counters). Clients can send `If-None-Match` to receive 304 Not Modified when data hasn't changed, avoiding unnecessary serialization and transfer.
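
A sketch of the conditional-request flow: capture the ETag from one response, then replay the request with `If-None-Match` (the exact header extraction is illustrative):

```shell
# First request: dump headers and capture the ETag value
etag=$(curl -s -D - -o /dev/null --unix-socket /run/bewitch/bewitch.sock \
  http://localhost/api/metrics/cpu \
  | awk -F': ' 'tolower($1) == "etag" {print $2}' | tr -d '\r')

# Repeat request: prints 304 if the data hasn't changed since
curl -s -o /dev/null -w '%{http_code}\n' \
  --unix-socket /run/bewitch/bewitch.sock \
  -H "If-None-Match: $etag" \
  http://localhost/api/metrics/cpu
```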

Response Format

All responses are JSON. Arrays are wrapped in objects (e.g., `{"cores": [...]}`, not bare `[...]`). Timestamps are int64 Unix nanoseconds. Errors return `{"error": "message"}`.