Overview

APM is Tom's Advanced Process Manager for Linux. One binary, copy and run — no config files required to get started. Complexity is APM's problem, not yours.

APM runs as a background daemon and manages worker processes. You interact with it through the apm CLI. The daemon auto-starts the first time you run any apm command.

Architecture

The daemon communicates with the CLI via an abstract Unix socket. Workers are child processes managed by the daemon. Each worker can have multiple parallel instances. The built-in reverse proxy routes incoming connections across instances using round-robin.

  CLI  ──(unix socket)──  Daemon  ──  Worker [4 instances]
                                  └──  Worker [1 instance]
                                  └──  GUI server (port 6789)
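
The round-robin routing described above can be sketched in a few lines of JavaScript (illustrative only — not APM's actual implementation):

```javascript
// Illustrative sketch of round-robin routing across worker instances.
// `instances` stands in for whatever objects handle forwarded connections.
function makeRoundRobin(instances) {
  let next = 0
  return function pick() {
    const instance = instances[next]
    next = (next + 1) % instances.length  // wrap around after the last instance
    return instance
  }
}

// Each incoming connection is handed to the next instance in turn:
const pick = makeRoundRobin(['instance-0', 'instance-1', 'instance-2', 'instance-3'])
```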

Philosophy

  • Zero config to start — defaults are correct for 90% of cases
  • CLI-first, config files for persistence and power users
  • Every error message answers "what do I do now"
  • Never crash on a bad optional field — warn and continue
  • Linux only. Windows is not supported.

Installation

Run the install script as root. It downloads the right binary for your architecture, sets up the system group and log file, installs the init service, and starts the daemon.

shell
# One-liner install
$ curl -fsSL https://processmanager.dev/install.sh | sudo bash

# Or download first, review, then run
$ curl -fsSL https://processmanager.dev/install.sh -o install.sh
$ sudo bash install.sh

# Verify
$ apm --version

The installer sets up:

  • /usr/sbin/apm — binary
  • /var/log/apm.log — daemon log (group-readable by apm)
  • /etc/apm/apm.conf — default config (created if absent)
  • apm OS group — add users with usermod -aG apm <user>
  • Startup service (systemd, OpenRC, or SysV — auto-detected)

The daemon auto-loads /etc/apm/apm.conf at startup — no separate boot step required.

Run as a systemd service

The installer registers APM with your init system automatically. The service starts the daemon at boot and supervises it (the commands below assume systemd).

# Check status
$ systemctl status apm

# View logs
$ journalctl -u apm -f

Uninstall

# Remove binary and config
$ sudo apm uninstall

# Remove everything including logs, group, service
$ sudo apm uninstall --purge

Command reference

All commands communicate with the running daemon. If no daemon is running, APM starts one automatically.

apm [command] [options]

Process commands

  • apm run <exec> [args...] [flags] — Create and immediately start a worker without a config file. All --flag options from Worker options apply. Worker name defaults to the executable name; use --name to override.
  • apm start <name> — Start a registered worker that is currently stopped.
  • apm stop <name> — Gracefully stop a worker and all its instances.
  • apm restart <name> — Restart a worker (rolling if rolling is set).
  • apm update <name> [flags] — Update a running worker's config and reload it. Accepts the same flags as run. Add --no-restart to apply the new config without restarting.
  • apm list — List all workers with status, instance count, CPU, memory, and uptime.
  • apm remove <name> — Stop and remove a worker. Alias: apm rm
  • apm stopall — Stop all workers without stopping the daemon.

Config commands

  • apm boot — Load /etc/apm/apm.conf into the running daemon. Called automatically by startup scripts after the daemon starts. Safe to run manually — skips workers already running.
  • apm load <file> — Load a config file and start all workers defined in it. Workers already running are skipped.
  • apm reload <file> [--force] — Smart reload: diffs the config against running workers — starts new ones, restarts changed ones (all config fields sync live: watcher patterns, TLS, rolling settings, proxy flags, etc.), and stops removed ones. --force restarts unchanged workers too.
  • apm saveconf — Write all workers back to their source config files (the file they were loaded from). Workers started via run without a source file prompt for one.
  • apm saveconf <name> <file> — Save a specific worker to a file and set that as its config file going forward.

GUI & Monitor commands

  • apm gui — Start the web GUI and print its URL. No-op if already running.
  • apm gui stop — Stop the GUI server.
  • apm monitor — Live terminal dashboard: system CPU, RAM, load average, uptime, and per-worker/instance status, CPU%, memory, and restart counts. Updates every second. Press Ctrl+C to exit.

Daemon commands

  • apm exit daemon — Stop all workers and shut down the daemon.
  • apm install — Install APM to /usr/sbin/apm with group, log, and service setup. Requires root.
  • apm uninstall [--purge] — Remove APM from the system. --purge also removes logs, config, and service files.
  • apm -v / --version — Print CLI and daemon version.
  • apm -h / --help [--full] — Show command help. --full includes all commands.

Run flags

Flags for apm run and apm update. The same options are available as config file fields (see Worker options).

# Start a worker from the CLI — no config file needed.
# --restart restarts on clean exit; --rolling enables rolling restart mode.
$ apm run node server.js \
    --name        myapp \
    --instances   4 \
    --server      http://0.0.0.0:3000 \
    --watch       "*.js" \
    --restart \
    --rolling

# Update a running worker's instance count without restarting
$ apm update myapp --instances 8 --no-restart

# Save it back to a conf file
$ apm saveconf myapp /etc/apm/apm.conf.d/myapp.conf

Config file

Config files define workers and daemon settings. They're loaded with apm load or apm reload. The system config path is /etc/apm/apm.conf.

Syntax

  • Key-value pairs end with ;
  • Blocks use { }
  • Comments: # or // to end of line
  • Strings: unquoted, or single/double/backtick quoted (quotes are stripped)
  • Multiple values: comma-separated on one line, or repeat the key
  • The : suffix on keys is optional
  • include <glob>; inlines another file at parse position
apm.conf
# Simple worker
worker {
    name       myapp;
    exec       node;
    params     server.js;
    instances  4;
    restart    true;
    watch      *.js;
    server     http://0.0.0.0:3000;
}

Config hierarchy

APM's startup scripts call apm boot after the daemon starts, which loads /etc/apm/apm.conf. The main config is typically structured as:

  1. /etc/apm/apm.conf — main config (daemon block + includes)
  2. /etc/apm/apm.conf.d/*.conf — drop-in worker configs, sorted by filename

You can also load configs manually at any time with apm load <file> or do a live diff with apm reload <file>.

apm.conf — with includes
daemon {
    gui_port  6789;
    log       /var/log/apm/apm.log;
}

# Load drop-in worker configs
include apm.conf.d/*.conf;

worker {
    name  portal;
    exec  node;
    params app.js;
}
Include paths are relative to the including file's directory. Glob patterns are supported. Circular includes are detected and rejected.

Multiple values

# Comma-separated on one line
server  http://0.0.0.0:3000, ws://0.0.0.0:3001;

# Or repeat the key
ban_path  *.php;
ban_path  *wp-*;
ban_path  *.env;

Worker options

All options are available both as CLI flags to apm run and as fields in a worker { } config block.

Identity

  • name (default: exec name) — Worker name. Used in all CLI output and log prefixes.
  • exec (required) — Executable to run (looked up in PATH).
  • params — Arguments passed to the executable. Multiple values supported.
  • path (default: cwd) — Working directory for the child process. Env vars expanded.
  • instances (default: 1) — Number of parallel child processes to run.
  • user — Run child processes as this OS user. Daemon must run as root.

Environment

  • env — Inject environment variables. Format: KEY=value. Multiple values supported.
  • env_index — Inject the instance index as an env var. Specify the variable name.
  • env_file — Path to a KEY=VALUE file. Read by APM before the setuid drop; the child inherits the env.
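
The KEY=VALUE format used by env_file can be parsed along these lines — a sketch under assumed semantics (blank lines and #-comments skipped; APM's exact rules may differ):

```javascript
// Sketch: parse a KEY=VALUE env file into a plain object.
// Assumed semantics: blank lines and lines starting with # are skipped;
// everything after the first '=' is the value.
function parseEnvFile(text) {
  const env = {}
  for (const line of text.split('\n')) {
    const trimmed = line.trim()
    if (!trimmed || trimmed.startsWith('#')) continue  // skip blanks and comments
    const eq = trimmed.indexOf('=')
    if (eq > 0) env[trimmed.slice(0, eq)] = trimmed.slice(eq + 1)
  }
  return env
}
```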

Restart on clean exit

  • restart (default: false) — Restart the process when it exits with code 0.
  • restart_delay (default: 250) — Milliseconds to wait before restarting after a clean exit.
  • max_restarts (default: 0) — Maximum clean-exit restarts. 0 = unlimited.

Restart on error exit

  • restart_err (default: true) — Restart the process when it exits with a non-zero code.
  • err_delay (default: 250) — Milliseconds to wait before restarting after an error exit.
  • max_err_restarts (default: 0) — Maximum error-exit restarts. 0 = unlimited.
  • err_grace — Milliseconds of uptime required before a restart counts against the limit.

Shutdown

  • kill_timeout — Milliseconds to wait for graceful shutdown (SIGTERM) before sending SIGKILL.

Logging

  • log — Path to the stdout log file.
  • err_log — Path to the stderr log file. Defaults to the same file as log.
  • prefix — String prepended to each log line.
  • log_time_format — Timestamp format for log lines.
  • strip_ansi (default: false) — Strip ANSI escape codes from log output.
  • syslog — Forward logs to syslog. Value is the destination (e.g. syslog://localhost:514).
  • syslog_tag — Tag for syslog messages.

Proxy / HTTP

  • server — Bind address for the proxy server. See Server types.
  • lowercase_hdrs (default: false) — Lowercase all HTTP header names before forwarding to the child.
  • trust_proxy (default: false) — Trust X-Forwarded-For / X-Real-IP headers for client IP resolution.
  • keep_alive — HTTP keep-alive timeout in milliseconds.
  • max_conns (default: 0) — Maximum concurrent connections per server. 0 = unlimited.
  • session_persist (default: false) — Persist session state across rolling restarts.
  • session_wait — Milliseconds to wait for a new instance to accept a migrated session.

File watcher

  • watch — Comma-separated glob patterns of file paths to watch. Restarts the worker when any match changes. See File watcher for pattern syntax.
  • watch_ignore — Comma-separated glob patterns of paths to exclude from watching.
  • watch_delay — Debounce delay in milliseconds before triggering a restart.

Rolling restart

  • rolling (default: false) — Enable rolling restart mode (one instance at a time).
  • rolling_delay — Milliseconds between restarting each instance.

Stats

  • stats_interval — Interval in milliseconds between stats collection cycles.

Crash webhook (on_crash)

APM can POST a JSON payload (or send a GET request) to a URL of your choice whenever a child process crashes — i.e. exits with a non-zero code or is killed by a signal. Intentional stops (apm stop) are never reported.

apm.conf
worker {
    name  myapp;
    exec  node;
    params  server.js;

    on_crash {
        url         https://hooks.example.com/apm-crash;
        method      POST;           # POST (default) or GET
        debounce    10000;          # min ms between calls (floor: 5000)
        log_lines   20;             # tail lines to include in payload
        log_source  err;            # "err" (default) or "out"
        secret      mysecret;       # signs payload with HMAC-SHA256
    }
}

  • url — Destination URL. Required — the block is ignored without it.
  • method (default: POST) — HTTP method. POST sends a JSON body; GET sends no body.
  • debounce (default: 5000) — Minimum milliseconds between webhook calls per worker. The enforced floor is 5000, which prevents flooding during a crash loop.
  • log_lines (default: 0) — Number of trailing lines from the log file to include in the payload's log field. 0 = omit.
  • log_source (default: err) — Which log to tail: err (stderr log) or out (stdout log).
  • secret — When set, APM signs the raw POST body with HMAC-SHA256 and sends the result in the X-APM-Signature: sha256=… header.

Request headers

  • X-APM-Worker — worker name
  • X-APM-Event — crash
  • X-APM-Signature — sha256=<hex>; only present when secret is set

POST payload
JSON
{
  "worker":         "myapp",
  "instance":       1,
  "exit_code":      1,
  "exit_signal":    "SIGKILL",   // omitted if process exited normally
  "runtime_ms":     4821,
  "error_restarts": 3,
  "timestamp":      "2025-06-01T12:00:00Z",
  "log":            "Error: cannot connect to DB\n..."  // omitted if log_lines = 0
}

Daemon config

Global APM settings live in a top-level daemon { } block. No daemon block = all defaults. Zero config still works.

apm.conf
daemon {
    allow_group    apm-admin;     # OS group allowed to use apm CLI
    default_user   www-data;      # fallback user for workers without user=
    log            /var/log/apm/apm.log;
    gui_port       6789;           # 0 = disabled
    gui_bind       127.0.0.1;
    gui_password   secret;
    gui_rate_limit 20;             # failed auth attempts/min before ban
    telemetry      true;           # hourly anonymous ping (opt-out with false)
}

  • allow_group — OS group whose members can connect to the APM CLI socket. Root and the daemon owner are always allowed.
  • default_user — Run workers as this user when no user is set and the daemon runs as root.
  • log (default: /var/log/apm.log) — Daemon log file path.
  • gui_port (default: 6789) — Port for the web GUI. Set to 0 to disable.
  • gui_bind (default: 127.0.0.1) — Address to bind the GUI. A warning is logged if changed to 0.0.0.0 without a password.
  • gui_password — Password for GUI access. If empty, the GUI only binds localhost.
  • gui_rate_limit — Maximum failed auth attempts per minute before IP ban.
  • telemetry (default: true) — Send an anonymous hourly ping to ping.processmanager.dev with worker count, APM version, OS, and hardware class. No names, paths, or IPs are sent. Set to false to opt out.

Server types

APM's built-in proxy accepts connections and forwards them to worker instances via IPC. Specify servers with the server field. Multiple servers per worker are supported.

  • http:// — HTTP reverse proxy. APM parses request headers and forwards the full request to a child instance.
  • ws:// — WebSocket proxy. Handles the upgrade handshake; bidirectional frame forwarding to the child.
  • tcp:// — Raw TCP proxy. Bytes are forwarded as-is. Use for databases, game servers, custom protocols.

config
worker {
    name    api;
    exec    node;
    params  api.js;

    # HTTP and WebSocket on separate ports
    server  http://0.0.0.0:3000;
    server  ws://0.0.0.0:3001;

    # Or combined on one line
    server  http://0.0.0.0:3000, ws://0.0.0.0:3001;
}

Client IP resolution

When behind a CDN or reverse proxy (e.g. nginx), enable trust_proxy so APM resolves the real client IP from X-Forwarded-For headers. This affects Vanguard rate limiting and ban decisions.

trust_proxy  true;
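
The resolution order implied above can be sketched as follows (illustrative only — not APM's actual code):

```javascript
// Sketch: resolve the real client IP, honouring trust_proxy.
// With trust_proxy off, the socket peer address is always used.
function resolveClientIp(socketAddr, headers, trustProxy) {
  if (!trustProxy) return socketAddr
  const xff = headers['x-forwarded-for']
  // X-Forwarded-For is a comma-separated hop list; the left-most
  // entry is the original client.
  if (xff) return xff.split(',')[0].trim()
  return headers['x-real-ip'] || socketAddr
}
```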

Vanguard

Vanguard is APM's built-in request firewall. It runs before worker IPC — rejected connections never reach your app. Configure it with a vanguard { } sub-block inside a worker.

config
worker {
    name    api;
    exec    node;
    params  api.js;
    server  http://0.0.0.0:3000;

    vanguard {
        rate_limit    100;          # requests/sec per IP
        rate_burst    200;          # burst capacity
        ban_ttl       300000;       # auto-ban for 5 minutes
        ban_path      *.php, *wp-*, *.env, /.git*;
        ban_response  Forbidden;
    }
}

IP filtering

  • allow_ip — CIDR allowlist. Only matching IPs are allowed. Multiple values supported.
  • ban_ip — CIDR blocklist. Matching IPs are rejected immediately (silent RST for TCP, 403 for HTTP).

Path banning

  • ban_path — Comma-separated pattern list. Matched against the request path (query string stripped). Same four modes as the file watcher: *.ext ends-with, prefix* starts-with, *word* contains, exact exact match.
  • ban_response — HTTP response body for blocked requests. Default: Forbidden.

ban_path  *.php;          # ends-with  — block all .php requests
ban_path  *wp-*;          # contains   — block WordPress probes
ban_path  /.git*;         # starts-with — block .git exposure
ban_path  *.env;          # ends-with  — block .env file reads
ban_path  /admin/login;   # exact match — block a specific path

Rate limiting

  • rate_limit — Token-bucket rate in requests per second per real client IP.
  • rate_burst — Burst capacity. Defaults to rate_limit if not set.
  • ban_ttl — Milliseconds to auto-ban an IP after the rate limit is exceeded. 0 = soft block (no ban, just drop).

Rate-limited requests receive 429 Too Many Requests. Path/IP bans return 403 Forbidden (or silent TCP RST).
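
The token-bucket semantics of rate_limit and rate_burst can be sketched like this (illustrative; APM's internals may differ):

```javascript
// Sketch of a per-IP token bucket: `ratePerSec` tokens refill per second,
// up to `burst` capacity; each request spends one token.
function makeBucket(ratePerSec, burst = ratePerSec, startMs = Date.now()) {
  let tokens = burst
  let lastMs = startMs
  return function allow(nowMs = Date.now()) {
    tokens = Math.min(burst, tokens + ((nowMs - lastMs) / 1000) * ratePerSec)
    lastMs = nowMs
    if (tokens >= 1) { tokens -= 1; return true }  // forward the request
    return false                                   // reject: 429 Too Many Requests
  }
}
```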

CDN IP lists

APM's installer fetches Cloudflare's published egress IP ranges and writes them to /etc/apm/ips/ as ready-to-include partial configs. Include them inside a vanguard { } block to restrict direct access to CDN traffic only.

vanguard {
    # Only allow Cloudflare egress IPs (IPv4 + IPv6)
    include /etc/apm/ips/cloudflare-v4.part;
    include /etc/apm/ips/cloudflare-v6.part;

    rate_limit  500;
    ban_path    *.php, *wp-*, *.env, /.git*;
}

The IP lists are re-fetched automatically on every apm install or upgrade. To refresh them manually:

$ sudo apm install  # re-runs the full installer, including IP fetch
Tip: Combine allow_ip with CDN IP files to drop all non-CDN connections at the TCP level — before any HTTP parsing happens and before your app sees the request.

TLS

APM has first-class TLS support for all server types — HTTP, WebSocket, and TCP. Bring your own certificates.

config
worker {
    name      api;
    exec      node;
    params    api.js;
    server    https://0.0.0.0:443;

    tls       true;
    tls_cert  /etc/ssl/certs/myapp.crt;
    tls_key   /etc/ssl/private/myapp.key;
    # tls_ca for mutual TLS (client cert verification)
    tls_ca    /etc/ssl/certs/ca.crt;
}

  • tls — Enable TLS on all server listeners for this worker.
  • tls_cert — Path to the TLS certificate file (PEM).
  • tls_key — Path to the private key file (PEM).
  • tls_ca — Path to the CA certificate for mutual TLS. If set, client certificates are required and verified against this CA.

Testing without nginx: Use TLS directly on APM to test HTTPS/WSS locally. For production, APM + nginx is the typical setup where nginx handles TLS termination.

File watcher

The file watcher monitors your source directory and triggers a worker restart when matching files change. Uses kernel file-watch events (inotify) — no polling.

config
worker {
    name          api;
    exec          node;
    params        server.js;
    path          /home/user/myapp;

    watch         *.js, *.json;        # watch .js and .json files
    watch_ignore  *node_modules*;      # ignore anything inside node_modules
    watch_delay   200;                  # 200ms debounce
}

  • watch — Comma-separated pattern list. Matched against the full path of each changed file. The worker restarts when any pattern matches.
  • watch_ignore — Comma-separated pattern list. Paths matching any of these are excluded from watch events even if they also match watch.
  • watch_delay — Debounce delay in milliseconds. Multiple rapid changes are batched into one restart.

Pattern syntax

Watch patterns use a simple glob-style syntax — no regex needed. Four matching modes:

  • *.ext — ends-with. Example: *.js matches any file ending in .js.
  • prefix* — starts-with. Example: src/* matches any path starting with src/.
  • *word* — contains. Example: *node_modules* matches any path containing node_modules.
  • exact — exact match. Example: config.json matches only that exact filename.

# Go source files, excluding generated code and vendor
watch         *.go;
watch_ignore  *_generated.go, *vendor*;

# JS/TS project — watch src/, ignore build output and deps
watch         *.js, *.ts, *.json;
watch_ignore  *node_modules*, *dist/*;

# Python — any .py file anywhere under path
watch         *.py;
watch_ignore  *__pycache__*;
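
The four modes reduce to a few plain string checks. A sketch (illustrative, not APM's matcher):

```javascript
// Sketch: the four watch/ban pattern modes as plain string checks.
function matchPattern(pattern, path) {
  const leading = pattern.startsWith('*')
  const trailing = pattern.endsWith('*')
  const core = pattern.replace(/^\*/, '').replace(/\*$/, '')
  if (leading && trailing) return path.includes(core)  // *word*  — contains
  if (leading) return path.endsWith(core)              // *.ext   — ends-with
  if (trailing) return path.startsWith(core)           // prefix* — starts-with
  return path === pattern                              // exact match
}
```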
Tip: Keep watch_delay at 100–300 ms. Build tools often write multiple files in quick succession; the debounce ensures only one restart fires per save.

Rolling restart

Rolling restarts cycle through instances one at a time, keeping the rest running to serve traffic. Zero downtime for multi-instance workers.

worker {
    instances     4;
    rolling       true;
    rolling_delay 1000;  # 1s between each instance restart
}

With session_persist true, open connections are migrated to a new instance before the old one is killed. Use session_wait to control how long APM waits for the new instance to become ready.

rolling          true;
rolling_delay    500;
session_persist  true;
session_wait     2000;  # wait up to 2s for new instance

Logger

APM has a built-in logger for each worker. Every line written to a child process's stdout or stderr is intercepted, prefixed with a timestamp and worker name, and written to the configured destination. Coloring is applied by APM before writing — use strip_ansi to remove it when logging to files.

Destinations

  • log — File path for stdout. If omitted, output goes to the daemon log.
  • err_log (default: same as log) — File path for stderr.
  • syslog — Syslog destination URL, e.g. syslog://localhost:514. ANSI is always stripped for syslog regardless of strip_ansi.
  • syslog_tag — Tag string attached to every syslog message for this worker.

Prefix

Each log line is prefixed with the worker name (or a custom string). The prefix field supports the color syntax described below. APM automatically appends the instance number in multi-instance workers:

  • prefix (default: worker name) — String prepended to every log line. Supports çN- color escapes. The instance index is appended automatically for multi-instance workers.

apm.conf
worker {
    name    api;
    exec    node;
    params  server.js;

    # cyan name, reset after — instance # is appended automatically
    prefix  ç51-api-serverçR-;

    log     /var/log/myapp/out.log;
    err_log /var/log/myapp/err.log;
}

For a worker with instances 3, the stdout prefix becomes api-server#1, api-server#2, api-server#3 — each with a distinct color, so instances are easy to tell apart in the live GUI and in log files.

Timestamp format

The timestamp prepended to each line is controlled by log_time_format. The format string uses strftime-style tokens and supports color escapes. The default is ç214-%Y-%m-%d %Tç59-.%FçR- (orange date, dim fractional seconds).

  • log_time_format (default: ç214-%Y-%m-%d %Tç59-.%FçR-) — Timestamp format. Supports strftime tokens and color escapes.

Strftime tokens

  • %Y — 4-digit year (2026)
  • %y — 2-digit year (26)
  • %m — month, zero-padded (03)
  • %d — day, zero-padded (07)
  • %H — hour, 24h, zero-padded (14)
  • %M — minute, zero-padded (05)
  • %S — second, zero-padded (09)
  • %T — shorthand for %H:%M:%S
  • %F — fractional seconds (microseconds)

# Default — orange date, dim microseconds
log_time_format  ç214-%Y-%m-%d %Tç59-.%FçR-;

# Compact — just HH:MM:SS in gray
log_time_format  ç59-%TçR-;

# No color — plain ISO timestamp
log_time_format  %Y-%m-%d %T;
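
For reference, the token table above comes down to plain string substitution. A JavaScript sketch (color escapes omitted; note that JS Date only has millisecond precision, so %F is approximated):

```javascript
// Sketch: expand the strftime tokens from the table above.
// %F is padded from milliseconds, since JS has no microsecond clock.
function formatTimestamp(fmt, date) {
  const pad = (n, width = 2) => String(n).padStart(width, '0')
  const tokens = {
    '%Y': String(date.getFullYear()),
    '%y': pad(date.getFullYear() % 100),
    '%m': pad(date.getMonth() + 1),
    '%d': pad(date.getDate()),
    '%H': pad(date.getHours()),
    '%M': pad(date.getMinutes()),
    '%S': pad(date.getSeconds()),
    '%F': pad(date.getMilliseconds() * 1000, 6),
  }
  tokens['%T'] = `${tokens['%H']}:${tokens['%M']}:${tokens['%S']}`
  return fmt.replace(/%[YymdHMSTF]/g, t => tokens[t])
}
```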

Strip ANSI

  • strip_ansi (default: false) — Strip ANSI color codes from all log output before writing to the file. Useful when you want clean logs on disk but colored output in the GUI. Always on for syslog destinations.

Tip: Keep strip_ansi false for local development (colors in the GUI look great), and set it to true in production log files so tools like grep, awk, and log shippers see clean text.

Color syntax — çN-

APM uses a compact color escape based on the 256-color terminal palette. The ç character (U+00E7) acts as the escape marker. This syntax works in prefix, log_time_format, and anywhere APM renders text to the terminal or log files.

  • çN- → \033[38;5;Nm — set foreground to 256-color palette index N (0–255).
  • çN,BG- → \033[38;5;N;48;5;BGm — foreground N, background BG.
  • çN,BG,ATTR- → \033[38;5;N;48;5;BG;ATTRm — foreground, background, and an SGR attribute (1 bold, 2 dim, 4 underline, 9 strikethrough).
  • çR- → \033[0m — reset all formatting.

# Foreground only
prefix  ç82-myappçR-;      # bright green name
prefix  ç196-myappçR-;     # bright red name
prefix  ç214-myappçR-;     # orange name

# Foreground + background
prefix  ç15,88-ERRORçR-;    # white text on dark red background

# Bold foreground
prefix  ç51,0,1-myappçR-;   # bold cyan
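
The mapping to raw ANSI can be sketched as a few regex substitutions (illustrative, not APM's renderer):

```javascript
// Sketch: expand çN- escapes into raw ANSI sequences per the table above.
// Longest form first, so çN,BG,ATTR- isn't half-matched by çN-.
function renderColorEscapes(text) {
  return text
    .replace(/çR-/g, '\x1b[0m')
    .replace(/ç(\d+),(\d+),(\d+)-/g, '\x1b[38;5;$1;48;5;$2;$3m')
    .replace(/ç(\d+),(\d+)-/g, '\x1b[38;5;$1;48;5;$2m')
    .replace(/ç(\d+)-/g, '\x1b[38;5;$1m')
}
```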

Useful color reference

  • ç1- — dark red
  • ç2- — dark green
  • ç6- — cyan
  • ç15- — white
  • ç51- — bright cyan
  • ç59- — dark gray
  • ç80- — green
  • ç82- — bright green
  • ç88- — dark red
  • ç124- — medium red
  • ç165- — magenta
  • ç196- — bright red
  • ç202- — orange-red
  • ç208- — orange
  • ç214- — amber / warm orange
  • ç244- — mid gray
  • çR- — reset

256-color palette. Any value from 0 to 255 is valid — use any standard xterm-256 chart to pick colors. The codes listed above are the ones used by APM's own output; they work well in most terminal themes.

StatsD

Coming soon. StatsD metric export is planned for an upcoming release.

Web GUI

APM ships a built-in real-time web dashboard. Enable it with gui_port in the daemon config:

apm.conf
daemon {
    gui_port  6789;        # 0 = disabled
    gui_bind  127.0.0.1;   # use 0.0.0.0 to expose on the network
}

When the daemon starts with the GUI enabled, it prints the access URL (the daemon auto-starts with the first apm command you run):

$ apm list
GUI: http://127.0.0.1:6789/

Views

  • Workers — Live table of all workers and instances: status, uptime, CPU sparkline, CPU%, RAM, restart count, error count. Per-worker stop / start / restart / reload-config buttons.
  • Dashboard — Custom metric panels (LED, counter, text, graph, gauge, heatmap) defined in the worker's dashboard { } config block. Workers without a dashboard block show a placeholder.
  • Live Logs — Per-worker log stream, replayed from a 200-line ring buffer on connect, then live. Includes both stdout and stderr. Clear, download, and pause controls.
  • Server Info — CPU model, thread count, speed, RAM, OS, kernel, architecture, uptime, network interfaces (IP, MAC, speed, RX/TX totals), load averages, and latency probes.

Server Info — latency

The Server Info page has a Latency section with two cards:

  • Server → Internet — TCP connect time to Google (8.8.8.8:443), Cloudflare (1.1.1.1:443), and Quad9 (9.9.9.9:443). Measures server-side outbound connectivity. Runs on load, then every 30 s.
  • Browser → Server — WebSocket round-trip time: the browser sends a ping frame, the server echoes it, and the browser measures the elapsed time. Runs on load, then every 30 s.

Disconnect behaviour

When the daemon stops, the WebSocket closes and the GUI immediately dims with a Session Ended overlay. Click Refresh Page to reconnect.

Dashboard

Each worker can expose a custom metric dashboard in the GUI. Define a dashboard { } block inside a worker config to create one. The dashboard is shown in the Dashboard tab when that worker is selected.

apm.conf
worker {
    name    my-api;
    exec    node;
    params  server.js;
    server  http://127.0.0.1:3000;

    dashboard {
        name  My API;
        cols  6;
        rows  4;

        module {
            type  graph;
            id    1;
            name  Requests/sec;
            x 0; y 0; w 3; h 1;
        }
        module {
            type  gauge;
            id    2;
            name  CPU %;
            x 3; y 0; w 1; h 1;
            min 0; max 100; unit %;
        }
        module {
            type  counter;
            id    3;
            name  Total errors;
            x 4; y 0; w 1; h 1;
        }
        module {
            type  text;
            id    4;
            name  Last error;
            x 5; y 0; w 1; h 1;
        }
    }
}

Dashboard block fields

  • name (default: worker name) — Tab label shown in the GUI.
  • cols (default: 6) — Number of grid columns.
  • rows (default: 3) — Number of grid rows.

Module fields

  • type (required) — Module type: led, counter, text, graph, gauge, heatmap.
  • id (required) — Integer ID. Must be unique within the dashboard. Used to route metrics from code to the right module.
  • x, y (required) — Grid position (0-based column, row).
  • w, h (required) — Width and height in grid cells.
  • name — Label shown inside the module.
  • unit — Unit suffix displayed next to the value (e.g. %, ms, req/s).
  • min, max — Value range. Used by gauge to scale the arc. Default 0–100.
  • color — Accent color (hex or CSS value). Used by led, graph, gauge.
  • base_color — Base / background color for heatmap cells.
  • source — Auto-feed a built-in metric without writing code: cpu (CPU%), ram (RAM MB), conn (active connections), ior / iow (disk I/O read/write). When set, setDashValue calls for this module are ignored.

Module types

  • led — Colored indicator light. Green when value > 0; color configurable.
  • counter — Large numeric display. Shows a cumulative value.
  • text — Single-line text value. Good for status strings or last-event messages.
  • graph — Scrolling bar chart. Newest bar on the right, auto-scaling.
  • gauge — Arc gauge with min/max range and optional unit suffix.
  • heatmap — Grid of colored cells representing a 2-D value distribution.

Sending metrics from code

Use apm.setDashValue(id, value, color?) in the Node.js connector to push a value to a dashboard module. This is distinct from apm.metric(), which is for StatsD-style system metrics only.

server.js
const ApmModule = require('./apm_module.node.js')
const apm = new ApmModule(async (session) => { /* handle connections */ })

// Push a number to module id 1 (graph)
apm.setDashValue(1, requestsPerSecond)

// Push a number with a dynamic color
apm.setDashValue(2, cpuPercent, cpuPercent > 80 ? '#ff5a5a' : '#4f8cff')

// Push a string to a text module (id 4)
apm.setDashValue(4, lastErrorMessage)

// LED on/off (1 = on, 0 = off)
apm.setDashValue(5, isHealthy ? 1 : 0, isHealthy ? '#47d16c' : '#ff5a5a')

  • id — Module ID as defined in the dashboard { } config block.
  • value — Number for gauge, graph, counter, led; string for text.
  • color — Optional CSS color string to override the module's configured color dynamically.

Counter vs gauge vs graph

  • counter — The value is added to the running total on each call (delta). To reset, call setDashValue(id, -currentTotal).
  • gauge — Absolute value; replaces the current reading. The arc fills from min to max.
  • graph — Absolute value; appended as the newest bar on the right each call.
  • led — Any non-zero value turns the LED on; 0 turns it off.
  • text — String value; replaces the displayed text.
  • heatmap — Numeric value 0–100; appended as the next cell.

Node.js connector

The Node.js connector (apm_module.node.js) provides a session API for APM-managed Node.js processes. IPC happens over stdin/stdout using binary frames — no Unix sockets required in the child.

# Download
$ curl -fsSL https://processmanager.dev/connectors/apm_module.node.js -o apm_module.node.js

# Update in-place
$ node apm_module.node.js -update

The module exports a class. Require it, then construct an instance, passing your onConnect callback. The constructor sets up crash handlers and the stdin IPC listener immediately — call it once at startup before doing anything else.
server.js
const ApmModule = require('./apm_module.node.js')
const apm = new ApmModule(async (session) => {
    // session.protocol  — 'http' | 'ws' | 'tcp'
    // session.method    — HTTP method
    // session.path      — full path + query
    // session.headers   — request headers
    // session.remoteIp  — real client IP (proxy-aware)
    // session.cookies   — parsed cookie map

    session.write('Hello World', {
        'content-type': 'text/plain',
        'x-status': '200'
    })
    session.close()
})

If your worker doesn't handle sessions (e.g. a background job pushing dashboard metrics), pass an empty async function: new ApmModule(async () => {}).

Session API

  • session.protocol — 'http', 'ws', or 'tcp'
  • session.method — HTTP method (GET, POST, …)
  • session.path — Full path including query string
  • session.path_array — Decoded path segments as an array
  • session.query — Raw query string parts
  • session.query_object — Parsed { key: value | [values] }
  • session.cookies — Parsed cookie map
  • session.headers — Request headers object
  • session.remoteIp — Client IP. APM resolves it from proxy headers when trust_proxy is set.
  • session.sessionId — Unique per-connection ID
  • session.instanceId — APM_INDEX of this instance (0-based)
  • session.sessionType — 'new' for fresh connections
  • session.sessionData — Free-form object. Persists across session callbacks. Use saveSessionData() to persist across rolling restarts.
  • session.active — true while the connection is open
  • session.onData — Set inside the callback. Called with (data, isBinary) for incoming data (WebSocket frames, TCP bytes).
  • session.onClose — Set inside the callback. Called when the connection closes.
  • session.write(data, headers?) — Send an HTTP response body / WebSocket frame. Pass a headers object on the first HTTP write to set status and headers.
  • session.close(code?, reason?) — Close the connection. HTTP status close or WebSocket close frame.
  • session.writeRaw(data) — Send raw bytes, bypassing HTTP/WebSocket framing. For TCP or low-level use.
  • session.saveSessionData() — Persist sessionData in the daemon. Survives rolling restart — the new instance receives the same data.

Instance methods

  • apm.setDashValue(id, value, color?) — push a value to a dashboard module. id is the integer module ID from the config. value is a number for gauge / graph / counter / led, or a string for text. color is an optional CSS color override.
  • apm.metric(name, value, type?) — send a StatsD-style metric. name is a dot-separated string (e.g. 'req.ok'). type: 'counter' (default, summed per second), 'gauge' (last value), or 'timing' (averaged). Visible in StatsD export.
  • apm.instanceId — APM_INDEX of this process instance (0-based string).

Environment variables

APM injects the following into managed child processes:

  • APM — set to 1. The connector checks for this and exits if it is not present.
  • APM_INDEX — 0-based instance index. Only injected when env_index is configured.
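A worker can use these to fail fast when started outside APM and to vary per-instance behaviour. A small sketch — the helper name is ours, and APM_INDEX defaults to 0 because it is only injected when env_index is configured:

```javascript
// Sketch: read APM's injected environment at startup.
function instanceIndex(env = process.env) {
    // The connector does this check too: refuse to run outside APM.
    if (env.APM !== '1') throw new Error('not started by APM')
    // APM_INDEX is absent unless env_index is configured; default to 0.
    return parseInt(env.APM_INDEX ?? '0', 10)
}

// With env_index configured, instance #2 of a worker sees:
console.log(instanceIndex({ APM: '1', APM_INDEX: '2' }))  // prints 2
```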

WebSocket example

ws-server.js
const ApmModule = require('./apm_module.node.js')
const apm = new ApmModule(async (session) => {
    if (session.protocol !== 'ws') {
        session.close(400)
        return
    }

    session.onData = (data, isBinary) => {
        // echo back
        session.write(data)
    }

    session.onClose = () => {
        console.log('disconnected', session.sessionId)
    }
})

PHP / Python / Perl / Lua connectors

Connectors for other languages follow the same pattern: drop a single file into your project, require / include it, and pass an onConnect callback. All connectors implement the full APM IPC protocol over stdin/stdout — no extra dependencies beyond what's noted on the connectors page.

download any connector
$ curl -fsSL https://processmanager.dev/connectors/apm_module.php   -o apm_module.php
$ curl -fsSL https://processmanager.dev/connectors/apm_module.py    -o apm_module.py
$ curl -fsSL https://processmanager.dev/connectors/ApmModule.pm     -o ApmModule.pm
$ curl -fsSL https://processmanager.dev/connectors/apm_module.lua   -o apm_module.lua

Each connector file also supports self-update — run it with -update to fetch the latest version from the server (e.g. php apm_module.php -update). See the connectors page for version info, MD5 checksums, and per-language update commands.

All connectors expose the same setDashValue(id, value, color?) and metric(name, value, type?) methods as the Node.js connector. Refer to the Dashboard section for how to define dashboard modules in the config and push values to them.

What's new in v1.3.0

Improvement — Config reload syncs all worker parameters

apm reload previously only applied exec, path, params, and instances to a running worker. All other fields (watcher patterns, restart settings, rolling delay, kill timeout, TLS, session, proxy flags, etc.) were silently ignored — the old values stayed active until the worker was fully stopped and restarted.

All fields are now applied live before the rolling restart. The file watcher is additionally closed and reopened if watch, watch_ignore, or watch_delay changed, so the new patterns take effect immediately without a full stop/start.

Bug fix — watch_ignore with multiple patterns

Config values with multiple comma-separated entries (e.g. watch_ignore web_*, node_modules) are stored internally as a string list. A type mismatch caused watch_ignore to be silently discarded when more than one pattern was given, so every file change triggered a restart, including changes in directories that should have been excluded. This is now fixed: all patterns are passed through correctly.
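As the example above implies, multiple ignore patterns are written as a single comma-separated value on one config line (fragment; the surrounding worker-section syntax of apm.conf is not shown here):

```
watch_ignore web_*, node_modules
```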

Bug fix — Orphaned processes on rapid file changes

When multiple file-change events fired in quick succession, the debounce timer could start a second WatcherRestart goroutine while the first was still running. Both goroutines would wait for the child to exit, then both call Start() — spawning two processes for the same slot and orphaning the first. This is now prevented with a per-worker mutex that serialises concurrent watcher-triggered restarts.