Socket & runtime files
APM's CLI and daemon talk over an abstract Unix domain socket. Abstract sockets are kernel-managed: they exist only while the daemon is alive and leave no file on disk.
| Item | Path / Name | Notes |
|---|---|---|
| CLI socket | @apm | Abstract socket (\x00apm internally). Visible with ss -xl \| grep apm. No file — kernel cleans it up on daemon exit. |
| PID file | ~/.apm/apm.pid | Removed on clean exit. If the daemon crashes it stays behind; APM detects and replaces it on next start. When installed as a system service, /root/.apm/apm.pid. |
| Config (user) | ~/.apm/config.conf | Loaded automatically on daemon start if it exists. |
| Config (system) | /etc/apm/apm.conf | Also loaded automatically. Created by apm install. Drop worker configs into /etc/apm/apm.conf.d/. |
| Log (user) | ~/.apm/apm.log | Written when APM is running as a regular user. |
| Log (service) | /var/log/apm.log | Written when running as the systemd service installed by apm install. |
| Runtime dir | ~/.apm/ | Created automatically on first run. |
Socket access control
By default only the user that started the daemon (and root) can issue CLI commands. Add an OS group with allow_group in the daemon {} block to grant access to other users:
daemon {
allow_group apm-admin; # members of this group may run apm CLI commands
}
# Add a user to the group:
$ sudo usermod -aG apm-admin alice
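To verify the membership took effect (the user must start a new login session for new groups to apply), you can query the group database directly. A small sketch using Python's standard grp/pwd modules; apm-admin is the group name assumed above:

```python
import grp
import pwd

def in_group(user: str, group: str) -> bool:
    """True if `user` belongs to `group`, either as a supplementary
    member or via the user's primary group."""
    try:
        g = grp.getgrnam(group)
    except KeyError:
        return False                  # group does not exist
    if user in g.gr_mem:              # supplementary membership
        return True
    try:
        return pwd.getpwnam(user).pw_gid == g.gr_gid   # primary group
    except KeyError:
        return False                  # user does not exist

print(in_group('alice', 'apm-admin'))
```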
Checking the socket
# List abstract Unix sockets — look for @apm
$ ss -xl | grep apm
# Check if the daemon is alive
$ apm status
# PID of the running daemon
$ cat ~/.apm/apm.pid
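The socket can also be probed programmatically. On Linux, the abstract namespace is addressed by prefixing the socket name with a NUL byte; a sketch assuming the daemon's socket name is apm as above:

```python
import socket

def daemon_alive(name: str = 'apm') -> bool:
    """Try to connect to the abstract Unix socket @<name>."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect('\0' + name)    # leading NUL byte = abstract namespace
        return True
    except OSError:               # ECONNREFUSED when nothing is listening
        return False
    finally:
        s.close()

print(daemon_alive())
```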
Nginx integration
APM runs its own proxy layer, so Nginx sits in front as an SSL terminator and vhost router, forwarding to APM's bound ports. Always set trust_proxy true in the worker config so APM's Vanguard firewall sees real client IPs.
HTTP reverse proxy
An APM worker listens on port 3000; Nginx terminates SSL and forwards plain HTTP.
server {
listen 443 ssl;
server_name myapp.example.com;
ssl_certificate /etc/letsencrypt/live/myapp.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp.example.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# apm.conf — worker side
worker {
name myapp;
exec node;
params server.js;
server http://127.0.0.1:3000;
trust_proxy true;
}
WebSocket proxy
WebSocket upgrades require the Upgrade and Connection headers to be forwarded, plus a long read timeout so idle connections aren't killed.
server {
listen 443 ssl;
server_name myapp.example.com;
ssl_certificate /etc/letsencrypt/live/myapp.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp.example.com/privkey.pem;
location /ws {
proxy_pass http://127.0.0.1:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 3600s; # keep idle WS connections alive
proxy_send_timeout 3600s;
}
}
# apm.conf — worker side
worker {
name myapp;
exec node;
params server.js;
server ws://127.0.0.1:3001;
trust_proxy true;
}
HTTP + WebSocket on one domain
Route by path prefix — REST API on /, WebSocket on /ws. APM listens on two separate ports; Nginx splits traffic.
server {
listen 443 ssl;
server_name myapp.example.com;
ssl_certificate /etc/letsencrypt/live/myapp.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp.example.com/privkey.pem;
# WebSocket endpoint — must come before location /
location /ws {
proxy_pass http://127.0.0.1:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 3600s;
proxy_send_timeout 3600s;
}
# HTTP API / everything else
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# apm.conf — worker side
worker {
name myapp;
exec node;
params server.js;
server http://127.0.0.1:3000, ws://127.0.0.1:3001;
trust_proxy true;
}
Unix socket upstream
Instead of a TCP loopback port you can have APM bind its HTTP server to a Unix domain socket file — no port needed, no TCP handshake, marginally lower latency on the same host. Nginx connects with proxy_pass http://unix:… and an upstream block.
A relative socket path such as ./myapp.sock is resolved against the worker's path directory. For example, if path is /opt/myapp, the socket will be at /opt/myapp/myapp.sock. Use that absolute path in your Nginx config. If you prefer a dedicated directory (e.g. /run/apm/), create it first: sudo mkdir -p /run/apm.
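The resolution rule described above can be illustrated directly. A sketch (the helper name and the prefix-stripping are illustrative, not part of APM):

```python
import os

def resolve_socket(worker_path: str, server: str) -> str:
    """Resolve a relative unix-socket server value against the
    worker's `path` directory, as described above."""
    sock = server.removeprefix('http://unix:')   # keep only the file path
    if not os.path.isabs(sock):
        sock = os.path.normpath(os.path.join(worker_path, sock))
    return sock

print(resolve_socket('/opt/myapp', 'http://unix:./myapp.sock'))
# → /opt/myapp/myapp.sock
```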
# Define the upstream once, reuse across location blocks
# Use the absolute path that matches the worker's path + socket filename
upstream apm_myapp {
server unix:/opt/myapp/myapp.sock;
keepalive 32;
}
server {
listen 443 ssl;
server_name myapp.example.com;
ssl_certificate /etc/letsencrypt/live/myapp.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp.example.com/privkey.pem;
location / {
proxy_pass http://apm_myapp;
proxy_http_version 1.1;
proxy_set_header Connection ""; # enable upstream keep-alive
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# apm.conf — relative path resolves to worker's path directory
worker {
name myapp;
exec node;
params server.js;
path /opt/myapp;
server http://unix:./myapp.sock; # → /opt/myapp/myapp.sock
trust_proxy true;
}
Nginx's worker user (typically www-data) needs read and write access to the socket file. Either run both under the same user or add Nginx's user to the APM worker's group.
HTTP redirect to HTTPS
server {
listen 80;
server_name myapp.example.com;
return 301 https://$host$request_uri;
}
Apache integration
Apache uses mod_proxy, mod_proxy_http, and mod_proxy_wstunnel for reverse proxying. Enable them once, then use VirtualHost blocks per app.
$ sudo a2enmod proxy proxy_http proxy_wstunnel rewrite headers ssl
$ sudo systemctl reload apache2
As with Nginx, set trust_proxy true; so Vanguard sees real client IPs from X-Forwarded-For, not the loopback address.
HTTP reverse proxy
<VirtualHost *:443>
ServerName myapp.example.com
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/myapp.example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/myapp.example.com/privkey.pem
ProxyPreserveHost On
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Real-IP "expr=%{REMOTE_ADDR}"
</VirtualHost>
# apm.conf — worker side
worker {
name myapp;
exec node;
params server.js;
server http://127.0.0.1:3000;
trust_proxy true;
}
WebSocket proxy
mod_proxy_wstunnel handles the Upgrade handshake. The RewriteRule pattern matches the WebSocket path and rewrites the scheme to ws://.
<VirtualHost *:443>
ServerName myapp.example.com
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/myapp.example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/myapp.example.com/privkey.pem
RewriteEngine On
# Upgrade WebSocket connections on /ws
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/ws(/.*)?$ ws://127.0.0.1:3001/ws$1 [P,L]
ProxyPreserveHost On
ProxyPass /ws ws://127.0.0.1:3001/ws
ProxyPassReverse /ws ws://127.0.0.1:3001/ws
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Real-IP "expr=%{REMOTE_ADDR}"
</VirtualHost>
# apm.conf
worker {
name myapp;
exec node;
params server.js;
server ws://127.0.0.1:3001;
trust_proxy true;
}
HTTP + WebSocket on one domain
WS path matched first via RewriteRule, everything else falls through to the HTTP proxy.
<VirtualHost *:443>
ServerName myapp.example.com
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/myapp.example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/myapp.example.com/privkey.pem
RewriteEngine On
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/ws(/.*)?$ ws://127.0.0.1:3001/ws$1 [P,L]
ProxyPreserveHost On
ProxyPass /ws ws://127.0.0.1:3001/ws
ProxyPassReverse /ws ws://127.0.0.1:3001/ws
ProxyPass / http://127.0.0.1:3000/
ProxyPassReverse / http://127.0.0.1:3000/
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Real-IP "expr=%{REMOTE_ADDR}"
</VirtualHost>
# apm.conf
worker {
name myapp;
exec node;
params server.js;
server http://127.0.0.1:3000, ws://127.0.0.1:3001;
trust_proxy true;
}
Unix socket upstream
Apache uses a pipe syntax to connect via a socket file: unix:/path/to/sock|http://localhost/. The http://localhost/ part sets the Host header sent upstream — it is not a TCP connection. Use the absolute path that matches your worker's path + socket filename.
<VirtualHost *:443>
ServerName myapp.example.com
SSLEngine on
SSLCertificateFile /etc/letsencrypt/live/myapp.example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/myapp.example.com/privkey.pem
ProxyPreserveHost On
ProxyPass / "unix:/opt/myapp/myapp.sock|http://localhost/"
ProxyPassReverse / "unix:/opt/myapp/myapp.sock|http://localhost/"
RequestHeader set X-Forwarded-Proto "https"
RequestHeader set X-Real-IP "expr=%{REMOTE_ADDR}"
</VirtualHost>
# apm.conf — relative path resolves to worker's path directory
worker {
name myapp;
exec node;
params server.js;
path /opt/myapp;
server http://unix:./myapp.sock; # → /opt/myapp/myapp.sock
trust_proxy true;
}
HTTP redirect to HTTPS
<VirtualHost *:80>
ServerName myapp.example.com
RewriteEngine On
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</VirtualHost>
Enable the site
$ sudo a2ensite myapp
$ sudo apache2ctl configtest
$ sudo systemctl reload apache2
PHP / Python / Perl / Lua connectors
All four connectors implement the same IPC protocol as the Node.js module. Each is a single file — drop it into your project directory.
# Node.js
$ curl -fsSL https://processmanager.dev/connectors/apm_module.node.js -o apm_module.node.js
# PHP
$ curl -fsSL https://processmanager.dev/connectors/apm_module.php -o apm_module.php
# Python
$ curl -fsSL https://processmanager.dev/connectors/apm_module.py -o apm_module.py
# Perl
$ curl -fsSL https://processmanager.dev/connectors/ApmModule.pm -o ApmModule.pm
# Lua
$ curl -fsSL https://processmanager.dev/connectors/apm_module.lua -o apm_module.lua
Each connector can update itself in-place after the first install — no curl required:
# Node.js
$ node apm_module.node.js -update
# PHP
$ php apm_module.php -update
# Python
$ python3 apm_module.py -update
# Perl
$ perl ApmModule.pm -update
# Lua
$ lua apm_module.lua -update
PHP 7.4+
No external dependencies — json_encode / json_decode are always available in PHP.
require_once __DIR__ . '/apm_module.php';
$apm = new ApmModule(function(ApmSession $s) {
// $s->protocol — 'http' | 'ws' | 'tcp'
// $s->method — HTTP method string
// $s->path — request path
// $s->headers — associative array
// $s->remoteIp — real client IP
// $s->cookies — associative array
// $s->instanceId — APM_INDEX env var
$s->write('Hello World', ['x-status' => '200', 'content-type' => 'text/plain']);
$s->close();
});
$apm->run();
For WebSocket, assign $s->onData before returning from the connect callback:
$apm = new ApmModule(function(ApmSession $s) {
$s->onData = function(string $data, bool $isBin) use ($s) {
$s->write($data); // echo back
};
$s->onClose = function() { /* cleanup */ };
});
$apm->run();
Python 3.6+
No external dependencies — json and struct are part of the standard library.
from apm_module import ApmModule, ApmSession
def on_connect(s: ApmSession):
# s.protocol — 'http' | 'ws' | 'tcp'
# s.method — HTTP method string
# s.path — request path
# s.headers — dict
# s.remote_ip — real client IP
# s.cookies — dict
# s.instance_id — APM_INDEX env var
s.write(b'Hello World', {'x-status': '200', 'content-type': 'text/plain'})
s.close()
ApmModule(on_connect).run()
WebSocket echo example:
def on_connect(s: ApmSession):
def on_data(data: bytes, is_binary: bool):
s.write(data)
s.on_data = on_data
s.on_close = lambda: None
ApmModule(on_connect).run()
Perl 5.10+
Requires the JSON module: cpan JSON or apt install libjson-perl.
use lib '.';
use ApmModule;
my $apm = ApmModule->new(sub {
my ($s) = @_;
# $s->{protocol} — 'http' | 'ws' | 'tcp'
# $s->{method} — HTTP method
# $s->{path} — request path
# $s->{headers} — hashref
# $s->{remote_ip} — real client IP
# $s->{cookies} — hashref
$s->write('Hello World', { 'x-status' => '200', 'content-type' => 'text/plain' });
$s->close();
});
$apm->run();
WebSocket echo example:
my $apm = ApmModule->new(sub {
my ($s) = @_;
$s->{on_data} = sub {
my ($data, $is_bin) = @_;
$s->write($data);
};
$s->{on_close} = sub { };
});
$apm->run();
Lua 5.3+
Requires lua-cjson: luarocks install lua-cjson or apt install lua-cjson.
local ApmModule = require('apm_module')
ApmModule.new(function(s)
-- s.protocol — 'http' | 'ws' | 'tcp'
-- s.method — HTTP method string
-- s.path — request path
-- s.headers — table
-- s.remote_ip — real client IP
-- s.cookies — table
-- s.instance_id — APM_INDEX env var
s:write('Hello World', { ['x-status'] = '200', ['content-type'] = 'text/plain' })
s:close()
end):run()
WebSocket echo example:
ApmModule.new(function(s)
s.on_data = function(data, is_binary)
s:write(data)
end
s.on_close = function() end
end):run()
Common session API
All connectors expose the same session object fields and methods:
| Field / method | Type | Description |
|---|---|---|
| protocol | string | http, ws, or tcp |
| method | string or null | HTTP method (GET, POST, …) |
| path | string | Request path |
| path_array | array | Path split on / |
| query | string | Raw query string |
| query_object | object/dict | Parsed query params |
| headers | object/dict | Request headers (lower-case keys) |
| cookies | object/dict | Parsed cookies |
| remote_ip | string | Real client IP (first value from remoteAddress) |
| session_id | string | Unique session identifier |
| session_data | object/dict | Persistent data, survives rolling restart |
| instance_id | string | Worker index (APM_INDEX env) |
| on_data | callable | Called with (data, is_binary) when data arrives |
| on_close | callable | Called when the connection closes |
| write(data, headers?) | method | Send response body or WebSocket frame; pass headers on first call |
| write_raw(data) | method | Send raw bytes bypassing framing |
| close(code?, reason?) | method | Close connection (HTTP status or WS close frame) |
| save_session_data() | method | Persist session_data in the APM daemon |
| active() | bool | Returns false after the connection has been closed |
IPC protocol
APM communicates with each managed worker process over stdin / stdout using a lightweight binary framing protocol.
No Unix sockets, no network stack — just pipes. Every frame starts with the byte 0x05 followed by a big-endian uint32 length, a UTF-8 JSON header, the separator byte 0x03, and an optional binary payload.
┌──────┬────────────────────┬──────────────────────┬──────┬─────────────────┐
│ 0x05 │ uint32 big-endian │ JSON header │ 0x03 │ binary payload │
│ 1 B │ 4 B │ json_len bytes │ 1 B │ binary_len B │
└──────┴────────────────────┴──────────────────────┴──────┴─────────────────┘
uint32 = json_len + binary_len # 0x03 separator NOT counted
frame_total = 1 + 4 + json_len + 1 + binary_len # = uint32 + 6
Reading algorithm:
# wait for at least 5 bytes
if buf[0] != 0x05: resync()
payload_len = uint32_be(buf[1:5])
frame_len = payload_len + 6 # wait for this many bytes total
sep = frame.index(0x03, offset=5)
header = json.parse(frame[5 : sep])
binary = frame[sep+1 :]
Writing algorithm:
json_bytes = json.encode(header)
length = len(json_bytes) + len(binary) # 0x03 separator NOT counted
frame = b'\x05' + uint32_be(length) + json_bytes + b'\x03' + binary
stdout.write(frame)
stdout.flush()
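The reading and writing steps above combine into a short round-trip sketch (using the length convention above, where the 0x03 separator is not counted in the uint32):

```python
import json
import struct

SOH, SEP = 0x05, 0x03   # frame start byte and header/payload separator

def encode_frame(header: dict, binary: bytes = b'') -> bytes:
    j = json.dumps(header).encode('utf-8')
    # uint32 covers JSON header + binary payload; the 0x03 separator
    # is NOT included, so total frame size = uint32 + 6
    return bytes([SOH]) + struct.pack('>I', len(j) + len(binary)) \
        + j + bytes([SEP]) + binary

def decode_frame(frame: bytes):
    assert frame[0] == SOH
    payload_len = struct.unpack('>I', frame[1:5])[0]
    assert len(frame) == payload_len + 6
    sep = frame.index(SEP, 5)         # JSON never contains a raw 0x03
    header = json.loads(frame[5:sep].decode('utf-8'))
    return header, frame[sep + 1:]

frame = encode_frame({'_session': 'abc', '_command': 'write'}, b'Hello')
header, body = decode_frame(frame)
print(header, body)
```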
APM → worker: JSON header fields
The first frame for a session carries full connection metadata. Subsequent frames carry only the fields that have changed.
| Field | Type | Description |
|---|---|---|
| _sessionId | string | Unique identifier for this connection. Present on every frame. |
| _type | string | data — body / WebSocket frame data arrived. chunk — streaming chunk. event — lifecycle event. |
| _event | string | When _type=event: connectionClosed — peer disconnected. |
| _sessionType | string | new for a fresh connection. |
| _sessionData | object | Persisted data from a previous session (populated after rolling restart). |
| protocol | string | http, ws, or tcp. |
| method | string | HTTP method (GET, POST, …). Null for non-HTTP. |
| path | string | Request path (without query string). |
| path_array | array | Path split on /, decoded. |
| query | string | Raw query string. |
| query_object | object | Parsed query params. Multi-value keys become arrays. |
| headers | object | Request headers with lower-case keys. |
| cookies | object | Parsed cookie map. |
| remoteAddress | string | Comma-separated client IP chain (first value is the real client IP). |
| dataType | string | text or binary (on data/chunk frames). |
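Extracting the real client IP from the remoteAddress chain described above is a one-liner (the helper name is illustrative):

```python
def client_ip(remote_address: str) -> str:
    """First entry in the comma-separated chain is the real client."""
    return remote_address.split(',')[0].strip()

print(client_ip('203.0.113.7, 10.0.0.1, 127.0.0.1'))
# → 203.0.113.7
```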
Worker → APM: _command values
Every outgoing frame must include _session (the session ID) and _command.
| Command | Extra fields | Description |
|---|---|---|
| write | dataType (text or binary); any HTTP response header, e.g. content-type; x-status — HTTP status code as string | Send HTTP response body or WebSocket frame. Headers are only processed on the first write per session. |
| writeRaw | — | Send raw bytes directly to the socket, bypassing HTTP/WS framing. For TCP or custom protocols. |
| closeConnection | code — HTTP status or WS close code (integer); _reason — optional reason string | Close the connection. Sends an HTTP response with the given status, or a WebSocket close frame. |
| saveSessionData | _sessionData — object to persist | Store arbitrary data in the daemon. On rolling restart the replacement worker receives it in _sessionData. |
| metric | name — metric name string; value — number; type — counter, gauge, or timing | Emit a custom metric to the APM metrics pipeline. Not tied to a session — _session can be empty. |
Minimal custom connector skeleton
# pseudo-code — replace pack/unpack with your language's equivalent
def send_frame(header, binary=b''):
j = json.encode({**header, '_session': session_id})
length = len(j) + len(binary) # 0x03 separator NOT counted
stdout.write(b'\x05' + pack('>I', length) + j + b'\x03' + binary)
stdout.flush()
def read_frame():
while len(buf) < 5: buf += stdin.read(4096)
payload_len = unpack('>I', buf[1:5])[0]
frame_len = payload_len + 6
while len(buf) < frame_len: buf += stdin.read(4096)
frame, buf = buf[:frame_len], buf[frame_len:]
sep = frame.index(0x03, 5)
header = json.decode(frame[5:sep])
binary = frame[sep+1:]
return header, binary
Troubleshooting
SyntaxError: Unexpected token '??=' (Node.js version mismatch)
Symptom
SyntaxError: Unexpected token '??='
at wrapSafe (internal/modules/cjs/loader.js:...)
at /opt/myapp/node_modules/@redis/client/dist/lib/client/index.js:727
The worker crashes immediately with a syntax error inside a node_modules package
(@redis/client, @prisma/client, and similar modern packages commonly trigger this).
The error is not a bug in APM or the package — it means the worker is running under
Node.js 14 or older, which does not support the nullish coalescing assignment
operator (??=) introduced in Node.js 15.
Why this happens under APM
When APM is started as a systemd service, it inherits the minimal systemd
PATH (/usr/bin:/bin), which may resolve node to a
system-installed Node.js 14 even when a newer version is available via nvm,
n, or a manual install in /usr/local/bin. Running the same worker
manually in your shell works because your shell's PATH picks up the newer version.
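The PATH difference can be made visible by resolving node the way the OS would under each environment. A sketch comparing systemd's minimal PATH with your shell's (output depends on what is installed):

```python
import os
import shutil
import subprocess

def node_under(path_env: str):
    """Resolve `node` the way execvp would under the given PATH."""
    return shutil.which('node', path=path_env)

# systemd's minimal PATH vs. the current shell's PATH
for env in ('/usr/bin:/bin', os.environ.get('PATH', '')):
    node = node_under(env)
    if node:
        ver = subprocess.run([node, '--version'],
                             capture_output=True, text=True).stdout.strip()
        print(f'{env!r} -> {node} ({ver})')
    else:
        print(f'{env!r} -> node not found')
```

If the two lines disagree, the worker launched by the systemd service is running a different Node.js than your interactive shell.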
Fix 1 — set user or let APM infer it from the work directory
When APM runs as root and a worker has a user field (or a path whose
owner is a non-root user), APM automatically wraps the worker in that user's login shell.
This sources ~/.profile / ~/.bash_profile, loading nvm,
PATH, and any other env vars the user has configured — exactly as if you
su -'d to that user and ran the command yourself.
worker {
name myapp;
exec node; # resolved from the user's PATH after profile loads
params /opt/myapp/server.js;
path /opt/myapp; # if owner is non-root, user is inferred automatically
user deploy; # optional: explicit user override
restart true;
}Fix 2 — specify the full Node.js path in the worker config
If you are not running APM as root, user-switching is unavailable. Specify the full path instead:
which node # in a shell where the right version is active
node --version # confirm it is 15+
worker {
exec /usr/local/bin/node; # full path — not just "node"
params /opt/myapp/server.js;
...
}
Fix 3 — upgrade the system Node.js
If the old Node.js is the default system package, upgrade it via the
NodeSource repository
or use n / nvm to install a current LTS release system-wide.