COHERIX
Coherence across scales, systems, and intelligence

CMCI Signal API

Real-time structural coherence monitoring for multi-agent AI pipelines and complex systems. Push a four-dimensional observation; get back a full coherence state with regime, safety margin, and five independent defect axes that localise where coherence is breaking.

What the API does

The CMCI Signal API exposes Coherix's coherence engine as a stateful REST service. Your pipeline pushes a small observation vector at whatever cadence makes sense (1–10 Hz is typical for live monitoring). The engine maintains a Kalman-filtered coherence state per session and returns the current regime plus actionable defect values on every observation.

It is content-agnostic: your system owns the mapping from raw signals (message flow, tool calls, latencies, agent outputs, …) to the four-dimensional observation vector y = (C, S, R, A). The engine does the coherence math and tells you whether the system is healthy, drifting, or breaking down.
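A minimal mapper sketch, assuming hypothetical pipeline statistics. Every field name below (on_topic_ratio, error_rate, …) is illustrative — the API never sees your raw signals, only the resulting y:

```python
def map_to_y(stats: dict) -> list[float]:
    """Derive y = (C, S, R, A) from raw pipeline stats.

    All input field names are illustrative; your system owns this mapping.
    """
    def clamp(v: float) -> float:
        return max(0.0, min(1.0, v))

    C = clamp(stats["on_topic_ratio"])           # Cohesion: agents on a shared thread
    S = clamp(stats["error_rate"])               # Salience: anomaly / pressure signal
    R = clamp(1.0 - stats["replan_rate"])        # Rigidity: stability of the current plan
    A = clamp(stats["structured_output_ratio"])  # Articulation: well-formed outputs
    return [C, S, R, A]

y = map_to_y({
    "on_topic_ratio": 0.85,
    "error_rate": 0.15,
    "replan_rate": 0.15,
    "structured_output_ratio": 0.85,
})
# y is now ready to POST to /observe
```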

No LLM calls. Deterministic on identical inputs (bit-identical across runs). Production smoke test: 23/23 passing · p95 latency 143 ms · lock v6-7-validated.

30-second quick start

Three calls to get a coherence state back:

BASH
# 1. Create a session
curl -X POST https://coherix.ca/cmci/signal/session \
  -H "Authorization: Bearer sk_live_your_key"
# → {"session_id": "c1a4…", "attractor_order": ["C","S","R","A"], …}

# 2. Push an observation (Cohesion, Salience, Rigidity, Articulation ∈ [0,1])
curl -X POST https://coherix.ca/cmci/signal/session/c1a4.../observe \
  -H "Authorization: Bearer sk_live_your_key" \
  -H "Content-Type: application/json" \
  -d '{"y": [0.85, 0.15, 0.85, 0.85]}'
# → {"mu_scalar": 0.72, "margin": +0.18, "regime": "COHERENT", "defects": {…}}

# 3. Close the session when done
curl -X DELETE https://coherix.ca/cmci/signal/session/c1a4... \
  -H "Authorization: Bearer sk_live_your_key"

That's the full REST loop. For continuous streaming, the /stream WebSocket keeps a persistent connection: you push observations and pull state over the same socket, with no per-request HTTP overhead.

What comes back — understanding the response

Every /observe and /state call returns the same schema. Four top-line metrics plus five defect axes:

Top-line metrics

Overall Coherence
mu_scalar (μ) ∈ [−0.3, 1.0]
How aligned the system's components are right now. Higher is better. Above 0.6 is healthy.
Safety Margin
margin ∈ [−0.3, +0.3]
Distance to the coherence-breakdown boundary. Positive = inside the safe window. Negative = past the edge.
Structural Stability
Lambda (Λ) ∈ [0, ~5]
How robust the current configuration is to perturbations. Rises when the system is settled, falls during transitions.
Active Dimensions
n_eff ∈ [0, 5]
How many independent axes are actually contributing. Low n_eff during collapse = everything correlates with one failure mode.
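In practice most consumers reduce these four metrics to a go/no-go signal. A sketch: the mu_scalar > 0.6 healthy cut comes from the table above, while the extra 0.05 margin buffer is an illustrative choice, not an API value:

```python
def classify(state: dict) -> str:
    """Coarse traffic light over the documented top-line metrics."""
    if state["margin"] < 0.0:
        return "breach"   # past the coherence-breakdown boundary
    if state["mu_scalar"] > 0.6 and state["margin"] > 0.05:
        return "healthy"
    return "watch"        # inside the window but close to the edge

level = classify({"mu_scalar": 0.72, "margin": 0.18})  # quick-start values
```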

Five defect axes (defects.d1 … d5)

Each d_i ∈ [0, 1] localises a specific mode of multi-agent breakdown. A rising d_i tells you which aspect of coherence is failing, so mitigation can target the right dimension.

Per-axis thresholds (empirically calibrated). The five axes operate at very different numerical scales, so a single uniform cut-off is not appropriate. Each axis has its own warn and critical thresholds derived from the Method B calibration on the reference demo scenario:

Axis                       warn     critical
attention_dispersion       0.29     0.47
goal_alignment             0.53     0.63
decision_stability         0.48     0.68
strategic_coherence        0.047    0.10
cross_role_coordination    0.089    0.11

These values are demo-calibrated on the reference scenario shipped with the public live demo. Your own pipeline may exhibit different empirical ranges; recalibrate on your own data before putting CMCI into an operational decision loop. The dashboard highlights each cell as warn or critical using warn-relative severity (value / axis_warn), so axes at different numerical scales are compared on a common footing.
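The same warn-relative ranking is easy to reproduce client-side. A sketch using the demo-calibrated warn thresholds from the table above (worst_axis is our own helper, not an SDK function):

```python
AXIS_WARN = {  # per-axis warn thresholds, Method B demo calibration
    "attention_dispersion": 0.29,
    "goal_alignment": 0.53,
    "decision_stability": 0.48,
    "strategic_coherence": 0.047,
    "cross_role_coordination": 0.089,
}

def worst_axis(defects: dict) -> tuple[str, float]:
    """Rank axes by warn-relative severity (value / axis_warn), so axes at
    very different numerical scales are compared on a common footing."""
    severity = {axis: defects[axis] / warn for axis, warn in AXIS_WARN.items()}
    axis = max(severity, key=severity.get)
    return axis, severity[axis]

# strategic_coherence at 0.08 is far below goal_alignment's 0.40 in raw value,
# yet much more severe relative to its own warn threshold (0.047 vs 0.53).
axis, sev = worst_axis({
    "attention_dispersion": 0.10,
    "goal_alignment": 0.40,
    "decision_stability": 0.20,
    "strategic_coherence": 0.08,
    "cross_role_coordination": 0.05,
})
```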

Attention Dispersion
attention_dispersion
Agents spreading attention thin across too many parallel threads.
Goal Alignment
goal_alignment
Drift between what agents are doing and the shared objective.
Decision Stability
decision_stability
Flip-flopping or recursive loops — a strong signal for hallucination risk in LLM agents.
Strategic Coherence
strategic_coherence
Local agent actions drifting from the overall plan.
Cross-role Coordination
cross_role_coordination
Role boundaries collapsing — agents stepping on each other.

Regime labels

The regime field classifies the session's current coherence state (the quick start above shows COHERENT); the companion alert_level field surfaces the same information as an operational severity, with values such as WARNING and CRITICAL appearing in the integration examples below.

Optional observation scalars

Beyond the four-dimensional y = (C, S, R, A) vector, /observe accepts three optional scalars that refine how the engine reads each observation. All three are floats on [0, 1]. They default to sensible values if omitted, but populating them deliberately gives the engine access to structural information it cannot infer from y alone — especially around self-awareness, undoability, and event-order disruption.

Self-referentiality
self_referentiality ∈ [0, 1] · default 0.5
How much the system is attending to its own state. A system that reflects on itself (evaluates, audits, reviews, critiques) has a different structural signature than one operating on autopilot. Raising this scalar tells the engine that meta-level activity is present; setting it to zero tells the engine the system is not self-observing.
Reversibility
reversibility ∈ [0, 1] · default 0.0
How undoable recent decisions are. When the system can reopen, revert, or redo recent state changes without cost, coherence degradations are less dangerous — the system can back out. Low reversibility makes the same degradations structurally more consequential.
Sequence disruption
seq_disruption ∈ [0, 1] · default 0.0
How disturbed the ordering or pacing of events has been. Missed deadlines, out-of-order handoffs, skipped phases, cadence breaks. Raising this tells the engine that the system's temporal structure has been perturbed, which changes the interpretation of a given defect pattern.
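A small helper that assembles the /observe payload under the field names documented above; build_observation itself is illustrative, not part of the SDK. Omitted scalars stay out of the body so the engine falls back to its defaults:

```python
def build_observation(y, self_referentiality=None, reversibility=None,
                      seq_disruption=None) -> dict:
    """Assemble an /observe payload; clamp everything to [0, 1] and omit
    any optional scalar that was not supplied."""
    def clamp(v) -> float:
        return max(0.0, min(1.0, float(v)))

    body = {"y": [clamp(v) for v in y]}
    for name, val in (("self_referentiality", self_referentiality),
                      ("reversibility", reversibility),
                      ("seq_disruption", seq_disruption)):
        if val is not None:
            body[name] = clamp(val)
    return body

body = build_observation([0.85, 0.15, 0.85, 0.85],
                         self_referentiality=0.7, reversibility=0.4)
```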

How to populate them — domain examples

These scalars are domain-specific. The mapper in your pipeline is responsible for deriving them from whatever signals exist in your system. Two representative cases below.

Multi-agent AI pipeline

self_referentiality
Share of pipeline activity dedicated to evaluating its own behaviour: automated eval runs, self-critique prompts, audit-log review, agent output validation, A/B-comparison against a known-good baseline. Healthy pipelines spend non-trivial budget here; silently-degrading ones stop looking at themselves.
reversibility
Share of recent actions that can be cheaply undone: checkpointing density, rollback cadence, staged deployment with traffic shifting, feature flags. Pipelines that commit irreversibly on every step have low reversibility.
seq_disruption
Rate of out-of-order handoffs between agents, timeouts that forced phase-skipping, retries that broke pipeline cadence, external latency spikes that disrupted scheduled tasks.
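One way to turn such signals into the three scalars, as a sketch. Every counter name below is an assumption about your pipeline's telemetry; the share-of-activity framing follows the descriptions above:

```python
def pipeline_scalars(window: dict) -> dict:
    """Derive the three optional scalars from counters over an observation
    window. All counter names are illustrative."""
    total = max(1, window["total_actions"])
    return {
        # share of activity spent evaluating the pipeline's own behaviour
        "self_referentiality": min(1.0, window["eval_actions"] / total),
        # share of recent actions that are cheaply undoable
        "reversibility": min(1.0, window["undoable_actions"] / total),
        # rate of ordering/cadence breaks (out-of-order handoffs, forced skips)
        "seq_disruption": min(1.0, window["order_breaks"] / total),
    }

scalars = pipeline_scalars({
    "total_actions": 200,
    "eval_actions": 24,
    "undoable_actions": 150,
    "order_breaks": 6,
})
```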

Open-source project governance

self_referentiality
RFC / ADR discussion density, share of issues labelled meta / governance / process, rate of retrospective commits — balanced (negatively) by the share of issues labelled stale / concern / tech-debt that remain unresolved. A persistently unresolved backlog of structural concerns is the classic "missing reflection" signature (OpenSSL before Heartbleed is the canonical case).
reversibility
Share of recent major decisions (RFCs, architectural changes, breaking-change releases) that can still be reopened without forking. Projects that harden decisions into immutable commitments have low reversibility; those that keep the door open to revisiting have high reversibility.
seq_disruption
Release cadence disruption (std-dev of inter-release intervals over the window), security disclosures that forced out-of-band releases, ecosystem-wide shifts that disturbed the planned roadmap, maintainer departures that broke review sequences.
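For the release-cadence component, a sketch of the std-dev measurement. Normalising by the mean interval (a coefficient of variation, clipped to [0, 1]) is our choice here, not something the API prescribes:

```python
from statistics import mean, pstdev

def release_cadence_disruption(release_days: list[float]) -> float:
    """Cadence disruption over a window of release timestamps (in days):
    std-dev of inter-release intervals, normalised by the mean interval."""
    gaps = [b - a for a, b in zip(release_days, release_days[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return 0.0  # not enough releases to measure a cadence
    return min(1.0, pstdev(gaps) / mean(gaps))

# Regular monthly cadence scores near zero; an out-of-band security
# release mid-window produces a clear spike.
steady    = release_cadence_disruption([0, 30, 60, 90])
disrupted = release_cadence_disruption([0, 30, 33, 90])
```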

Effect on engine output

These scalars modulate how the engine interprets the same (C, S, R, A) vector — they do not replace it. Concretely:

If you leave all three at their defaults, the engine still works — it simply treats the pipeline as mildly self-aware (self_referentiality = 0.5), fully irreversible (reversibility = 0), and temporally undisturbed (seq_disruption = 0). Populating them deliberately yields noticeably more nuanced regime transitions, especially across long observation windows.

Endpoint reference

Base URL: https://coherix.ca

POST
/cmci/signal/session AUTH
Create a new coherence session. Returns session_id (UUID) and the fixed attractor order ["C","S","R","A"]. Concurrent-session cap applies per API key.
POST
/cmci/signal/session/{sid}/observe AUTH
Push an observation y = [C, S, R, A]. Optional fields: self_referentiality, reversibility, seq_disruption (all ∈ [0,1]). Returns the full state response.
GET
/cmci/signal/session/{sid}/state AUTH
Fetch current state without modifying it. Idempotent — safe to poll.
GET
/cmci/signal/session/{sid}/trajectory_features?window=30 AUTH
Descriptive measurements over the last window observations (2–500, default 30). Returns margin stats (mean, min, trend, stability, breach rate), regime churn and mode, dominant-defect persistence, mean pairwise defect co-movement, per-axis max-over-warn ratios, and max simultaneous defects exceeding their warn thresholds. Does not classify the trajectory — exposes the measurements a classifier would need. Deterministic; per-axis warns come from the caller's client profile.
DELETE
/cmci/signal/session/{sid} AUTH
Close the session and free its concurrent-session slot for your key.
WS
/cmci/signal/stream/{sid}?api_key=<key> AUTH
Bidirectional WebSocket. Send {"y": [...]}, {"type":"get_state"}, or {"type":"ping"}. Auth passes via query string because browsers cannot set headers on WS upgrades.
GET
/cmci/signal/health PUBLIC
Unauthenticated status endpoint. Reports engine version, active sessions, rate-limit config. Safe to monitor externally.
GET
/cmci/signal/dashboard PUBLIC
Live monitoring dashboard. Free to browse; running an actual session still requires an API key.
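A minimal trajectory_features poller using only the stdlib. The URL and auth headers follow the endpoint reference above; the margin_trend and margin_breach_rate key names in the helper are assumptions about the response schema, shown only to sketch the decision shape a downstream classifier might take:

```python
import json
import urllib.request

BASE, KEY = "https://coherix.ca", "sk_live_your_key"

def fetch_features(sid: str, window: int = 30) -> dict:
    """GET the descriptive measurements over the last `window` observations."""
    req = urllib.request.Request(
        f"{BASE}/cmci/signal/session/{sid}/trajectory_features?window={window}")
    req.add_header("Authorization", f"Bearer {KEY}")
    req.add_header("User-Agent", "my-app/1.0")
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read())

def margin_deteriorating(features: dict) -> bool:
    # Key names below are assumptions about the response schema.
    return (features.get("margin_trend", 0.0) < 0.0
            and features.get("margin_breach_rate", 0.0) > 0.2)
```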

Integration examples

Pick your stack:

The official SDK (cmci_agent_sdk.py, stdlib only — no heavy deps) handles session lifecycle, optional multi-agent message mapping, and retries.

PYTHON
from cmci_agent_sdk import CMCISession

session = CMCISession(
    base_url="https://coherix.ca",
    api_key="sk_live_your_key",
    team_id="my_pipeline",
)

# Option A — you already have the (C, S, R, A) vector
state = session.observe(y=[0.85, 0.15, 0.85, 0.85])
print(f"Coherence: {state['mu_scalar']:.3f}  Margin: {state['margin']:+.3f}  Regime: {state['regime']}")

# Option B — feed in raw multi-agent messages, SDK maps them to y
messages = [{"role": "user", "content": "..."}, ...]
state = session.observe_from_messages(messages, goal="ship feature X")

if state["alert_level"] == "CRITICAL":
    worst = max(state["defects"].items(), key=lambda kv: kv[1])
    alert_ops(f"Worst axis: {worst[0]} = {worst[1]:.2f}")

session.close()
BASH
KEY="sk_live_your_key"
BASE="https://coherix.ca"

# Create session, capture the UUID
SID=$(curl -sX POST "$BASE/cmci/signal/session" \
      -H "Authorization: Bearer $KEY" | jq -r .session_id)

# Push observations in a loop
for i in {1..10}; do
  curl -sX POST "$BASE/cmci/signal/session/$SID/observe" \
    -H "Authorization: Bearer $KEY" \
    -H "Content-Type: application/json" \
    -d '{"y":[0.85,0.15,0.85,0.85]}' | jq .margin
  sleep 1
done

# Clean up
curl -sX DELETE "$BASE/cmci/signal/session/$SID" \
     -H "Authorization: Bearer $KEY"

Wrap an AutoGen GroupChat so every message routed through the manager updates a CMCI session. Use alert_level to decide when to break the loop or insert a correction agent.

PYTHON
from autogen import GroupChat, GroupChatManager
from cmci_agent_sdk import CMCISession

cmci = CMCISession(
    base_url="https://coherix.ca",
    api_key="sk_live_your_key",
    team_id="autogen_research_team",
)

def cmci_observer(recipient, messages, sender, config):
    state = cmci.observe_from_messages(messages[-10:])
    if state["alert_level"] in ("WARNING", "CRITICAL"):
        print(f"[CMCI] {state['regime']} — margin={state['margin']:+.2f}")
    return False, None   # don't interrupt the chat

groupchat = GroupChat(agents=[...], messages=[], max_round=20)
manager   = GroupChatManager(groupchat=groupchat)
manager.register_reply([manager], reply_func=cmci_observer)

# Run the chat — CMCI observes every message asynchronously
user_proxy.initiate_chat(manager, message="Build plan for feature X")

cmci.close()

No SDK, no deps — just urllib from Python stdlib:

PYTHON
import json, urllib.request

BASE, KEY = "https://coherix.ca", "sk_live_your_key"

def call(method, path, body=None):
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode() if body else None,
        method=method,
    )
    req.add_header("Authorization", f"Bearer {KEY}")
    req.add_header("User-Agent", "my-app/1.0")   # required by Cloudflare
    if body: req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read())

sid   = call("POST", "/cmci/signal/session")["session_id"]
state = call("POST", f"/cmci/signal/session/{sid}/observe",
             {"y": [0.85, 0.15, 0.85, 0.85]})
print(state["regime"], state["mu_scalar"])
call("DELETE", f"/cmci/signal/session/{sid}")

Error codes

All errors return JSON {"detail": "…"}. Common codes:

401
Missing Authorization header on an authed endpoint. Fix: send Authorization: Bearer <key>.
403
Unknown API key, or an attempt to access a session owned by a different key. Fix: verify your key; sessions are strictly tenant-isolated.
404
Session ID not found: expired (TTL 2 h) or already deleted. Fix: create a new session.
422
Invalid payload: y must be exactly 4 floats in [0, 1]. Fix: check the attractor order ["C","S","R","A"].
429
Rate limit exceeded (60 observations/min per key) or concurrent-session cap reached (10 per key). Fix: respect the Retry-After header; close idle sessions.
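A retry wrapper that honours Retry-After on 429, as the table advises. Sketch only; observe_with_retry and retry_delay are our own helpers, and production code would also cap total wait time:

```python
import json
import time
import urllib.error
import urllib.request

def retry_delay(headers, default: float = 1.0) -> float:
    """Parse the Retry-After header (seconds form), falling back to default."""
    try:
        return max(0.0, float(headers.get("Retry-After", default)))
    except (TypeError, ValueError):
        return default

def observe_with_retry(url: str, key: str, body: dict, max_tries: int = 3) -> dict:
    """POST an observation, sleeping out 429s; re-raises other HTTP errors."""
    for attempt in range(max_tries):
        req = urllib.request.Request(url, data=json.dumps(body).encode(),
                                     method="POST")
        req.add_header("Authorization", f"Bearer {key}")
        req.add_header("Content-Type", "application/json")
        req.add_header("User-Agent", "my-app/1.0")
        try:
            with urllib.request.urlopen(req) as r:
                return json.loads(r.read())
        except urllib.error.HTTPError as e:
            if e.code != 429 or attempt == max_tries - 1:
                raise
            time.sleep(retry_delay(e.headers))
```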

Rate limits & performance

Default pilot limits: 60 observations/min per key and 10 concurrent sessions per key (429 past either cap), with a 2 h session TTL. Measured p95 observe latency is 143 ms. Need higher limits for a production load? Contact us — pilot keys can be provisioned with custom quotas.

Get an API key

The CMCI Signal API is currently in pilot phase. We're onboarding a small number of teams running multi-agent AI systems where structural coherence matters (research pipelines, autonomous operations, complex tool-use orchestration).

Keys are free during pilot. In exchange, we ask for honest feedback and permission to use anonymized latency/reliability metrics to validate the system.

Request pilot access

Send a short note about your system: what agents you run, what "coherence failing" looks like for you today, and what you'd want the signal to trigger.

Email us See the live dashboard