Starter Runtime Tutorial

The manifesto-starter-runtime/ directory contains a standalone, self-contained runtime with its own main.py, module package, and interaction endpoint. It is separate from the root-level system and is designed as a ready-to-fork template.

Architecture

manifesto-starter-runtime/
├── main.py            ← FastAPI app with /pulse and /status
└── modules/
    ├── __init__.py    ← Package exports all 4 modules
    ├── scheduler.py   ← 234 LOC — tick cycle + watchdog
    ├── reasoning.py   ← 343 LOC — hybrid LLM + rules
    ├── memory.py      ← 374 LOC — SQLite storage + 3 types
    └── monitor.py     ← 145 LOC — sanitization + rate limiting

The /pulse Endpoint

This is the primary interaction loop. Every request passes through a four-step pipeline:

POST /pulse
{
"prompt": "scan the perimeter",
"context_id": "session_001"
}

Step 1: Security Check

The health monitor checks the input against a blocklist of dangerous patterns:

# Blocked patterns (regex)
__class__, __bases__, __subclasses__, __import__, __builtins__
eval(), exec(), compile(), __import__()
os.system(), os.popen(), subprocess
rm -rf, chmod 777, curl | bash

If the input matches any pattern, the request is rejected with an HTTP 400 response.
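A minimal sketch of what such pattern-based screening might look like. The pattern list mirrors the blocklist above, but the function name `is_safe` and the exact regexes are illustrative, not the actual monitor.py API:

```python
import re

# Illustrative regexes covering the blocked patterns listed above.
BLOCKED_PATTERNS = [
    r"__class__", r"__bases__", r"__subclasses__", r"__builtins__",
    r"\beval\s*\(", r"\bexec\s*\(", r"\bcompile\s*\(", r"__import__",
    r"os\.system", r"os\.popen", r"\bsubprocess\b",
    r"rm\s+-rf", r"chmod\s+777", r"curl\s*\|\s*bash",
]
_BLOCKED = [re.compile(p, re.IGNORECASE) for p in BLOCKED_PATTERNS]

def is_safe(prompt: str) -> bool:
    """Return False if the prompt matches any dangerous pattern."""
    return not any(p.search(prompt) for p in _BLOCKED)
```

In the endpoint, a failed check would translate into raising `HTTPException(status_code=400)`.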

Step 2: Rate Limiting

Rate limiting is applied per session (default: 60 requests/minute), tracked as an in-memory sliding window.
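A sliding-window limiter like the one described can be sketched with a deque of timestamps per session. The class and method names here are assumptions for illustration, not the actual monitor.py interface:

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """In-memory per-session sliding-window rate limiter (sketch)."""

    def __init__(self, max_requests: int = 60, window_seconds: float = 60.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self._hits: dict[str, deque] = defaultdict(deque)

    def allow(self, session_id: str) -> bool:
        now = time.monotonic()
        hits = self._hits[session_id]
        # Drop timestamps that have aged out of the window.
        while hits and now - hits[0] > self.window:
            hits.popleft()
        if len(hits) >= self.max_requests:
            return False
        hits.append(now)
        return True
```

A sliding window avoids the burst-at-the-boundary problem of fixed-window counters: the limit always applies to the trailing 60 seconds, not to calendar minutes.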

Step 3: Memory Retrieval

The memory store searches for relevant entries matching the prompt:

memories = memory_store.recall(request.prompt, limit=5)

Any entries found are passed as context to the reasoning engine.
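One plausible shape for `recall` is a keyword match over the SQLite store. The table name, column names, and matching strategy below are assumptions for illustration; the real memory.py may index or rank entries differently:

```python
import sqlite3

def recall(conn: sqlite3.Connection, prompt: str, limit: int = 5) -> list[tuple]:
    """Naive keyword recall: match any prompt word against stored content.
    Schema assumed: memories(content TEXT, kind TEXT, created_at REAL)."""
    words = [w for w in prompt.lower().split() if len(w) > 2]
    if not words:
        return []
    clause = " OR ".join("content LIKE ?" for _ in words)
    params = [f"%{w}%" for w in words] + [limit]
    return conn.execute(
        f"SELECT content, kind FROM memories WHERE {clause} "
        "ORDER BY created_at DESC LIMIT ?",
        params,
    ).fetchall()
```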

Step 4: Reasoning

The reasoning engine analyzes the system's state and produces decisions:

# Primary: LLM-powered analysis
decisions = await reasoning.analyze_llm(
metrics=runtime.metrics.to_dict(),
recent_entries=memory_context,
health_summary=monitor.summary(),
)

# Fallback: rule-based heuristics
decisions = reasoning.analyze_rules(
metrics=runtime.metrics.to_dict(),
health_summary=monitor.summary(),
)

The interaction is then stored in the memory store as an episodic entry.

Response

{
"status": "active",
"response": "scan→perimeter (threat assessment needed)",
"context_id": "session_001",
"tick_count": 42
}

The /status Endpoint

Returns full runtime diagnostics:

curl http://localhost:8000/status
{
"metrics": {
"active": true,
"tick_count": 142,
"uptime_seconds": 1420,
"errors": 0
},
"reasoning": {
"provider": "ollama",
"model": "llama3.2",
"decisions_made": 23,
"llm_calls": 15
},
"monitor": {
"total_components": 3,
"quarantined": [],
"degraded": []
},
"memory": {
"total_entries": 87,
"db_size_mb": 0.4
}
}

Watchdog Escalation

The scheduler has a 3-tier watchdog for consecutive tick failures:

Consecutive failures    Action
5                       Pause for 60 seconds, then retry
10                      Disable autonomous mode (reasoning stops spawning)
20                      Terminate the process entirely

This prevents a broken module from consuming infinite resources.
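The escalation logic reduces to a threshold ladder over the consecutive-failure count. The function name and returned labels below are illustrative, not the scheduler.py API; the thresholds match the table above:

```python
# Illustrative thresholds matching the escalation table above.
PAUSE_AT, DISABLE_AT, TERMINATE_AT = 5, 10, 20

def watchdog_action(consecutive_failures: int) -> str:
    """Map a consecutive-failure count to an escalation tier (sketch)."""
    if consecutive_failures >= TERMINATE_AT:
        return "terminate"            # e.g. sys.exit(1)
    if consecutive_failures >= DISABLE_AT:
        return "disable_autonomous"   # reasoning stops spawning
    if consecutive_failures >= PAUSE_AT:
        return "pause_60s"            # back off, then retry
    return "continue"
```

Note that checks run highest threshold first, so each tier strictly supersedes the one below it; a successful tick would reset the counter to zero.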

Running It

cd manifesto-starter-runtime
pip install -r requirements.txt
uvicorn main:app --reload
caution

The starter runtime uses FastAPI's deprecated @app.on_event lifecycle hooks. The root-level system uses the modern lifespan context manager. Both work — the starter prioritizes simplicity.
