The Tick Cycle

Every 10 seconds, the runtime executes a complete processing cycle. This page documents the full runtime behavior implemented in runtime.py (393 LOC).

The 5 Phases

┌──────────┐    ┌──────────┐    ┌─────────────┐    ┌────────────┐    ┌───────────┐
│ Reflexes │ →  │  Events  │ →  │ Maintenance │ →  │ Telemetry  │ →  │ Reasoning │
│  @every  │    │   @on    │    │ prune+merge │    │ metric log │    │  think()  │
└──────────┘    └──────────┘    └─────────────┘    └────────────┘    └───────────┘
 every tick      every tick      every 10th         every 30th        every 5th
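
The cadence above can be sketched as a modulo check per tick. This assumes "every Nth" means "when the tick count is divisible by N"; the function name is hypothetical and the real scheduler in runtime.py may differ:

```python
def phases_for_tick(tick: int) -> list[str]:
    """Return the phases that run on a given tick, per the cadence above.

    Sketch only: assumes simple modulo scheduling on the tick counter.
    """
    phases = ["reflexes", "events"]   # run every tick
    if tick % 10 == 0:
        phases.append("maintenance")  # every 10th tick
    if tick % 30 == 0:
        phases.append("telemetry")    # every 30th tick
    if tick % 5 == 0:
        phases.append("reasoning")    # every 5th tick
    return phases
```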

Phase 1: Reflexes

The runtime scans all registered reflexes and fires any whose schedule has elapsed.

Schedule-based (@every):

# If enough time has passed since the last fire, execute
if now - reflex.last_fired >= reflex.interval_seconds:
    self._fire_reflex(reflex)

Interval parsing:

Input    Seconds
30s      30
1m       60
5m       300
1h       3600
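
The table above can be implemented with a small helper. A minimal sketch, assuming only the `s`/`m`/`h` units shown; the name `parse_interval` is hypothetical:

```python
def parse_interval(spec: str) -> int:
    """Convert a shorthand like '30s', '5m', or '1h' into seconds."""
    units = {"s": 1, "m": 60, "h": 3600}
    unit = spec[-1]
    if unit not in units:
        raise ValueError(f"unknown interval unit in {spec!r}")
    return int(spec[:-1]) * units[unit]
```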

Auto-scanning: On boot, the runtime scans all .sov files in pipelines/ for @every and @on comment directives and registers them automatically.
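
The directive scan can be sketched with a regular expression, assuming directives take the form `// @every("30s")` or `// @on("event_name")` as in the registration examples on this page. The function name is hypothetical:

```python
import re

# Matches comment directives like // @every("30s") or // @on("pipeline_failed")
DIRECTIVE_RE = re.compile(r'//\s*@(every|on)\("([^"]+)"\)')

def scan_directives(source: str) -> list[tuple[str, str]]:
    """Return (trigger_type, trigger_value) pairs found in a .sov source."""
    return DIRECTIVE_RE.findall(source)
```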

Phase 2: Events

Events emitted by pipelines or the API are queued and processed each tick.

def emit_event(self, event_name: str, data: dict | None = None):
    self._event_queue.append({"event": event_name, "data": data or {}})
    self.metrics.events_emitted += 1

Each queued event fires every @on reflex whose trigger value matches the event name:

for reflex in self._reflexes:
    if reflex.trigger_type == "on" and reflex.trigger_value == event["event"]:
        self._fire_reflex(reflex)

Phase 3: Maintenance (Every 10th Tick)

Data maintenance compresses and prunes stored entries:

  1. Consolidation — compress old episodic entries into semantic knowledge
  2. Decay — apply importance decay to unaccessed entries

consolidated = self.memory.consolidate(max_age_hours=72)
decayed = self.memory.decay(factor=0.98)

Entries below 0.05 importance after decay are deleted permanently.
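
The decay-and-prune step can be sketched as a pure function over an importance map, using the factor (0.98) and deletion floor (0.05) stated above. The standalone function is an assumption for illustration; the real store mutates its entries in place:

```python
def decay(entries: dict[str, float], factor: float = 0.98, floor: float = 0.05) -> dict[str, float]:
    """Apply multiplicative importance decay, then drop entries below the floor."""
    decayed = {key: importance * factor for key, importance in entries.items()}
    # Entries whose importance falls below the floor are deleted permanently.
    return {key: imp for key, imp in decayed.items() if imp >= floor}
```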

Phase 4: Telemetry (Every 30th Tick)

The runtime writes its own metrics to the memory store:

self._store(
    f"Runtime metrics: tick={self.metrics.tick_count}, "
    f"pipelines={self.metrics.pipelines_executed}, "
    f"errors={self.metrics.errors}",
    tags=["telemetry", "metrics"],
    importance=0.3,
)

This creates a temporal record of system health over time, queryable by the reasoning engine or external tools.

Phase 5: Reasoning (Every 5th Tick)

The reasoning engine analyzes the runtime's state and makes autonomous decisions:

decisions = self.reasoning.analyze(
    metrics=self.metrics.to_dict(),
    recent_entries=[...],
    health_summary=self.monitor.health_summary(),
)

Decisions are action objects (e.g., spawn, scan, consolidate, quarantine). The runtime executes them immediately — spawning new pipelines, adjusting parameters, or quarantining unhealthy components.
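
Executing decisions amounts to a dispatch on the action name. A sketch, assuming decisions are dicts with an "action" key and using the action names listed above; the handler-table shape is an assumption, not the runtime's actual API:

```python
def execute_decisions(decisions: list[dict], handlers: dict) -> list[str]:
    """Dispatch each action object to its handler; return the actions executed.

    Unknown actions are skipped rather than raising, so one bad decision
    cannot halt the tick.
    """
    executed = []
    for decision in decisions:
        handler = handlers.get(decision["action"])
        if handler is not None:
            handler(decision)
            executed.append(decision["action"])
    return executed
```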

Runtime Metrics

The RuntimeMetrics dataclass tracks the system's full state:

@dataclass
class RuntimeMetrics:
    active: bool = False
    tick_count: int = 0
    started_at: float = 0.0
    last_tick: float = 0.0
    pipelines_executed: int = 0
    entries_consolidated: int = 0
    entries_pruned: int = 0
    reflexes_fired: int = 0
    decisions_made: int = 0
    quarantined: int = 0
    events_emitted: int = 0
    errors: int = 0
    uptime_seconds: float = 0.0

Reflex Registration

Via API

curl -X POST http://localhost:8000/engine/reflex \
  -H "Content-Type: application/json" \
  -d '{"pipeline_name":"scanner","trigger_type":"every","trigger_value":"5m"}'

Via Comment Directive

// @every("30s")
pipeline status_logger {
  cx_remember("Status check", ["telemetry"], 0.2)
}

Via Event Trigger

// @on("pipeline_failed")
pipeline failure_handler {
  print("A pipeline failed!")
}

Pipeline Execution

When a pipeline is executed (via reflex or API), the runtime:

  1. Loads the .sov source from pipelines/
  2. Compiles it through sovereign_lang (lexer → parser → codegen)
  3. Runs sandbox analysis (3-layer security check)
  4. Executes the generated Python in a restricted environment
  5. Records health outcome via the monitor
  6. Processes any events emitted by the pipeline
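
The six steps can be wired together as a single function. To keep the sketch self-contained, each stage is an injected callable; the parameter names are hypothetical and do not reflect the runtime's real method names:

```python
def execute_pipeline(load, compile_sov, sandbox_ok, run, record, drain) -> bool:
    """Run the six execution steps in order; stop early if the sandbox rejects."""
    source = load()              # 1. load the .sov source from pipelines/
    code = compile_sov(source)   # 2. compile via sovereign_lang (lexer -> parser -> codegen)
    if not sandbox_ok(code):     # 3. sandbox analysis gate
        return False
    ok = run(code)               # 4. execute in a restricted environment
    record(ok)                   # 5. record health outcome via the monitor
    drain()                      # 6. process events emitted by the pipeline
    return ok
```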

Next: Sovereign Script →