The Four Starter Modules
Every SovCore runtime is built from modules — self-contained components that handle a specific function. The starter kit ships with four, implemented in the modules/ package.
```python
# modules/__init__.py — what's exported
from .scheduler import Scheduler, RuntimeMetrics
from .reasoning import ReasoningEngine, Decision
from .monitor import HealthMonitor, HealthRecord
from .memory import MemoryStore, Memory, MemoryType
```
⏱ Scheduler
File: modules/scheduler.py (234 LOC)
The scheduler is the master clock. It owns all other modules and drives the tick cycle.
Initialization
```python
runtime = Scheduler(name="starter", tick_interval=10.0)
# Creates: runtime.memory, runtime.reasoning, runtime.monitor
```
Runtime Metrics
```python
from dataclasses import dataclass

@dataclass
class RuntimeMetrics:
    active: bool = False
    tick_count: int = 0
    started_at: float = 0.0
    last_tick: float = 0.0
    errors: int = 0
    decisions_made: int = 0
    entries_consolidated: int = 0
    entries_pruned: int = 0
    uptime_seconds: float = 0.0
```
The 4-Phase Tick Cycle
Each tick executes:
- Monitor sweep — check component health
- Memory maintenance — consolidate and prune entries (every 10th tick)
- Reasoning analysis — rule-based + optional LLM logic (every 5th tick)
- Telemetry — log runtime metrics to memory (every 30th tick)
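The modulo-based phase dispatch can be sketched as follows. The interval constants and function name are illustrative, not the actual `Scheduler` internals:

```python
# Hypothetical sketch of the 4-phase tick dispatch.
MEMORY_EVERY = 10     # memory maintenance every 10th tick
REASONING_EVERY = 5   # reasoning analysis every 5th tick
TELEMETRY_EVERY = 30  # telemetry every 30th tick

def phases_for_tick(tick_count: int) -> list[str]:
    """Return the phases that run on a given tick; the monitor sweep runs every tick."""
    phases = ["monitor"]
    if tick_count % MEMORY_EVERY == 0:
        phases.append("memory")
    if tick_count % REASONING_EVERY == 0:
        phases.append("reasoning")
    if tick_count % TELEMETRY_EVERY == 0:
        phases.append("telemetry")
    return phases
```

Every 30th tick is a "full" tick: 30 is a multiple of both 10 and 5, so all four phases fire together.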
Watchdog Escalation
| Consecutive Failures | Action |
|---|---|
| 5 | Pause for 60 seconds, then retry |
| 10 | Disable autonomous mode |
| 20 | Terminate the process |
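The escalation ladder reduces to a simple threshold check. A sketch (function and action names are hypothetical):

```python
def watchdog_action(consecutive_failures: int) -> str:
    """Map a consecutive-failure count to the documented escalation action."""
    if consecutive_failures >= 20:
        return "terminate"          # kill the process
    if consecutive_failures >= 10:
        return "disable_autonomous" # stop autonomous mode
    if consecutive_failures >= 5:
        return "pause_60s"          # back off for 60 seconds, then retry
    return "ok"
```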
⚙️ Reasoning Engine
File: modules/reasoning.py (343 LOC)
Hybrid decision engine with automatic provider detection.
Decision Dataclass
```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str            # spawn, scan, consolidate, adjust, explore, ignore
    target: str = ""       # what to act on
    reason: str = ""       # why
    confidence: float = 0.0
    source: str = "rules"  # "rules" or "llm"
```
Decisions with confidence below 0.6 are discarded (`ReasoningEngine.CONFIDENCE_THRESHOLD`).
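A minimal sketch of the threshold filter, reusing the `Decision` shape above (`filter_decisions` is a hypothetical helper, not the engine's actual method):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    target: str = ""
    reason: str = ""
    confidence: float = 0.0
    source: str = "rules"

CONFIDENCE_THRESHOLD = 0.6

def filter_decisions(decisions: list[Decision]) -> list[Decision]:
    """Keep only decisions at or above the confidence threshold."""
    return [d for d in decisions if d.confidence >= CONFIDENCE_THRESHOLD]
```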
LLM Provider Detection
```python
from enum import Enum

class LLMProvider(Enum):
    AUTO = "auto"      # Detect best available
    OLLAMA = "ollama"  # Local, free
    NVIDIA = "nvidia"  # Free-tier cloud
    OPENAI = "openai"  # Paid cloud
```
Auto-detection order: Ollama → NVIDIA → OpenAI → rule-based fallback.
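The detection order might be sketched like this. The probing logic is stubbed out as a boolean, and the `NVIDIA_API_KEY` / `OPENAI_API_KEY` environment variable names are assumptions, not confirmed by the module:

```python
import os

def detect_provider(ollama_available: bool = False) -> str:
    """Sketch of the documented detection order: Ollama → NVIDIA → OpenAI → rules.

    Real detection would probe a local Ollama server (e.g. an HTTP check);
    here that is simplified to a flag.
    """
    if ollama_available:
        return "ollama"
    if os.environ.get("NVIDIA_API_KEY"):
        return "nvidia"
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    return "rules"  # rule-based fallback
```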
Rule-Based Heuristics
When no LLM is available, the engine applies these rules:
- Error threshold → if errors > 5, suggest a system scan
- Quarantine check → if components are quarantined, investigate
- Idle detection → if high uptime + low activity, suggest exploration
- Memory growth → if entries are growing fast, suggest consolidation
- Uptime milestone → if uptime exceeds thresholds, log the achievement
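The rules above could be sketched as a single pass over runtime state. All thresholds except the documented `errors > 5` are illustrative assumptions:

```python
def rule_decisions(errors: int, quarantined: int, uptime_h: float,
                   decisions_made: int, entry_growth_rate: float) -> list[str]:
    """Hypothetical sketch of the rule-based fallback heuristics."""
    actions = []
    if errors > 5:                           # documented error threshold
        actions.append("scan")
    if quarantined > 0:                      # quarantined components need a look
        actions.append("investigate")
    if uptime_h > 24 and decisions_made == 0:  # long uptime, no activity
        actions.append("explore")
    if entry_growth_rate > 10:               # entries/hour, illustrative cutoff
        actions.append("consolidate")
    return actions
```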
See LLM Providers → for full LLM configuration.
🔒 Health Monitor
File: modules/monitor.py (145 LOC)
Health Scoring Math
```python
QUARANTINE_THRESHOLD = 3  # consecutive failures to quarantine
HEALTH_DECAY = 0.85       # multiplier on failure
HEALTH_RECOVER = 1.02     # multiplier on success

# Score starts at 100.0
# Success: score = min(100.0, score * 1.02)
# Failure: score = max(0.0, score * 0.85)
```
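The scoring math translates directly into a small update function. A sketch, not the monitor's actual code:

```python
HEALTH_DECAY = 0.85
HEALTH_RECOVER = 1.02

def update_score(score: float, success: bool) -> float:
    """Multiplicative health update, clamped to [0.0, 100.0]."""
    if success:
        return min(100.0, score * HEALTH_RECOVER)
    return max(0.0, score * HEALTH_DECAY)
```

Because the update is multiplicative, a failure costs 15% of the *current* score while a success recovers only 2%, so a run of failures drags the score down much faster than successes rebuild it.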
Health Record
```python
from dataclasses import dataclass

@dataclass
class HealthRecord:
    name: str
    score: float = 100.0
    successes: int = 0
    failures: int = 0
    consecutive_failures: int = 0
    quarantined: bool = False
    quarantined_at: float = 0.0
```
Input Sanitization
6 regex patterns block dangerous input at the API boundary:
- `__class__`, `__bases__`, `__subclasses__`, `__import__`, `__builtins__`
- `eval()`, `exec()`, `compile()`
- `os.system()`, `os.popen()`, `subprocess`
- `sys.exit()`
- `rm -rf`, `chmod 777`, `curl | bash`
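A sketch of this kind of pattern-based sanitizer. The regexes below are illustrative approximations covering the listed inputs, not the module's actual six patterns:

```python
import re

# Illustrative blocklist, not modules/monitor.py's real patterns.
BLOCKED_PATTERNS = [
    re.compile(r"__(class|bases|subclasses|import|builtins)__"),  # dunder access
    re.compile(r"\b(eval|exec|compile)\s*\("),                    # dynamic execution
    re.compile(r"\bos\.(system|popen)\s*\(|\bsubprocess\b"),      # process spawning
    re.compile(r"\bsys\.exit\s*\("),                              # interpreter exit
    re.compile(r"rm\s+-rf|chmod\s+777|curl\s+.*\|\s*bash"),       # shell destruction
]

def is_safe(text: str) -> bool:
    """Return False if any blocked pattern appears in the input."""
    return not any(p.search(text) for p in BLOCKED_PATTERNS)
```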
Rate Limiting
Sliding window — 60 requests/minute per key. See Security → for full details.
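A sliding-window limiter along these lines can be sketched as follows. This is a minimal illustration of the documented 60 requests/minute policy, not the actual implementation:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Sliding window: at most `limit` requests per `window` seconds, per key."""

    def __init__(self, limit: int = 60, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key: str, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        while q and now - q[0] >= self.window:  # evict hits outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True
```

Unlike a fixed-bucket counter, the window slides with each request, so a burst straddling a minute boundary cannot double the effective rate.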
💾 Memory Store
File: modules/memory.py (374 LOC)
Three Memory Types
```python
from enum import Enum

class MemoryType(Enum):
    EPISODIC = "episodic"      # Events, interactions, experiences
    PROCEDURAL = "procedural"  # How-to knowledge, procedures
    SEMANTIC = "semantic"      # Facts, distilled knowledge
```
Memory Structure
```python
from dataclasses import dataclass

@dataclass
class Memory:
    id: str
    type: MemoryType
    content: str
    tags: list[str]
    importance: float     # 0.0–1.0
    created_at: float     # timestamp
    last_accessed: float  # timestamp
    access_count: int
    source: str           # who created it
    metadata: dict
```
Consolidation
Compresses old episodic entries into semantic knowledge:
```python
memory_store.consolidate(max_age_hours=72)
# Groups old episodic entries by tags
# Creates semantic summaries
# Deletes the original episodes
```
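The three steps in the comments might look like this as a standalone sketch. The entry shape and grouping-by-first-tag strategy are assumptions, not the store's real logic:

```python
import time
from collections import defaultdict

def consolidate(entries: list[dict], max_age_hours: float = 72):
    """Sketch: compress stale episodic entries into semantic summaries.

    Returns (summaries, survivors); originals in a summarized group are dropped.
    """
    cutoff = time.time() - max_age_hours * 3600
    old, survivors = [], []
    for e in entries:
        is_stale = e["type"] == "episodic" and e["created_at"] < cutoff
        (old if is_stale else survivors).append(e)

    groups = defaultdict(list)  # group stale episodes by their first tag
    for e in old:
        groups[e["tags"][0] if e["tags"] else "untagged"].append(e)

    summaries = [
        {"type": "semantic", "tags": [tag],
         "content": f"{len(group)} episodes about '{tag}'"}
        for tag, group in groups.items()
    ]
    return summaries, survivors
```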
Importance Decay
```python
memory_store.decay(factor=0.98)
# importance *= factor * (1 + log(access_count + 1))
# Entries accessed recently decay slower
# Below 0.05 importance → deleted
```
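The documented decay rule as a standalone function. Note that the multiplier exceeds 1.0 for frequently accessed entries, which is what lets them resist decay (this sketch applies the formula as written, not the store's actual code):

```python
import math

DELETE_THRESHOLD = 0.05  # entries below this importance are deleted

def decay_importance(importance: float, access_count: int,
                     factor: float = 0.98) -> float:
    """Apply the documented rule: importance *= factor * (1 + log(access_count + 1))."""
    return importance * factor * (1 + math.log(access_count + 1))
```

With `access_count = 0` the log term vanishes and the entry decays by the bare factor (0.98 per pass); a never-accessed entry therefore drifts steadily toward the deletion threshold.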
Identity Tag Immunity
Entries tagged with `identity` are exempt from decay and persist indefinitely. This is used for core configuration data that defines the runtime's behavior.
Safety Limits
```python
MAX_DB_SIZE_MB = 100    # Raises StorageFullError if exceeded
MAX_MEMORY_COUNT = 500  # Auto-prunes least important when exceeded
```
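The auto-prune behavior might be sketched as follows (hypothetical helper; the real store operates on `Memory` objects rather than plain dicts):

```python
def prune_least_important(memories: list[dict], max_count: int = 500) -> list[dict]:
    """Keep only the `max_count` most important entries."""
    if len(memories) <= max_count:
        return memories
    # Sort by importance, highest first, and drop the tail.
    return sorted(memories, key=lambda m: m["importance"], reverse=True)[:max_count]
```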
Adding More Modules
See Contributing → for the module interface contract and step-by-step guide.
Next: Tick Cycle →