Lightweight Local Intelligence Layer

16 Total Brains: Site Operator, 0meg4kAI, Central Command, and 13 Cabinet Executive Brains.

Each brain is a scoped local persona module over the included company knowledge base. The design is intentionally light: no GPU required, no paid API required, and no fake autonomy claims.
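A "scoped persona module" can be pictured as a small profile record that names the brain and lists the knowledge-base topics it is allowed to answer from. This is a minimal sketch; the field names and `id` values are illustrative assumptions, not the site's actual schema.

```typescript
// Hypothetical shape of a local brain profile. Field names are
// illustrative assumptions, not the site's real schema.
interface BrainProfile {
  id: string;
  title: string;     // display name shown in the brain selector
  scope: string[];   // knowledge-base topic tags this brain may draw on
  isRouter: boolean; // true only for the Central Company Command Brain
}

const brains: BrainProfile[] = [
  {
    id: "central-command",
    title: "Central Company Command Brain",
    scope: ["*"], // "*" = unrestricted: the router sees every lane
    isRouter: true,
  },
  {
    id: "cfo",
    title: "CFO Executive Brain",
    scope: ["finance", "budget"], // stays inside the finance lane
    isRouter: false,
  },
];
```

Because each profile is plain JSON-shaped data, the whole roster can ship as a static file and load with no model weights at all.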

16 brains loaded locally

Select a brain

Loading local brain profiles...

Ask the active brain

This uses local retrieval and cabinet scoping. The answer stays inside the selected executive's lane unless you choose the Central Company Command Brain.
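The retrieval-plus-scoping step described above can be sketched as a two-stage filter: first restrict the knowledge entries to the active brain's lane, then do a simple keyword match against the question. A minimal sketch, assuming entries are tagged by topic and a `"*"` scope marks the unrestricted Central Command brain; the function and field names are assumptions.

```typescript
// Hypothetical knowledge-base entry; field names are assumptions.
interface KnowledgeEntry {
  topic: string; // lane tag, e.g. "finance"
  text: string;  // the retrievable content
}

// Stage 1: cabinet scoping. Stage 2: naive keyword retrieval.
// A "*" scope (the Central Command brain) bypasses the lane filter.
function retrieve(
  entries: KnowledgeEntry[],
  scope: string[],
  query: string
): KnowledgeEntry[] {
  const inLane = scope.includes("*")
    ? entries
    : entries.filter((e) => scope.includes(e.topic));
  const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
  return inLane.filter((e) =>
    terms.some((t) => e.text.toLowerCase().includes(t))
  );
}
```

The scoping stage runs before retrieval, which is what keeps an executive brain's answers inside its lane even when the query would match text elsewhere in the knowledge base.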

Retrieved proof sources

Operating truth

These are not 16 heavy AI models. They are 16 lightweight local brains: the Site Operator, 0meg4kAI, one Central Command router, and 13 scoped executive assistants powered by local JSON knowledge. That keeps the site deployable as a static package while leaving a path to plug in Ollama, llama.cpp, or another OpenAI-compatible local endpoint later.
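The plug-in path mentioned above can be sketched as a single request builder plus a thin fetch wrapper aimed at the standard OpenAI-compatible `/v1/chat/completions` route, which both Ollama and llama.cpp's server expose. The base URL, model name, and helper names here are assumptions for illustration, not the site's actual code.

```typescript
// Build the payload for an OpenAI-compatible chat endpoint.
// Separated from the network call so it can be tested offline.
function buildChatRequest(baseUrl: string, model: string, prompt: string) {
  return {
    url: `${baseUrl}/v1/chat/completions`, // standard OpenAI-compatible route
    body: {
      model,
      messages: [{ role: "user", content: prompt }],
      stream: false,
    },
  };
}

// Thin wrapper: same ask flow, pointed at a local model server
// (e.g. Ollama on http://localhost:11434) instead of local JSON lookup.
async function askLocalModel(
  baseUrl: string,
  model: string,
  prompt: string
): Promise<string> {
  const { url, body } = buildChatRequest(baseUrl, model, prompt);
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the request shape is the common OpenAI-compatible one, swapping llama.cpp's server for Ollama (or any similar endpoint) is a base-URL and model-name change, not a rewrite.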