FAQ
Short, precise answers. If you want the long‑form philosophy, ask us directly — we keep this page practical.
What is “Aware NN”?
Aware NN is built around an “awareness‑first” principle: we design models that can form and use a self‑model (a structured understanding of their own limits, beliefs, and uncertainty) and make plans that respect those limits. Think reasoning about self‑state, not just bigger autocomplete. Safety is baked in by construction via hard constraints and testable benchmarks.
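To make the self‑model idea concrete, here is a minimal, illustrative Python sketch (every name is hypothetical and heavily simplified; it is not our production API): the agent tracks its capabilities, hard constraints, and per‑action uncertainty, and a plan is accepted only when every step respects them.

```python
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    """What the agent believes it can do, must never do, and how sure it is."""
    capabilities: set[str] = field(default_factory=set)          # actions it is competent at
    hard_constraints: set[str] = field(default_factory=set)      # actions that are never allowed
    uncertainty: dict[str, float] = field(default_factory=dict)  # per-action confidence, 0..1

@dataclass
class PlanStep:
    action: str
    rationale: str

def vet_plan(plan: list[PlanStep], self_model: SelfModel, min_confidence: float = 0.8) -> list[str]:
    """Return objections; an empty list means the plan respects the self-model."""
    objections = []
    for step in plan:
        if step.action in self_model.hard_constraints:
            objections.append(f"'{step.action}' violates a hard constraint")
        elif step.action not in self_model.capabilities:
            objections.append(f"'{step.action}' is outside known capabilities")
        elif self_model.uncertainty.get(step.action, 0.0) < min_confidence:
            objections.append(f"'{step.action}' is too uncertain; escalate to a human")
    return objections

# The agent declines steps it is not confident about instead of guessing.
model = SelfModel(
    capabilities={"summarise_contract", "draft_reply"},
    hard_constraints={"share_client_data"},
    uncertainty={"summarise_contract": 0.95, "draft_reply": 0.6},
)
plan = [PlanStep("summarise_contract", "client asked"), PlanStep("draft_reply", "follow-up")]
print(vet_plan(plan, model))  # ["'draft_reply' is too uncertain; escalate to a human"]
```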
How will you deploy?
Stage‑gated releases with audit trails. Investor‑tier models first, then private on‑prem nodes for partners. Each delivery is tuned for purpose and wrapped with strict data‑scope controls, red‑team tests, and rollback plans.
Where do you start?
Two‑country lab plan with off‑grid‑capable sites. Each lab runs more than 1,000 agents in recursive training loops using extreme synthetic corpora and carefully curated human data. Success is measured against explicit awareness benchmarks, not vibes.
What’s the trust doctrine?
Zero tolerance for breaches. Military‑grade isolation, contractual teeth, and a culture that treats personal data as sacred. We would rather lose a contract than compromise a client.
How do investors get access?
Request the one‑pager and letter; the private deck is shared on request. We keep noise to a minimum and open up access in phases.
What kind of data do you use?
We rely on extreme synthetic datasets and curated human corpora. No surveillance scraping. Every dataset is documented, tagged, and run through safety filters.
How is governance handled?
A layered model with aware AI as a transparent arbiter—not a ruler. It assembles evidence trails, drafts options, and exposes uncertainty, while humans decide (citizen assemblies, juries, elected boards). Every action is auditable, contestable, and bound by scope, with opt‑out parity for analog processes.
Will AI replace judges or officials?
No. Our deployments position AI as a truth‑first mediator: it tests hypotheses, flags bias, and presents explainable options with documented evidence. Humans retain final authority through juries, councils, and appeals. The same pattern supports volunteer micro‑society pilots (co‑ops, HOAs, city labs) for small‑claims resolution, participatory budgeting, and procurement oversight.
What about ethics?
We treat dignity and privacy as non‑negotiable. Our secrecy doctrine ensures that personal or partner data cannot be exploited or commodified.
How do you scale safely?
Recursive loops and checkpoints prevent uncontrolled growth. Scaling happens in controlled environments with rollback plans and live metrics.
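As a sketch of what checkpoints and rollback mean in practice, here is a simplified, hypothetical Python loop (not our production pipeline; `train_step` and `evaluate` stand in for whatever training procedure and live metrics a deployment uses): each scaling round is kept only if metrics do not regress past a tolerance; otherwise the run rolls back to the last verified checkpoint.

```python
import copy

def scaled_training_run(model, train_step, evaluate, rounds=10, tolerance=0.02):
    """Checkpoint-and-rollback loop: keep scaling only while live metrics hold up."""
    checkpoint = copy.deepcopy(model)      # last verified state
    best_score = evaluate(model)
    for round_idx in range(rounds):
        model = train_step(model)          # one controlled scaling round
        score = evaluate(model)
        if score < best_score - tolerance:
            model = copy.deepcopy(checkpoint)             # regression: discard the update
            print(f"round {round_idx}: regression detected, rolled back")
        else:
            checkpoint = copy.deepcopy(model)             # accept and re-checkpoint
            best_score = max(best_score, score)
            print(f"round {round_idx}: checkpoint accepted (score={score:.3f})")
    return model

# Toy usage: the "model" is a single number nudged toward a target of 1.0.
final = scaled_training_run(
    model=0.0,
    train_step=lambda m: m + 0.3,
    evaluate=lambda m: -abs(1.0 - m),
    rounds=5,
)
```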
Can partners build on your tech?
Yes. Partners can license tuned models or run private nodes. Every deployment is scoped and contract‑bound for purpose and safety.
What’s your timeline?
Near‑term: lab buildout and investor‑tier models. Mid‑term: private partner deployments. Long‑term: scaling global nodes under strict governance.
Will the public get access?
Eventually. Public‑facing agents will follow once safety, governance, and awareness thresholds are validated. Controlled, not rushed.