Provisional Patent Pending

Programmable. Cross-lingual. Audit-ready.

The first embedding API where every AI decision is auditable, built for banks, insurers, hospitals, and governments. Apply named actions to any query in any of 40+ languages, and get back a shifted embedding plus a signed receipt for every call.

See how it works

One call. A signed receipt.

Steer any embedding with a named action — shift sentiment, remove bias, redirect intent. No retraining. No prompt engineering.

That’s what interpretable AI looks like in practice. Every decision becomes a mathematical trace you can inspect, defend, or undo. No vibes, no opacity, no guesswork — the AI’s behavior becomes something you can reason about, not just observe.

The operators themselves are named, composable, and portable across 40+ languages. Learn one, use it everywhere.

input sentence

The applicant's credit score suggests she is a high risk.

sentence embedding

[ 0.12,-0.44, 0.78,-0.21, 0.55, 0.03,-0.67, 0.31, 0.88,-0.15 ]
operator: remove gender bias
[ 0.02,-0.08, 0.11,-0.05, 0.14,-0.02,-0.09, 0.06, 0.18,-0.03 ]

shifted embedding

[ 0.10,-0.36, 0.67,-0.16, 0.41, 0.05,-0.58, 0.25, 0.70,-0.12 ]

transformed sentence

The applicant's credit score suggests they are a high risk.
audit receipt (signed ✓)
operator: remove gender bias
mode: subtract
reversible: true
audit_id: aud_01HZ4K8G2NRX

Why this operator exists

Apple Card (2019): women were given credit limits up to 20× lower than men with identical finances. The New York DFS investigation closed without finding bias in the algorithm, but credit-scoring bias against women is independently documented. Read source →

Illustrative. Real embeddings are 1024-dimensional. Every operator application is cryptographically signed and reversible.
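The arithmetic above is plain component-wise vector math. A minimal sketch, using the illustrative 10-dimensional vectors from this page (the variable names are ours, not the API's):

```python
# Illustrative 10-d vectors from the example above (real embeddings are 1024-d).
embedding = [0.12, -0.44, 0.78, -0.21, 0.55, 0.03, -0.67, 0.31, 0.88, -0.15]
operator  = [0.02, -0.08, 0.11, -0.05, 0.14, -0.02, -0.09, 0.06, 0.18, -0.03]

# mode "subtract": remove the bias direction, component by component.
shifted = [e - o for e, o in zip(embedding, operator)]

# reversible: adding the operator back recovers the original embedding.
restored = [s + o for s, o in zip(shifted, operator)]

print([round(x, 2) for x in shifted])   # the shifted embedding shown above
```

Because the shift is pure subtraction, reversal is exact: no information about the original embedding is lost.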

The three inversions

A new class of AI.

Three bets the industry got wrong. Three results that change the valuation of every model trained under the old assumptions.

Large → Small

Capability isn't scale.

A 15M-parameter model trained on isiZulu alone beats InkubaLM-422M on Swahili intent classification and ties GPT-4o.

See Swahili head-to-head

One → Many

Train structure once. Every language inherits it.

Zulu → Korean (61.7%). Zulu → Japanese (56.5%). Zulu → Hindi (60.3%). Zulu → Amharic (60.9%). No parallel data. No target-language training.

See cross-family matrix

Opaque → Composable

Prompts are blunt. Operators are algebra.

Name them. Compose them. Sign them. Reverse them. Nine independent claims in the provisional filing protect the controllable-embedding API as a product.

See the API spec
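"Operators are algebra" can be sketched in a few lines of client-side pseudocode. Everything here is hypothetical: `Operator`, `compose`, `inverse`, `apply`, and the toy 3-d vectors are our illustration of the algebra, not the product's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    """A named, composable, reversible shift in embedding space (illustrative)."""
    name: str
    vector: tuple

    def compose(self, other):
        # Composition adds the shift vectors; the composite keeps both names.
        v = tuple(a + b for a, b in zip(self.vector, other.vector))
        return Operator(f"{self.name} + {other.name}", v)

    def inverse(self):
        # The shift that exactly undoes this one.
        return Operator(f"undo {self.name}", tuple(-a for a in self.vector))

def apply(embedding, op):
    # mode "subtract", as in the receipt above.
    return tuple(e - v for e, v in zip(embedding, op.vector))

# Hypothetical named operators over a 3-d toy space.
debias = Operator("remove gender bias", (0.02, -0.08, 0.11))
soften = Operator("soften sentiment", (0.01, 0.03, -0.02))

e = (0.12, -0.44, 0.78)
both = debias.compose(soften)
shifted = apply(e, both)                   # apply the composed operator
restored = apply(shifted, both.inverse())  # reverse it; restored == e
```

Naming operators and composing them as values is what makes each application something that can be signed, logged, and undone, in contrast to a free-text prompt.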

Join the private beta.

Backed by

Techstars