The three inversions

A new class of AI.

Three bets the industry got wrong. Three results that change the valuation of every model trained under the old assumptions.

Large → Small

Capability isn't scale.

A 15M-parameter model, trained on one language in one epoch on a laptop, beats InkubaLM-422M on Swahili intent and ties GPT-4o.

See Swahili head-to-head

One → Many

Train structure once. Every language inherits it.

Zulu → Korean (61.7%). Zulu → Japanese (56.5%). Zulu → Hindi (60.3%). Zulu → Amharic (60.9%). No parallel data. No target-language training.

See cross-family matrix

Opaque → Composable

Prompts are blunt. Operators are algebra.

Name them. Compose them. Sign them. Reverse them. Nine independent claims in the provisional filing protect the controllable-embedding API as a product.

See the API spec
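The algebra can be pictured as vector arithmetic on embeddings. A minimal sketch of the idea, assuming each named operator is a fixed direction vector scaled by its alpha; the operator names and the toy 4-dimensional vectors here are illustrative, not the real sci-v3 directions:

```python
# Illustrative sketch of composable embedding operators.
# Assumption: an operator is a named direction vector; applying it adds
# alpha * direction to the embedding, so operators compose by addition
# and reverse by negating alpha. All vectors below are toy examples.

OPERATORS = {
    "sentiment_positive": [0.2, -0.1, 0.0, 0.4],
    "bias_remove":        [0.0, 0.3, -0.2, 0.1],
}

def apply_operators(embedding, ops):
    """Return the embedding shifted by each (operator_id, alpha) pair."""
    out = list(embedding)
    for op_id, alpha in ops:
        direction = OPERATORS[op_id]
        out = [e + alpha * d for e, d in zip(out, direction)]
    return out

e = [1.0, 1.0, 1.0, 1.0]

# Compose two named operators in a single application.
shifted = apply_operators(e, [("sentiment_positive", 1.0), ("bias_remove", 0.5)])

# Reverse: replaying the same operators with negated alphas restores e.
restored = apply_operators(shifted, [("sentiment_positive", -1.0), ("bias_remove", -0.5)])
```

Under this model, "reversible" falls out of the arithmetic: every shift has an exact inverse, which is what makes a signed record of applied operators meaningful.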

One call. A signed receipt.

Steer any embedding with a named action — shift sentiment, remove bias, redirect intent — then read a cryptographic record of exactly what was applied. No retraining. No prompt engineering. Every intervention reversible.

POST /v1/embeddings/shift
curl https://api.bhala.ai/v1/embeddings/shift \
  -H "Authorization: Bearer $BHALA_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "I want to cancel my subscription",
    "lang": "en",
    "operators": [
      { "id": "sentiment_positive", "alpha": 1.0 }
    ]
  }'

# → {
#   "embedding": [ ... 128 floats ... ],
#   "operators_applied": [
#     { "id": "sentiment_positive", "alpha": 1.0,
#       "shift_norm": 0.431, "latency_ms": 23 }
#   ],
#   "model": "sci-v3",
#   "audit_id": "aud_01HXX..."
# }
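The same call from Python, as a sketch built only on the standard library. The endpoint, headers, and payload fields are copied from the curl example above; the network send is shown in a comment so the sketch runs without a key or connection:

```python
import json
import os
import urllib.request

# Endpoint as shown in the curl example above.
API_URL = "https://api.bhala.ai/v1/embeddings/shift"

def build_shift_request(text, lang, operators, api_key):
    """Build the POST request for /v1/embeddings/shift, mirroring the curl example."""
    body = json.dumps({
        "text": text,
        "lang": lang,
        "operators": operators,
    }).encode("utf-8")
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    return urllib.request.Request(API_URL, data=body, headers=headers, method="POST")

req = build_shift_request(
    "I want to cancel my subscription",
    "en",
    [{"id": "sentiment_positive", "alpha": 1.0}],
    os.environ.get("BHALA_KEY", ""),
)

# To send for real:
#   with urllib.request.urlopen(req) as resp:
#       result = json.load(resp)   # contains "embedding", "operators_applied", "audit_id"
```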

Join the private beta.

Backed by Techstars