Belief-Aware
Language Models

Current AI produces fluent text without any concept of whether that text is true. It predicts what is plausible, not what is real. We are building architectures that give AI a principled, continuous degree of belief in every assertion it makes.

BALM introduces a Belief Head — a parallel neural network that produces a degree of belief on a continuous scale from −1 to +1 for every token. Combined with Bayesian continual learning, this enables AI that genuinely knows what it knows, and knows what it doesn't.


Architecture — BALM
01
[Diagram: a shared Transformer backbone feeding two parallel output heads, the Language Head and the Belief Head]

The Belief Head

A parallel output head trained alongside the standard language head. One predicts the next token. The other produces a continuous degree of belief in the factual assertion that token represents.
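A minimal numpy sketch of the dual-head idea. The source does not specify the heads' form, so the sizes, weights, and the choice of a tanh-squashed linear belief head are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
D_MODEL, VOCAB = 16, 100                 # illustrative sizes, not BALM's

# Shared transformer hidden state for one token position (stand-in).
hidden = rng.standard_normal(D_MODEL)

# Language head: project the hidden state to vocabulary probabilities.
W_lang = 0.1 * rng.standard_normal((VOCAB, D_MODEL))
logits = W_lang @ hidden
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Belief head: a parallel projection squashed by tanh into [-1, +1].
w_belief = 0.1 * rng.standard_normal(D_MODEL)
belief = np.tanh(w_belief @ hidden)
```

The key property is that both heads read the same hidden state: the model commits to a token and to a degree of belief in the same forward pass.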

02
Microplastics are present in tap water       +0.95
Microplastics disrupt endocrine function     +0.55
Microplastics cause cancer in humans         +0.10

↑ Same topic, different assertions, different degrees of belief

Sentient Tokens

A new output primitive. Every token pairs its text with a degree of belief and an epistemic uncertainty measure. AI output becomes a stream of self-annotated assertions, not undifferentiated text.
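One way to sketch the primitive, assuming a hypothetical schema whose field names are not from the source:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SentientToken:
    """Text paired with a degree of belief and an epistemic uncertainty."""
    text: str
    belief: float        # degree of belief in [-1, +1]
    uncertainty: float   # epistemic uncertainty, e.g. a posterior spread

    def __post_init__(self):
        if not -1.0 <= self.belief <= 1.0:
            raise ValueError("belief must lie in [-1, +1]")

# The output stream is a sequence of self-annotated assertions.
stream = [
    SentientToken("Microplastics are present in tap water", +0.95, 0.05),
    SentientToken("Microplastics cause cancer in humans", +0.10, 0.40),
]
```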

03
Posterior (today's belief) ∝ Prior (yesterday's posterior) × Likelihood (new evidence)

↻ yesterday's posterior becomes today's prior

Bayesian Continual Learning

Model weights are probability distributions, not point estimates. New evidence updates the posterior without catastrophic forgetting. The model genuinely learns over time rather than remaining frozen at its training cutoff.
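The update cycle can be illustrated with a one-dimensional conjugate Gaussian update, a deliberately simplified stand-in for full Bayesian posteriors over network weights (observation values and noise below are invented for illustration):

```python
def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Conjugate Gaussian update for one weight: combine the prior with one
    noisy observation; yesterday's posterior becomes today's prior."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs / obs_var)
    return post_mean, post_var

mean, var = 0.0, 1.0                    # initial prior over a single weight
for evidence in [0.8, 1.1, 0.9]:        # stream of new evidence
    mean, var = gaussian_update(mean, var, evidence, obs_var=0.5)

# Each update shrinks the variance: the model grows more certain over time,
# and no single update erases what earlier evidence established.
```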


Architecture — SABER

Sentient Adaptive Belief-Energy Reasoner

BALM scores each claim independently but cannot enforce global coherence across interconnected claims. An energy-based reasoning layer can enforce constraint satisfaction but has no concept of epistemic confidence — it treats all constraints equally.

SABER unifies both. The Bayesian layer tells the system how confident it is in each thesis. The energy layer ensures the overall state is globally consistent with those confidences. High-belief claims become hard constraints. Low-belief claims become soft constraints the system can override if coherence requires it.

The two layers are not an ad-hoc combination — they are mathematically dual. A belief of +1 maps to zero energy (fully consistent). A belief of −1 maps to infinite energy (fully inconsistent). SABER exploits this duality to build systems that know what they believe and ensure those beliefs are internally consistent.
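One simple mapping consistent with the stated endpoints is E(b) = −log((1 + b) / 2); it is shown here as an illustration of the duality, not necessarily the function SABER uses:

```python
import math

def belief_to_energy(b: float) -> float:
    """Map a degree of belief in [-1, +1] to a non-negative energy.

    b = +1 -> energy 0 (fully consistent); b = -1 -> infinite energy
    (fully inconsistent). Illustrative choice of mapping.
    """
    p = (1.0 + b) / 2.0              # rescale belief to [0, 1]
    return math.inf if p == 0.0 else -math.log(p)
```

Under this mapping an undetermined belief of 0.0 costs log 2 of energy, so the optimizer can trade it away when global coherence demands it.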

Layer 1 — Bayesian (BALM)
  Transformer backbone → Belief Head
  Produces a degree of belief per assertion
  Outputs a conviction map

Bridge
  Convictions → constraint weights
  High belief = hard constraint
  Low belief = soft constraint

Layer 2 — Energy (EBRM)
  Evaluates the full state for coherence
  Gradient descent toward consistency
  Enforces global constraint satisfaction

→ Coherent, belief-weighted output
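The pipeline can be sketched end to end on two contradictory claims. The beliefs, targets, coupling strength, and quadratic energy below are all illustrative assumptions, not SABER's actual formulation:

```python
import numpy as np

beliefs = np.array([0.95, 0.10])        # Layer 1 conviction map
targets = np.array([1.0, -1.0])         # truth value each claim asserts
weights = (1.0 + beliefs) / 2.0         # bridge: belief -> constraint weight
lam = 0.5                               # coherence coupling between claims

state = np.zeros(2)                     # truth-value state to be made coherent
for _ in range(500):
    # Gradient of E = sum_i w_i (x_i - t_i)^2 + lam * (x_0 - x_1)^2
    grad = 2.0 * weights * (state - targets) + 2.0 * lam * (state - state[::-1])
    state -= 0.1 * grad

# The high-belief claim stays near its target (hard constraint); the
# low-belief claim is pulled off its target toward coherence (soft constraint).
```

The coherence term penalizes the two claims for disagreeing; because the constraint weights come from the conviction map, it is the low-belief claim that gives ground.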

Belief scale: −1.0 Refuted · 0.0 Undetermined · +1.0 Established
Research Domains
Search

Information Retrieval

Every assertion annotated with a degree of belief derived from source credibility hierarchies. Government databases, peer-reviewed journals, and social media posts carry different epistemic weight.
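A credibility hierarchy can be sketched as a lookup from source tier to a prior degree of belief. Both the categories and the numbers below are assumptions for illustration, not calibrated figures:

```python
# Illustrative priors on the [-1, +1] belief scale, ordered by epistemic weight.
SOURCE_PRIOR = {
    "government_database": 0.85,
    "peer_reviewed_journal": 0.75,
    "news_outlet": 0.40,
    "social_media_post": 0.10,
}

def annotate(assertion: str, source_type: str) -> tuple[str, float]:
    """Pair an assertion with a prior degree of belief from its source tier."""
    return assertion, SOURCE_PRIOR.get(source_type, 0.0)
```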

Medical

Clinical Evidence

The model understands evidence hierarchies — a meta-analysis from NEJM carries a different degree of belief than an influencer's health blog. Conflicting claims are flagged, not hidden.

Finance

Market Intelligence

Signal conviction maps naturally to degrees of belief. Macro forecasts grounded in consistent data register high belief. Contrarian theses on thin evidence register lower — transparently.


Research

The BALM Architecture

Dual-head design, Sentient Tokens, and why Bayesian weight distributions change how AI processes knowledge.

Belief-Aware AI Across Domains

How degrees of belief apply differently to search credibility, clinical evidence, and financial signal analysis.

Building with BALM

The API, the response schema, and what belief-annotated intelligence looks like in practice.