Why does life exist?
Not philosophically—physically. How does life emerge and persist in a universe that trends toward entropy?
Turns out the answer lives at the intersection of thermodynamics and information theory. Life isn't the opposite of entropy—it's a process that efficiently dissipates energy in order to preserve informational structure. Memory, learning, metabolic efficiency, adaptation—these aren't metaphors. They're thermodynamic necessities that drive self-perpetuating life.
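The thermodynamics-of-information claim has a concrete, well-established anchor: Landauer's principle, which says that erasing one bit of information must dissipate at least k·T·ln(2) joules of heat. As an illustrative calculation (this is textbook physics, not our model), here's the floor on the energy cost of forgetting:

```python
from math import log

# Landauer's principle: erasing one bit of information dissipates at
# least k_B * T * ln(2) joules of heat, where k_B is the Boltzmann
# constant and T is the temperature of the environment.
K_B = 1.380649e-23  # Boltzmann constant in J/K (exact, 2019 SI)

def landauer_limit_joules(bits: float, temperature_k: float = 300.0) -> float:
    """Minimum heat dissipated to erase `bits` of information at `temperature_k`."""
    return bits * K_B * temperature_k * log(2)

# Erasing one bit at roughly room temperature (300 K):
print(landauer_limit_joules(1))    # ≈ 2.87e-21 J
# Erasing a gigabyte (8e9 bits) is still only ~2.3e-11 J at the limit;
# real hardware dissipates many orders of magnitude more.
print(landauer_limit_joules(8e9))
```

The gap between this physical floor and what real systems spend is exactly the efficiency headroom the argument above points at.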
To prove the math, we built a toy model and ran a benchmark. The proof of concept was meant to show that an AI could consume and organize information the way physics and biology do: persistently, relevantly, and across timescales.
We ran the Oolong long-context benchmark using Gemini 2.5 and our toy model. It came back at 1M tokens with 100% coherence. We assumed this was an error: all models degrade to under 50% (worse than random) shortly after 100k tokens.
So we reran it. Same result. No coherence loss. We expanded to every public benchmark we could find—11 in total, the most demanding tests of long-context coherence, needle-in-a-haystack retrieval, and memory.
Contextful beat every single frontier model across all 11 benchmarks:
• ~70% fewer tokens
• 50–75% faster
• 33–80% lower cost
We've decided to stop everything else: drop our other startups and focus on this full time.
This will change how AI works. It provides a contextual-relevance layer that lets any LLM maintain long-term coherence, unlimited memory, and compound learning over time.
Contextful solves the most significant outstanding problems for AI today.
LLMs are Stateless
All of these problems stem from statelessness: context rot, memory retrieval, latency, token inefficiency, hallucinations, fickle reasoning, drift, and no learning.
We are building ContextOS, an operating system for stateful AI (where LLMs are like replaceable CPUs).
ContextOS:
• Sits between users and agents on the front end, and the LLMs, APIs, websites, and other ML models on the back end.
• Manages state, relevance, short/long-term memory, and input/output.
• Breaks prompts into shards, recontextualizes them, and routes context, data, and actions to different components based on predicted relevance and outcomes.
• Adapts to each user/agent and to each component by exploring and learning what works best in each context, and orchestrates them. This compounds.
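To make the shard-and-route idea above concrete, here is a minimal sketch. Everything in it (`Shard`, `Router`, the relevance threshold) is a hypothetical illustration of the routing pattern, not ContextOS internals:

```python
from dataclasses import dataclass, field

@dataclass
class Shard:
    text: str
    relevance: float  # predicted relevance score in [0, 1]

@dataclass
class Router:
    """Toy router: send high-relevance shards into the active LLM
    context; park the rest in long-term memory for later retrieval."""
    threshold: float = 0.5
    context: list = field(default_factory=list)
    long_term: list = field(default_factory=list)

    def route(self, shards):
        # Highest-relevance shards first, so the context window fills
        # with what matters most for the current turn.
        for s in sorted(shards, key=lambda s: s.relevance, reverse=True):
            dest = self.context if s.relevance >= self.threshold else self.long_term
            dest.append(s)

router = Router()
router.route([Shard("current user goal", 0.9), Shard("old chit-chat", 0.2)])
print([s.text for s in router.context])    # ['current user goal']
print([s.text for s in router.long_term])  # ['old chit-chat']
```

A real system would update the relevance scores from observed outcomes, which is where the "this compounds" claim comes in.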
We believe we have solved “relevance” through information theory: we copied what makes life’s inference so thermodynamically efficient and how it organizes itself across different hierarchies and time scales.
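The text doesn't disclose the actual relevance math, so as one illustration of what an information-theoretic relevance measure can look like, here is a surprisal-weighted (IDF-style) scorer: shared tokens contribute their information content, −log₂ P(token), estimated from document frequency over a reference corpus. This is a standard textbook construction, not Contextful's method:

```python
from collections import Counter
from math import log2

def relevance(query_tokens, doc_tokens, corpus_docs):
    """Average surprisal, -log2 P(token), of tokens shared between
    query and document, with P estimated as document frequency over
    a reference corpus. Rare tokens carry more information."""
    n = len(corpus_docs)
    df = Counter(tok for doc in corpus_docs for tok in set(doc))
    shared = set(query_tokens) & set(doc_tokens)
    if not shared:
        return 0.0
    return sum(-log2(df[t] / n) for t in shared if df[t] > 0) / len(shared)

corpus = [["life", "entropy"], ["life", "memory"], ["life", "entropy", "cost"]]
# "entropy" (in 2 of 3 docs) is more informative than "life" (in all 3):
print(relevance(["entropy"], ["entropy"], corpus))  # log2(3/2) ≈ 0.585
print(relevance(["life"], ["life"], corpus))        # log2(3/3) = 0.0
```

The point of the sketch: under an information-theoretic view, "relevant" means "reduces the most uncertainty per token spent", which connects directly to the token-efficiency numbers above.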