AI that works
for people.
A bootstrapped AI-native company on a mission to move the needle toward AI that serves long-term human benefit — through product design, advocacy, and open benchmarking.
AI is one of the most powerful tools ever placed in human hands. Like any tool, it can build or extract, restore or exploit.
We’re not racing to the frontier. We’re a small, bootstrapped team focused on a harder question: how do we ensure AI compounds human capability rather than erodes it? Our work spans product design, ethics advocacy, open benchmarking, and affordable hardware — all guided by a single standard: long-term human benefit.
Product design
Building AI-native hardware and software that is private, affordable, and genuinely useful at home.
Safety & advocacy
Pushing for AI development norms that prioritize long-term human benefit over short-term capability races.
Benchmarking
Open, honest evaluation of AI systems — so consumers and developers can make informed decisions.
Edge hardware
Championing affordable, capable hardware that brings private offline AI within reach of every household.
Meet BART
A warm, wise, private household companion who learns your family, challenges your thinking, and carries your mental load — getting sharper every year. Buy once. Own forever.
RetConText
A novel approach to context management for frontier AI models. The results speak for themselves.
Long-context reasoning,
finally solved.
Context rot is one of the most significant unsolved problems facing frontier AI today. As conversations and documents grow longer, model performance silently degrades — reasoning drifts, earlier context fades, coherence frays.
RetConText sits between you and any model — whether from a major provider or through your own API key — and applies a proprietary context architecture that measurably preserves reasoning quality across long sessions, at depths where other approaches quietly fall apart.
Verified benchmark data releasing soon
LANT
Long-term Aligned Non-Extractive Technology
AI products are benchmarked on capability. LANT benchmarks them on what actually matters — long-term human benefit, ethics, and safety. A unified, evidence-based framework that gives AI-native companies a rigorous platform to demonstrate their human-benefit orientation.
Most AI evaluation today asks a narrow question: how capable is this model? LANT asks the question that should come first — how does this model affect the people who use it, and the world around them?
The framework unifies established ethics methodologies with novel evaluation approaches, applying them to produce rigorous, demonstrable, evidence-based results. The goal is a benchmark that's auditable, reproducible, and meaningful — not a checklist, but a signal worth trusting.
Unified ethics frameworks
Synthesizes leading AI ethics methodologies — from established academic frameworks to novel LANT-developed criteria — into a single coherent evaluation lens.
Evidence-based results
Scores are grounded in demonstrable, reproducible outcomes — not self-reported claims or opaque internal assessments.
Open & verifiable
The benchmark methodology is public. Any company can submit. Any researcher can audit. Confidence in AI tools requires transparency in how they're evaluated.
Platform for proof
Gives AI-native companies a credible, independent platform to demonstrate their human-benefit orientation — not through marketing, but through evidence.
Dimension weights and methodology releasing with v1.0 specification
From the lab
The Cognitive Cost of Convenience: What Recent Research Says About AI and Our Minds
Studies from MIT, Microsoft, and Harvard are surfacing a troubling pattern: the more we offload thinking to AI, the less thinking we do. But the problem isn't AI — it's how AI is designed.
Read post →
The Open Source Question: Who AI Really Belongs To
Open source and edge-deployed AI can put powerful technology in anyone's hands. But access is not empowerment — and the risks are as real as the promise.
Read post →
Be first to know.
BART is coming. Be the first to know — and help shape what it becomes.