Manifesto Essay · 8 min read · April 2026

The world does not have a data problem.
It has a context problem.

An essay on why the next decade of enterprise AI will be won at the context layer — and why that layer is the only moat that compounds.

Every organization already knows what it needs to know. The answer to your most expensive question exists, right now, somewhere inside your systems — in a decision made last quarter, in a hypothesis tested and forgotten, in a Slack thread nobody searched, in a SQL query someone wrote six months ago and never documented.

The problem was never data. You have enough data. Every enterprise has spent the last fifteen years accumulating it. Data lakes. Data warehouses. Data catalogs. Data governance. Data operations. The word "data" has been bolted onto every noun in enterprise software, and yet, when your Head of Product asks on Monday morning why card activation dropped twelve percent, the answer still takes five days to arrive.

The problem is the gap. The gap between what your organization knows and what your leaders can access in the moment it matters. It's a gap that costs you millions, monthly, quietly, compounding. It's a gap nobody's balance sheet names. And it's the gap AntHill was built to close.


I

Intelligence has been solved. Context has not.

Something changed in 2024. Reasoning models crossed a threshold. GPT, Claude, Gemini — they became capable of multi-hop inference over messy, unstructured organizational data at a quality that genuinely matters. The extraction layer — the ability of AI to read, summarize, and reason — is effectively solved. More importantly, it is commoditizing. Every quarter the models get better, cheaper, and more available. By 2028, which model you use will matter about as much as which CPU matters to a backend engineer today. Which is to say: not much.

What none of these models have — and what no model provider can give them — is your organization's context. They don't know what your GTV metric means: specifically, which tables, which filters, which edge cases your analytics team agreed on last October. They don't know that the metric definition changed. They don't know about the engineering incident that explains last month's drop. They don't know who decided what, when, or why. They don't know the dozen little rules that live only in the heads of three people on your team, and they certainly don't know the rules that live only in a Slack thread that got auto-archived last month.

Intelligence without context is just confident guessing. And in enterprise decision-making, a confident wrong answer is worse than no answer.

This is the dirty secret of the AI-for-work wave. Every vendor is selling more intelligence. Nobody is selling more context. And so, in boardrooms and Slack channels across fintech, banking, and payments, a familiar pattern is playing out: an expensive AI tool gets rolled out, it produces plausible-sounding output, somebody on the analytics team catches a hallucination, trust collapses, and the tool becomes another license line nobody cancels but nobody uses.

The tool wasn't bad. The model wasn't bad. It just didn't have the context.


II

The colony is the metaphor. It's also the mechanism.

Ants don't have managers. They have context.

A single ant is not very intelligent. A colony is unreasonably powerful — capable of coordination, logistics, and adaptive problem-solving that would take a human team weeks to plan. The colony's intelligence isn't in any one ant. It's in the substrate they share: the pheromone trails that encode "this path works," the chemical memory that survives any individual ant's death, the distributed context that lets every worker act with full awareness of what the colony has already learned.

This is not a metaphor we chose for marketing. It's the shape of the product.

An enterprise is a colony. Every analyst, every PM, every engineer generates context every day — Slack messages, JIRA tickets, Confluence pages, SQL queries, merged PRs, on-call notes, decisions in all-hands meetings. Most of it is unstructured. All of it is real. None of it connects. Every time a new question arrives, somebody has to reconstruct the relevant subset of that context from scratch — by hand, by memory, by searching five tools, by asking three people, by getting lucky.

Thirty to forty times a week, on a twenty-person analytics team.

The observation that became AntHill

The answer to the question was already in the organization before the analyst started looking. It was in a JIRA ticket closed three weeks ago. It was in an engineering on-call thread from September. It was in a query someone wrote six months ago. The analyst just had no way to find it.

AntHill's thesis is simple, and it's the same thesis evolution ran on the ant colony a hundred million years ago: the intelligence isn't in the workers. It's in the substrate the workers share. Build the substrate, and the intelligence compounds. Don't build it, and every question starts from zero.


III

Why now, and why the window is narrow.

There is a specific window — roughly twenty-four to thirty months — in which a category-defining, independent, multi-model context layer can be built. That window is open now. It was not open in 2023, and it will not be open in 2029. This is not a coincidence. It is a convergence of three forces that had to arrive at the same time.

The first force: models got good enough. In 2022, asking an LLM to do multi-hop reasoning over a messy corpus of internal documents was a research project, not a product. In 2024 it became possible. In 2025 it became reliable. In 2026 it became table stakes. The infrastructure the context layer sits on top of — the model itself — finally works.

The second force: enterprise AI budgets became real, and accountable. Boards approved AI spending in 2024 on the promise of productivity. In 2026 they are asking for ROI. "Dashboards about dashboards" does not survive the next budget review. Measurable compression of high-cost workflows does. The first class of AI products that will survive the reckoning is the class that changes how much a company pays its analysts to do archaeology.

The third force, and the most important: frontier labs have not yet moved up the stack. OpenAI, Anthropic, and Google will eventually build context layers into their enterprise offerings. They will do it because it is an obvious product extension and because their enterprise customers will demand it. But they haven't yet. The window for an independent, neutral, multi-model context layer — one that is not tied to any single model provider's commercial interests — is the period between now and when the frontier labs ship that extension. Our estimate: twenty-four months from today.

After that, the category is harder to start. Before that, it wasn't possible to start. Which is why we are building it now.


IV

What we believe, and what we refuse to build.

Every company has a theory of its product. Most companies are shy about stating it. We aren't going to be. AntHill is built on three beliefs that shape every product decision, every engineering tradeoff, and every sentence we write.

Belief 01 · Activation over accumulation

The most valuable intelligence in any organization is already inside it — unstructured, disconnected, dormant. The job is activation, not accumulation. We are not going to help you collect more data. You have enough data. We are going to help your existing knowledge become queryable, navigable, decision-ready.

Belief 02 · Analysts deserve better

Decision makers should not need analysts to access truth. Analysts should not spend seventy percent of their time doing context retrieval. The promise of AntHill is not to replace your analytics team. It is to elevate them — into strategy, judgment, and work only humans can do. We consider "replaces analysts" a feature request we will refuse.

Belief 03 · Speed without grounding is dangerous

The goal is never fast answers. The goal is fast, accurate, contextually grounded answers. Those are not the same thing. We will never optimize one at the cost of the other. We were tempted once, early on, to let a plausible inference through without full citation. It was caught before it reached a decision. We won't be in that position again.

And because a product is defined as much by what it refuses to do as what it does, here are three things AntHill will never ship. Not in the next version. Not under customer pressure. Not at all.

We will not sell speed without accuracy. A fast wrong answer is worse than a slow right one. Every AntHill output is grounded in indexed context or it is marked as insufficient. There is no third option.

We will not replace human judgment. AntHill gives decision makers direction — not decisions. The compass points north. The leader still chooses to walk. We enhance judgment; we never substitute it. A version of AntHill where agents act autonomously on consequential decisions is not a more advanced product. It is a more dangerous one.

We will not add complexity in the name of features. Every addition to AntHill must make the decision maker's experience simpler, faster, and more certain. Complexity is the enemy of the compass. If a new capability makes the core workflow harder to navigate, it doesn't ship. Roadmap velocity is not a metric we optimize.


V

What changes if this wins.

Imagine a Monday morning, eighteen months from now, at a mid-market fintech we haven't signed yet.

The Head of Product opens Slack. Card activation is down twelve percent over the weekend. She types the question into AntHill: why? Not because she's too senior to talk to her analytics team. Because it's 9:14 AM and her team is still on coffee.

AntHill decomposes the question into twelve ranked hypotheses. It pulls the context graph. It finds a KYC step-3 validation bug, introduced in a change the engineering team merged two weeks ago, buried in a JIRA ticket nobody on the product side saw. It validates against the warehouse: ninety-two thousand users affected, $2.1M in GTV exposure. It writes the decision trace to Confluence, cited and permissioned. By 9:30 the Head of Product is on the phone with engineering. By noon the bug is rolled back. By end of day, the bleeding has stopped.

Two weeks of compounding cost, recovered in six hours.

That is one question. On one Monday. At one company. Now multiply it by every diagnostic question that lands on an analyst's desk, thirty to forty times a week, across every mid-market fintech in the US and UK. Then multiply that by the year. Then multiply it by the five years over which the context graph compounds, becoming not just an answer system but the organization's permanent memory — the substrate that survives every analyst who joins and leaves, every PM who rotates, every reorg that scatters a team's tribal knowledge.

That is what we are building. That is what wins if we are right.

Ants don't have managers. They have context. With context, they move mountains.

So do your people. When they have AntHill.


Coda

An invitation, filtered.

We are onboarding a small number of US and UK design partners in 2026. If you run analytics, product, or data at a mid-market fintech and the five-day diagnostic cycle is something you recognize on sight — not as a metaphor, but as your Monday morning — we would like to talk.

We will say no to most of the people who reach out. That is not because we have too many options. It is because the context graph compounds on the quality of the relationship we build with the first ten companies that use it, and we would rather build that relationship right than widely. If you're the right company, you will recognize yourself on the homepage. If you're the right person inside that company, you will recognize yourself in this essay.

If both are true, the button is below. The first ten are where everything starts.

Decisions at thinking speed.
The first ten begin now.

The context graph compounds. By month six, a competitor starting fresh at the same customer starts at zero. The earlier you start, the bigger the moat.

Request design partner access
Read the product architecture