The Sovereign Question of Artificial Intelligence
On what it actually means to build AI in Canada, by Canadians.
Canada is having a quiet conversation with itself about AI sovereignty.
Most of the conversation is happening in the wrong rooms. In government briefings, in policy papers, in panel discussions at events where the word "sovereignty" is said so often it starts to lose its meaning. What is almost never asked is the simpler question underneath all of it, and I want to ask it here.
What does sovereignty actually mean when the thing in question is intelligence itself?
I have been thinking about this for a while, from an unusual position. I live on Cape Breton Island, at the eastern edge of the country, far from the Toronto-Montreal-Vancouver corridor where Canadian AI is usually discussed. I run an AI research company here called Synexiom Labs. My brother and I have spent the last year building something we call the Wisdom Architecture — a reasoning system designed around the principle that an intelligent machine should know what it does not know. We call the product Cortexiom. We call the research program RAW Intelligence — Reflective, Aware, Wise.
I'll come back to what we're building. But first, the bigger question. Because if Canada gets the sovereign question wrong, the thing I'm building, and the thing anyone here is building, will not matter.
Three layers, not one
When most people say "sovereign AI," they mean one specific thing: compute infrastructure.
Data centres owned by Canadians, on Canadian land, serving Canadian customers. That is the version of sovereignty that shows up in federal funding announcements. It is the version driving hundreds of millions of dollars of public and private capital right now.
It is also, by itself, insufficient.
Sovereignty in AI is not one thing. It is at least three things, stacked on top of each other, each of which can be sovereign or not independently of the rest. You can have one without the others, and when you do, you have something that looks like sovereignty from a distance but is not.
The first layer is infrastructure sovereignty. The physical compute. The GPUs, the data centres, the energy that powers them, the networks that connect them. This is the layer most of the current conversation lives in.
The second layer is data sovereignty. Where the training data comes from, who has rights to it, where it lives, who can audit it, whether the people whose lives and languages and knowledge are reflected in it had any say in its use. This is a harder layer to see because data flows invisibly, but it matters more than the physical layer for anything that shapes public reasoning.
The third layer — and this is the one almost nobody is talking about — is epistemic sovereignty. The values and assumptions baked into how the AI reasons. What it treats as certain. What it treats as uncertain. What it considers a good answer. Whose worldview is embedded in its training signal. Whether it speaks in the cadence of a particular culture and geography or flattens everything into the standard voice of the internet.
All three layers matter. They matter in a specific order. Without infrastructure sovereignty you are a tenant on someone else's land. Without data sovereignty you are renting access to your own history. Without epistemic sovereignty you are a country whose thinking tools were designed to think like someone else.
Most of the current Canadian AI sovereignty conversation is focused entirely on the first layer. That is like worrying about who owns the printing press while the books it prints are written in a foreign language.
What we already owe
Before we go further, a thing worth saying plainly.
The last fifteen years of AI were built by hyperscalers. Amazon, Microsoft, Google, Meta, and a handful of others built compute infrastructure at a scale no government had the appetite to attempt. They trained the models that this entire conversation sits on top of. They made inference cheap enough that a small company on Cape Breton Island can run a research lab. Much of what I build depends on infrastructure they made possible. I am not going to pretend otherwise.
The question is not whether hyperscalers should exist — they do, and they have built something extraordinary. The question is whether Canada's AI future is only hyperscaler-shaped, or whether something else is allowed to exist alongside it.
Those are different questions. The first has an obvious answer. The second is the one Canada has not quite asked yet.
The layer that is missing
A hyperscaler data centre, wherever it lands, is a remarkable thing. It is also a specific thing. It is compute optimized for serving models designed elsewhere, trained on data sourced globally, reasoning with values shaped by markets that are not primarily Canadian. That is not a criticism. It is a description. A hyperscaler data centre is what it is — a marvel, and not sovereignty.
What Canada does not yet have, in any meaningful quantity, is a second kind of AI infrastructure. Canadian-owned compute running Canadian-designed models trained on Canadian-stewarded data, reflecting Canadian reasoning at all three layers. That kind of infrastructure exists nowhere yet, not really — not in the Toronto corridor, not in Montreal, not in the Maritimes. A few pieces exist. Nobody has put them together.
This is the gap. And the gap is interesting, because it does not compete with hyperscalers. It complements them. The hyperscalers handle global-scale general-purpose inference. Sovereign infrastructure handles the things that need to be sovereign: public sector reasoning, regulated industries, regional knowledge systems, Indigenous data, decisions where the values embedded in the model are part of the product.
Canada is going to have both. The question is whether the second kind actually gets built, or whether the word "sovereignty" becomes a marketing layer on top of the first kind.
The by-Canadians-for-Canadians test
There is a simple test for whether a Canadian AI project is sovereign, and I want to offer it here because I have not seen it stated plainly anywhere else.
For any AI system built in Canada, ask four questions:
- Does the infrastructure that runs it stay in Canada, under Canadian control, over the long term?
- Does the data it was trained on, and the data it generates, remain under Canadian stewardship, with clear obligations to the communities whose information it reflects?
- Do the values encoded in how it reasons come from a Canadian intellectual tradition — or at minimum, are those values legible and contestable by Canadians?
- If the project disappeared tomorrow, would Canada retain anything — infrastructure, knowledge, capability — that makes the next one easier to build?
Most Canadian AI projects, held to these four questions honestly, would fail at least one. Many would fail three. A foreign-owned data centre on Canadian soil fails all four. A Canadian startup running on American clouds with training data scraped globally fails three. A Canadian research lab publishing to open weights that are then fine-tuned by American labs fails two.
I am not sure my own company, Synexiom Labs, passes all four today. We run inference on third-party cloud infrastructure. We use a Canadian reasoning architecture, trained on our own framework, serving Canadian-first use cases — but the compute underneath us is not ours yet. This is the gap I am trying to close. It is the gap the country is trying to close. It is worth being honest that almost no one has closed it yet.
This is the strange thing about sovereignty. It is always already partial. It is a thing you are always in the middle of building, never finished — so every answer is also a new question.
Sovereign, but not entirely. Not entirely, but genuinely. Both at once. This is not a failure of the concept. It is the shape of the concept.
Where Cape Breton comes in
I am writing this from Cape Breton Island because this is where I live, but also because Cape Breton is one of the places in Canada where the sovereign AI question is about to become a real choice rather than a theoretical one.
The Strait of Canso corridor has industrial-scale energy capacity that was originally built for a different century's industry. Deepwater port access. Maritime air that reduces cooling costs. Industrial land available. World-class wind generation potential. These are real, physical, rare assets — the kind that attract serious compute infrastructure investment. Some of that investment will be hyperscaler-led, and some of it should be. Hyperscaler compute on Cape Breton would be a meaningful economic development win for a region that needs it.
But if hyperscaler compute is the only thing that gets built here, the island will have traded one kind of valuable asset for another kind of valuable outcome, and the sovereign layer — the Canadian-owned, Canadian-designed, Canadian-shaped layer — will have missed its moment.
The story we want to tell ourselves about this region is not extraction and not protest. It is a more interesting story. It is the story of a place that, in the same moment, hosts world-class hyperscaler compute serving global AI and builds sovereign Canadian compute running Canadian reasoning. Both. At the same time. Not competing, complementary. The extraordinary and the specific, living next to each other on the same stretch of coast.
This is what we are working on, in our small way. Synexiom Labs is building the reasoning layer — the Wisdom Architecture, the Cortexiom product. The infrastructure layer is a larger question, one that no single company can answer alone. It will take a consortium of regional economic development partners, academic institutions, Indigenous communities, energy operators, and public and private capital working together with a shared conviction that the second kind of AI infrastructure is worth building. Those conversations are beginning. None of this is built yet. All of it is buildable.
What we are actually building
I started Synexiom Labs because I thought current AI systems had a specific problem: they are confident when they should be uncertain. They pattern-match the most plausible answer and deliver it with the same tone of voice whether the answer is right or catastrophically wrong. For a lot of use cases, this is fine. For the use cases that actually matter — medical decisions, ecological forecasting, policy, capital allocation, any decision where being confidently wrong is worse than being honestly uncertain — it is dangerous.
The Wisdom Architecture is our attempt to build AI that reasons the way a careful human actually reasons. It has five layers. It observes before it concludes. It generates hypotheses rather than answers. It checks itself for contradictions. It calibrates how confident it should be. It reflects on its own reasoning before committing. In our benchmarks, this architecture outperforms standard approaches on reasoning-heavy tasks, including exceeding PhD-level baselines on graduate biology questions. A patent is pending. A research collaboration with a Canadian university is underway.
Cortexiom is the first product expression of that architecture. It is in market today. People are using it to think through hard problems — strategic questions, ethical dilemmas, decisions where the answer is not clear and where a normal AI's confident reply would be unhelpful or wrong.
This is the first contribution we are making to what sovereign AI from Canada could look like — the reasoning layer. Other people, in other places, are working on other layers. None of us can build the whole thing alone. All of us are building the same thing, if the thing actually gets built.
The choice
Canada is going to build AI infrastructure in the next five years. Hundreds of millions of dollars are already in motion. The question is not whether; it is what kind, and in what proportions.
The easy path is to let the infrastructure conversation be led entirely by the parties with the most capital and the most momentum — which will produce excellent hyperscaler compute on Canadian soil, and call that sovereignty, and leave the other two layers unanswered. This path will produce buildings, jobs, press releases, real economic value. It will not produce sovereignty in any meaningful sense. A decade from now, Canada will have data centres on its soil that it does not control, serving models it did not build, trained on data it did not steward, reasoning with values it did not shape.
The harder path is to build across all three layers. Canadian infrastructure running Canadian reasoning on Canadian data, alongside the hyperscaler layer, not instead of it. Not everywhere at once. Not perfectly. Starting small, in specific regions, with specific projects, by specific people who understand that sovereignty is not a marketing word but a multi-decade commitment to owning what you build.
Sovereignty is always partial. It is always a thing you are in the middle of. That is not a reason not to begin. It is the only way anything ever begins.
I am on the edge of the country, building one small piece of one layer of this. I am writing this because I believe Canada has about eighteen months to decide how it is going to answer the sovereign question, and because the harder path is still open but will not remain so for long.
We can build this. The question is whether we will.
Meghraj Solanki is the founder of Synexiom Labs and writes at The Grey Analogue. He is based in Cape Breton Island, Nova Scotia.