Owning the Future: A Conversation with Peter Rabley on AI, Digital Infrastructure, and the Battle for Trust

Peter Rabley

Artificial intelligence is no longer a distant prospect in the geospatial world — it’s here, reshaping how we collect, analyze, and act on location-based information. From accelerating disaster response to optimizing supply chains, the potential is immense. But as with any powerful technology, the benefits come with real risks: misinformation, biased algorithms, and data infrastructures that aren’t built for transparency or resilience.  

In this interview, Peter Rabley, CEO of the Open Geospatial Consortium (OGC), offers a candid perspective on AI’s role in the geospatial ecosystem, exploring not only the opportunities it presents but also the foundational work required to ensure these systems serve the public good.

Peter’s premise was provocative: AI’s potential to drive progress in geospatial is enormous — but so are the risks if we don’t address issues of trust, provenance, and sovereignty in the underlying data and infrastructure. That’s where our conversation began.  

In your GeoIgnite 2025 keynote, you asked whether AI will “eat geospatial.” What does that mean?  

It means AI has the potential to absorb geospatial into something you don’t even recognize. Location data could just become an invisible feed into massive models — with the value and decision-making power moving to whoever controls those models rather than the people who curate and govern the data.  

If that happens, the role of geospatial professionals in ensuring accuracy, context, and quality could be diminished or lost. And if AI is making high-stakes decisions — in disaster response, infrastructure planning, climate policy — on faulty or incomplete data, the risks are huge.  

So how do we make sure AI builds on geospatial rather than consuming it?  

We need to make provenance, trust, and transparency non-negotiable. You need to know where data came from, how it’s been processed, and whether it’s fit for purpose. Without that, AI can produce outputs that look credible but are wrong — and without traceability, you can’t spot or fix the problem.  

You also talked about the idea of “digital public goods” in your keynote. What do you mean by that?  

A digital public good is something everyone can use, that’s openly available, and that serves the public interest — like open mapping data, or shared climate models. These are the building blocks for innovation, but they’re also safeguards. If core datasets live entirely inside private platforms, they can disappear, change, or become unaffordable.  

That’s why I also talk about Digital Public Infrastructure, or DPI — the foundational systems that society relies on, like digital ID, digital payments, and secure data exchange. India’s Aadhaar ID program and UPI payments network are great examples. But I believe location data and geospatial platforms belong in that same category. Without them, a lot of modern services simply wouldn’t function.  

Some people might say that’s overreaching — that maps and spatial data are just tools, not infrastructure.  

I’d disagree. Think about it: no e-commerce, no navigation, no disaster response, no urban planning works without reliable location data. Yet in many countries, that data is fragmented, locked away, or dependent on private providers. That’s not sustainable.  

AI makes this even more critical. Models need large, diverse, high-quality datasets to be useful. If your location data isn’t open, trusted, and well-governed, you’re handing over a core element of your decision-making to someone else. That’s a sovereignty issue.  

You’ve mentioned sovereignty a few times — why is it so central to this discussion?  

Because whoever controls the data controls the decisions. If a country doesn’t own or govern its core datasets, then AI-driven planning, logistics, or response systems are ultimately beholden to the data’s owner. That could be a foreign government, or a handful of private companies. That’s not a comfortable place to be if you care about resilience, security, or long-term economic independence.  

Let’s get specific. How is OGC helping on these fronts?  

We provide the standards and the neutral space where governments, companies, and researchers can work together without commercial or political agendas getting in the way. We make data interoperable, discoverable, and usable across systems and borders.  

In the AI space, that means working on standards for data provenance, semantic interoperability, and APIs that make it easier to trace and trust outputs. It means making sure location data is central and visible in AI pipelines, not just an anonymous feed.  
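As a concrete illustration of what machine-readable provenance might look like, here is a minimal sketch in Python. The `provenance` property and its field names are hypothetical, chosen for illustration only; they are not drawn from any published OGC standard:

```python
import json

# A hypothetical GeoJSON feature carrying provenance metadata.
# The "provenance" property and its field names are illustrative,
# not part of any published OGC specification.
feature_json = """
{
  "type": "Feature",
  "geometry": {"type": "Point", "coordinates": [-122.4, 37.8]},
  "properties": {
    "name": "Flood gauge 12",
    "provenance": {
      "source": "city-sensor-network",
      "collected": "2025-06-01T14:00:00Z",
      "processing": ["deduplicated", "datum-shifted to WGS84"],
      "license": "CC-BY-4.0"
    }
  }
}
"""

def check_provenance(feature: dict) -> list[str]:
    """Return the provenance fields missing from a feature (empty if complete)."""
    required = ["source", "collected", "processing", "license"]
    prov = feature.get("properties", {}).get("provenance", {})
    return [field for field in required if field not in prov]

feature = json.loads(feature_json)
print("missing provenance fields:", check_provenance(feature))
```

The point of a check like this is that it can sit inside an AI data pipeline: features whose origin, processing history, or license cannot be traced are flagged before they feed a model, rather than discovered after a bad output.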

For people who aren’t deep in the tech, how does this affect them?  

It affects them every time they use an app to find the fastest route to the hospital, when a city decides where to build new housing, or when emergency services respond to a flood. If the data feeding those decisions is wrong, incomplete, or manipulated — and no one knows — the outcome can be life-threatening or wasteful. AI doesn’t remove that risk; it magnifies it.  

How do you balance the need for open data with privacy concerns, especially when AI can cross-analyze datasets in ways people don’t expect? 

That’s one of the biggest tensions we face. Openness doesn’t mean dumping sensitive data online. It means building governance frameworks, access controls, and anonymization into the infrastructure from the start. AI makes this even more important, because the ability to link datasets is so powerful — and potentially dangerous — if it’s not done ethically.  

What role do you see for the private sector here? Is this just a government responsibility?  

No, in fact, the private sector often moves faster and brings innovation that governments can’t. But public interest must be safeguarded. That means strong public–private collaboration, shared standards, and agreements on how core datasets are maintained and governed. We’ve seen great models of this in satellite data sharing for disaster response, where companies and governments both win.  

If you could give one piece of advice to policymakers about AI and geospatial, what would it be?  

Don’t treat location data as an afterthought in your AI strategy. If you’re investing in AI for public services, make sure your location data infrastructure is robust, open, and well-governed — otherwise, you’re building on sand.  

So in a sentence, what’s the main takeaway from your keynote?  

Don’t let AI eat geospatial without a fight. We need open, trusted, and sovereign location data so AI serves the public good, not just private or opaque interests.  

⸻  

The conversation at GeoIgnite was just the start.  

The questions Peter raised — about trust, sovereignty, and the role of open, well-governed location data in the age of AI — go far beyond the geospatial sector. They touch national policy, economic competitiveness, public safety, and innovation.  

OGC will continue to convene these discussions with governments, industry, and researchers in the months ahead, including at the OGC iDays in Frankfurt, Germany (December 08-10, 2025). Whether you are a policymaker shaping AI strategies, a technologist building new services, or a city leader making data-driven decisions, your voice is critical.  

We invite you to join the dialogue — connect with OGC, participate in our working groups, and help ensure that AI and geospatial evolve together in ways that are open, trusted, and serve the public good. 

David Legris

David is a GIS Technician living and working in Prince George, British Columbia. After spending five years in Thailand as an English teacher, David returned to Canada, where he recently completed the GIS Advanced Diploma program at BCIT.
