On International Day of Women and Girls in Science, Ask Who Controls AI

Image credit: AI-generated image

On paper, the problem looks simple: women remain underrepresented in science.

Across much of the world, women are not absent from science, but the appearance of progress is largely confined to entry points. Globally, women now represent around half of all university graduates, including in science-related fields. That shift happened decades ago. Since the early 2000s, women’s participation in tertiary education has risen steadily, and in life sciences, health, and environmental disciplines, women have long been the majority at the student level.

What has barely moved is everything that follows.

According to the latest consolidated data from the UNESCO Institute for Statistics, women represent roughly 33% of researchers worldwide, a figure that has shifted only marginally in the past decade.

If the system were merely slow to adjust, two decades of near-parity at entry would have produced visible change where authority concentrates. It hasn’t. Instead, women stall or step away at the stages where real influence begins — where funding decisions are made, where research agendas are shaped, where authority becomes durable.

That is the part International Day of Women and Girls in Science rarely dwells on.

February 11, 2026, arrives with a theme that links artificial intelligence, social science, STEM, and finance to inclusion. The framing acknowledges that science is no longer shaped by education pipelines alone. Power now sits in systems — algorithms, funding structures, evaluation frameworks, and governance decisions.

AI is the pressure point in that equation.

As AI becomes embedded in how research is evaluated, funded, and scaled, it reshapes where authority sits in science. The risk is not exclusion from AI tools themselves, but exclusion from control over how those tools are designed, governed, and deployed.

AI in Science Concentrates Power

In theory, AI lowers barriers: faster analysis, automated discovery, scalable insight. In practice, it centralizes control.

Building and deploying AI in science requires access to compute, data pipelines, institutional backing, and capital. These resources are concentrated in a relatively small set of organizations and leadership circles where women remain underrepresented.

The result is uneven authority.

Who decides which datasets are “good enough” to train models?
Who defines performance metrics?
Who has the mandate to deploy systems at scale and to stop them when they fail?

These decisions are rarely transparent and rarely neutral.

Bias in AI Is Measurable

Across sectors, empirical studies have shown that algorithmic systems reproduce historical bias when trained on skewed data or deployed without oversight. Hiring algorithms, credit scoring systems, medical diagnostics, and risk models have all demonstrated this pattern.

Science carries the same risk, and then some.

Credibility in science already runs through gatekeepers such as peer reviewers, citation systems, grant panels, and reputational networks. These are human systems shaped by judgment and incentives. When AI tools are layered onto them — ranking papers, flagging “high-impact” work, or triaging grant proposals — any existing bias does not disappear. It becomes embedded in software and scaled.

Yet AI governance in science remains largely technical, focused on model accuracy rather than institutional impact. The question of who benefits is often deferred.

Where the Gender Gap Matters Most

As noted earlier, women make up roughly 33% of researchers globally. The figure matters most at the levels where authority concentrates. In engineering and technology fields, representation is significantly lower (often between 20 and 28 percent, depending on region), and in computer science specifically it is frequently below 25 percent globally. The steepest decline occurs at senior and decision-making levels.

These are the levels where AI systems in science are designed, governed, and approved.

So, while women and girls are encouraged to “participate” in AI-enabled science, they remain structurally underrepresented in the roles that determine how AI reshapes scientific work itself.

The Question February 11 Should Be Posing

The inclusion of social science and finance in the 2026 theme points to where change actually happens.

Social scientists have spent decades documenting how careers in science are shaped not just by talent, but by incentives, evaluation rules, and informal networks of credibility. Who gets cited, who gets invited, who gets funded. These patterns are measurable and persistent, including in institutions committed to equity.

The lever that determines whether they shift is funding.

If inclusion does not show up in grant criteria, procurement requirements, and investment decisions, it does not scale. AI will optimize for whatever the system already rewards. It will rank what is already visible. It will fund what already looks credible.

International Day of Women and Girls in Science is often framed as a moment to inspire participation.

Participation is not the constraint.

If AI is becoming the operating system of science, the real question is whether institutions are prepared to change who holds authority within it, or whether they are simply automating the same hierarchy.

Until that changes, February 11 will remain symbolic, and the systems shaping science will continue operating as they always have.

Anusuya Datta

Anusuya is a writer based in the Canadian Prairies with a keen interest in connecting technology to sustainability and social causes. Her writing explores how geospatial data, Earth Observation, and AI are reshaping the way we understand and manage our world. Disclosure: Anusuya currently works as a Content Strategist at EarthDaily. This article is written in an individual capacity, and the views expressed do not necessarily reflect those of EarthDaily or its affiliates.