AI, Power, and Responsibility: Understanding the Stakes in a Changing World

The International Institute for Middle East and Balkan Studies (IFIMES)[1], based in Ljubljana, Slovenia, is renowned for its regular analysis of global developments, with a particular focus on the Middle East, the Balkans, and other significant regions worldwide. A notable contribution comes from the Rt Hon Geoffrey Hoon, Former UK Secretary of State for Defence. In his article “AI, Power, and Responsibility: Understanding the Stakes in a Changing World,” Hoon explores the artificial intelligence revolution and the need for its regulation.

The Rt Hon Geoffrey Hoon, Former UK Secretary of State for Defence

 

AI, Power, and Responsibility: Understanding the Stakes in a Changing World

It’s a privilege to be here[2] with defence professionals, technologists, scholars, and strategic thinkers—each committed to shaping a secure and resilient global future. I extend my sincere thanks to Professor Anis H. Bajrektarevic, to IFIMES, and to the Global Academy for the Geo-Politico-Technological Futures for organising this vital series of discussions on artificial intelligence and robotics—topics that increasingly define the trajectory of our global civilisation. At stake is both the understanding and the implications of Artificial Intelligence.

Having spent the better part of my career immersed in questions of national defence, strategy, and governance, I have often been confronted with new technologies that force us to recalibrate both our expectations and our responsibilities. From the Cold War's nuclear stand-off to today’s digital battlefield, each generation faces its own version of transformative risk. In our time, that risk—and that promise—carries a name: Artificial Intelligence.

I. The Strategic Importance of Understanding AI

When I first entered public service, technology was already beginning to reshape military doctrine. Precision weaponry, network-centric warfare, unmanned systems—these were harbingers of a revolution in defence affairs. But Artificial Intelligence is different. It is not merely another instrument in our arsenal; it is a force multiplier, a decision-maker, and potentially, a policy-shaper in its own right.

AI systems can now interpret satellite images with greater accuracy than human analysts. They can autonomously monitor cyber threats, control drone swarms, and conduct real-time logistics coordination with minimal human oversight. In the future, they may be entrusted with decisions that bear lethal consequences. The strategic implications are profound, and they extend well beyond the battlefield.

But AI is not limited to defence. It is woven into our healthcare systems, financial institutions, transport networks, and increasingly, our democratic processes. In short, the domain of AI is the domain of governance itself.

During my time in the UK Parliament, particularly as Secretary of State for Defence from 1999 to 2005, I had the responsibility of overseeing some of the nation's most significant investments in military research and emerging technologies. I worked closely with our scientific communities, defence contractors, and NATO partners to integrate innovation into national security policy. Even then, we saw the early seeds of what AI could become—tools that could enhance decision-making, protect lives, and redefine modern warfare. Today, that frontier has expanded dramatically, making it even more vital that we understand the forces now shaping our world.

II. The Necessity of Ethical and Democratic Oversight

Let me be blunt: technology does not arrive with a built-in moral compass. AI is created by people—trained on data often riddled with bias, shaped by the commercial priorities of powerful companies, and deployed in contexts where transparency is elusive.

This demands rigorous ethical scrutiny. What are the principles that should guide the development and deployment of AI? Who decides when an AI system is sufficiently trustworthy to make decisions about a person’s freedom, livelihood, or security? These are not questions for engineers alone. They require the active engagement of policymakers, ethicists, civil society, and above all—citizens.

As a former Defence Secretary, I know that in moments of crisis, decisions are made at pace. But speed must never come at the expense of accountability. Any military or governmental use of AI must be embedded in clear lines of democratic control, oversight, and public legitimacy.

III. The Geopolitical Landscape

We must also view AI through the lens of geopolitics. The global race to develop and dominate AI is often framed as a contest between great powers. The United States and China are investing vast sums in AI research, while Europe seeks to carve out a third way—one rooted in human rights, privacy, and regulatory rigour.

This is not just a technological competition; it is a clash of values. If we believe in democracy, in individual liberty, and in the rule of law, we must build AI systems that reflect and reinforce those values. That means resisting the temptation to adopt opaque surveillance models in the name of efficiency or control.

It also means forging international agreements to prevent the weaponisation of AI in ways that could destabilise global security. Just as we established treaties around nuclear arms, we must now consider similar frameworks for autonomous weapons, algorithmic warfare, and the misuse of AI in hybrid or information conflicts.

IV. AI and the Future of Work

Of course, the AI revolution will not remain confined to governments or tech giants. It is already changing the way ordinary people live and work. Some fear that AI will replace jobs wholesale—drivers, accountants, even journalists. And there is truth in that concern. But we should not be paralysed by fear.

Instead, we must invest in education and retraining, so that the workforce of the future is equipped to work alongside AI—not be displaced by it. If AI can automate tedious tasks, then human beings can focus on what we do best: creative thinking, empathy, leadership, and moral judgment.

But this transition must be managed. Governments have a duty to anticipate the social impact of AI and to cushion the blow for those who may be left behind. A just transition, not an abrupt upheaval, is the imperative of our time.

V. A Call to Informed Citizenship

Ultimately, the “Understanding of AI” must be a societal project. It is not enough for a handful of experts or regulators to understand these systems. Every citizen deserves to know how decisions that affect them are being made. Every student should learn not just how to use AI, but how it works—and why that matters.

This is why this programme is so important. By fostering public literacy in AI, we strengthen the democratic foundations of our society. We make ourselves more resilient to manipulation, to inequality, to authoritarian misuse.

In the years ahead, we will need more than innovation. We will need wisdom—collective wisdom. We will need leadership that understands that the choices we make now will echo for generations. And we will need vigilance. Because the stakes are high.

Conclusion

Let me close with a reflection. When I served in government, we faced threats that were visible, tangible, and in some cases, predictable. AI is different. It is diffuse, embedded in code, and often acts invisibly. But its effects will be anything but hidden.

As I explore in more detail in my recent book, See How They Run, leadership today demands both foresight and humility. We have a choice. We can treat AI as an opaque force that simply happens to us. Or we can understand it, shape it, and guide it in accordance with our deepest values. 

Let this be the moment we choose understanding over ignorance, governance over chaos, and humanity over hubris.

Ljubljana/London, 16 June 2025


[1] IFIMES - International Institute for Middle East and Balkan Studies, based in Ljubljana, Slovenia, has held special consultative status with the United Nations Economic and Social Council (ECOSOC/UN) in New York since 2018, and is the publisher of the international scientific journal "European Perspectives."

[2] This article presents the Opening Address delivered by the Rt Hon Geoffrey Hoon, former UK Secretary of State for Defence, at the session "Understanding AI and Robotics" of the Global Academy for the Geo-Politico-Technological Futures (GPTF), Session IV, 12 June 2025. The views expressed in this article are the author’s own and do not necessarily reflect the official position of IFIMES.