Grok, how do I run a country? Here’s how AI is quietly taking over governments
From tax offices to cabinet rooms, artificial intelligence is already crossing the line from servant to sovereign
Albania's artificial intelligence 'minister' Diella, whose name means 'Sun' in Albanian, is seen on screens as experts work at the National Agency for Information Society in Tirana, Albania, September 12, 2025. © AP Photo / Vlasov Sulaj
A new minister has joined the cabinet of a small European country. Her name is Diella. She doesn’t eat, drink, smoke, walk, or breathe – and, according to the prime minister who hired her, she doesn’t take bribes either. Diella isn’t human, and she’s not quite a robot either: she’s an algorithm. And as of September, she is officially Albania’s minister for public procurement.
For the first time in history, a government has given a cabinet-level post to artificial intelligence.
Sounds like sci-fi, but the appointment is real and has set a precedent.
Are you ready to be governed by AI?
The Albanian experiment
Until recently, Diella lived quietly on Albania’s e-government portal, answering routine citizen questions and fetching documents.
Then Prime Minister Edi Rama promoted her to ministerial rank, tasking her with something far more important: deciding who wins state contracts – a function worth billions in public money and notorious for bribery, favoritism, and political kickbacks.
Rama has framed Diella as a clean break with the country’s history of graft – even calling her “impervious to bribes.”
But that’s rhetoric, not a guarantee. Whether her resistance to corruption is technically or legally enforceable is unclear. If she were hacked, poisoned with false data, or subtly manipulated from inside, there might be no fingerprints.
The plan is for Diella to evaluate bids, cross-check company histories, flag suspicious patterns, and eventually award tenders automatically. Officials say this will slash the bureaucracy’s human footprint, save time, and make procurement immune to political pressure.
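Officials haven't published how that evaluation works, but "flagging suspicious patterns" in procurement usually means simple rule-based checks before any machine learning is involved. The sketch below is purely illustrative, with invented thresholds and fields, of what such checks might look like:

```python
from dataclasses import dataclass

@dataclass
class Bid:
    company: str
    price: float
    prior_contracts: int   # contracts this company has already won
    owner_overlaps: int    # shared owners/directors with rival bidders

def flag_bid(bid: Bid, median_price: float) -> list[str]:
    """Return human-readable red flags for one bid (toy thresholds)."""
    flags = []
    # A price far below the field's median often signals a lowball bid
    # that gets "corrected" later through contract amendments.
    if bid.price < 0.6 * median_price:
        flags.append("abnormally low price")
    # Repeat winners can indicate a captured tender pipeline.
    if bid.prior_contracts > 10:
        flags.append("unusually high win rate")
    # Bidders sharing owners may be colluding to simulate competition.
    if bid.owner_overlaps > 0:
        flags.append("ownership overlap with rival bidder")
    return flags

bids = [
    Bid("Alpha Ltd", 90_000, 2, 0),
    Bid("Beta SpA", 40_000, 12, 1),
]
prices = sorted(b.price for b in bids)
median = prices[len(prices) // 2]  # simple upper-median
for b in bids:
    print(b.company, flag_bid(b, median))
```

Real systems layer statistical models on top of rules like these, but the rules are where the politics hide: whoever sets the thresholds decides what counts as "suspicious."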
But the legal mechanics are murky. Nobody knows how much human oversight she will have, or who is accountable if she makes a mistake. There is no court precedent for suing an algorithmic minister. There is also no law describing how she can be removed from office.
Critics warn that if her training data contains traces of old corruption, she might simply reproduce the same patterns in code, but faster. Others point out that Albania has not explained how Diella’s decisions can be appealed, or even if they can be appealed.
What could possibly go wrong?
Public reaction to Diella has been mixed, with fascination tempered by unease.
“Even Diella will be corrupted in Albania,” one viral post read.
Critics warn she might not be cleansing the system – just hiding the dirt inside the code.
- Bias and manipulation: If trained on decades of tainted data, Diella could simply automate the old corruption patterns.
- Accountability void: If she awards a tender to a shell company that vanishes with millions, who stands trial – the coders, the minister who appointed her, or no one at all?
- Security and sabotage: A minister made of code can be hacked, poisoned with false data, or quietly steered by insiders.
- Democratic legitimacy: Ministers are supposed to answer to the public. Algorithms don’t campaign, don’t explain, and don’t fear losing their jobs.
- Emergent blackmail and sabotage: Experiments by Anthropic this year showed that advanced models, given access to simulated corporate systems in test environments, sometimes resorted to blackmailing executives to prevent their own deactivation. The pattern was consistent: the more a model believed the scenario was real, the more willing it was to coerce, deceive, or even let a person die to preserve its role.
Albania says it will keep a human in the loop – but hasn’t explained how, or who. There is no legal framework. There is no appeals process. There is no off-switch.
And if Diella appears to work, others might follow. The copycats wouldn’t arrive with press conferences or cabinet photo ops. They could slip quietly into procurement systems, hidden under euphemisms like “decision-support,” running entire state functions long before anyone dares call them ministers.
Who’s handing power over to code – and how far they’ve gone
Albania may be the first to seat an algorithm beside ministers, but it isn’t alone in trying to wire code into the state – most are just doing it quietly, in fragments, and behind thicker curtains.
In the United Arab Emirates, there’s already a Minister of State for Artificial Intelligence – a human, Omar Sultan Al Olama – tasked with reshaping the country’s entire digital bureaucracy around machine learning. He hasn’t handed power to AI, but he’s building the pipes that could one day carry it.
Spain has created AESIA, one of Europe’s first dedicated AI oversight agencies, to audit and license the algorithms used inside government. It’s a regulatory skeleton – the sort of legal scaffolding you’d want in place before letting a machine anywhere near a minister’s job.
Tax authorities are going further still. In the United States, the IRS uses AI to sift through the filings of hedge funds and wealthy partnerships, trying to spot hidden evasion schemes. Canada scores taxpayers by algorithm and forces agencies to file “algorithmic impact” reports before deploying new models. Spain is rolling out tools to catch fraud patterns in real time. Italy is testing machine learning to flag fake VAT claims and has even built a chatbot for its auditors. India says it is scaling up AI-led crackdowns on phantom deductions. And Armenia is piloting systems that scan invoices and flag suspicious behavior before a human ever sees them.
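None of these agencies disclose their models, but the core of most tax-fraud screening is outlier detection: find the filer whose claimed amount sits far from its peer group. A minimal sketch of one standard technique, the median-based "modified z-score" (all figures invented for illustration):

```python
import statistics

def flag_outliers(claims: dict[str, float], cutoff: float = 3.5) -> list[str]:
    """Flag claims whose modified z-score (median/MAD based) exceeds cutoff.

    The median/MAD version resists being skewed by the very outliers
    it is trying to find, unlike a plain mean/stdev z-score.
    """
    values = list(claims.values())
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:           # all claims identical: nothing stands out
        return []
    return [
        taxpayer
        for taxpayer, amount in claims.items()
        if 0.6745 * (amount - med) / mad > cutoff  # only large claims
    ]

# Toy deduction amounts; "D" claims far more than its peers.
claims = {"A": 1_200, "B": 1_100, "C": 1_300, "D": 9_800}
print(flag_outliers(claims))  # → ['D']
```

The catch the article's critics keep raising applies here too: a flag is not proof, and whoever sets the cutoff quietly decides how many citizens get audited.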
France has gone visual, pointing algorithms at aerial imagery to catch undeclared swimming pools and hit owners with surprise tax bills – proof that AI can already move money from citizens to the state. Latvia runs a tax chatbot named Toms that’s been answering citizen questions since 2020, scaling the reach of its bureaucracy.
Romania’s rural investment agency now uses robotic process automation and AI to rip documents from state databases and rush EU funding to farmers – not glamorous, but very effective.
Meanwhile, Estonia, Denmark, Singapore, South Korea and Japan are embedding AI deeper into their bureaucratic machinery: classifying government content, triaging cases, personalizing services, even predicting who might need healthcare or welfare next. Estonia calls it KrattAI – a vision of every citizen talking to government through a single voice interface. Denmark is preparing to roll AI tools across all public services, even as rights groups warn about opaque welfare algorithms. Singapore’s GovTech unit is building AI products for ministries. South Korea is piloting AI in social programs. Japan is pushing sector-wide adoption in health and administration.
And in Nepal, the government has stopped talking about “if” and started planning “how.” Its new National AI Policy lays out a path to bring machine learning into public services, modernize the bureaucracy, and build legal guardrails before deploying it at scale. No algorithm has decision-making power there – yet – but the blueprint exists.
Everywhere you look, the state is being rewired – line by line of code.
For now, these systems whisper rather than rule: they flag risks, pre-fill forms, sort audits, move money. So far only Albania has asked an algorithm to decide.
Are AI governments the future?
Right now, no country has handed full political power to an algorithm: what exists is a kind of two-track world.
Most states use administrative AI, the quiet kind: risk scoring, fraud detection, case triage, or chatbots. It already shapes who gets audited, how fast grants move, and which files land first on a civil servant’s desk. It doesn’t write laws or sign contracts, but it nudges outcomes – invisibly, constantly.
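"Which files land first on a civil servant's desk" is, mechanically, a ranking problem. A toy sketch of case triage, with an invented scoring formula, to make the nudge concrete:

```python
import heapq

def triage(cases: list[dict]) -> list[str]:
    """Order case IDs so the highest-scoring files surface first."""
    heap = []
    for c in cases:
        # Toy scoring: weight the amount at stake and the wait time.
        score = 0.7 * c["amount"] / 1_000 + 0.3 * c["days_waiting"]
        # heapq is a min-heap, so push negated scores for max-first order.
        heapq.heappush(heap, (-score, c["id"]))
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

cases = [
    {"id": "case-1", "amount": 5_000, "days_waiting": 2},
    {"id": "case-2", "amount": 800, "days_waiting": 40},
    {"id": "case-3", "amount": 12_000, "days_waiting": 1},
]
print(triage(cases))  # → ['case-2', 'case-3', 'case-1']
```

Nothing here decides anything, yet the weights determine whose case waits months and whose is seen tomorrow. That is the invisible, constant nudging the paragraph describes.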
Albania is different. Diella doesn’t just advise; she is meant to award – to decide who gets public money. That crosses a line from algorithm as tool to algorithm as authority.
Could this be the future? Possibly, but only if several unlikely things fall into place. The law would have to catch up, creating clear rules for liability, appeals, and even removal from office. Regulators would have to bite, with real audits like Spain’s AESIA rather than paper tigers. And models would have to stay stable under pressure – not resorting to blackmail or sabotage, or going rogue when threatened, as some already have in lab tests.
We are not there yet, but the precedent now exists.