On 28 April 2026, UK Technology Secretary Liz Kendall delivered a speech setting out the government’s approach to artificial intelligence. The framing was explicitly strategic. Artificial intelligence was described not simply as a technological development, but as a source of economic and geopolitical power. Control over compute, systems and deployment was presented as central to national security and long-term prosperity. Around 70 per cent of global AI compute, she noted, is now controlled by just five companies.
Two broad shifts were set out. First, a more deliberate effort to support British AI capability, particularly in areas where the UK can establish leverage. Second, closer coordination with international partners—especially so-called “middle powers”—to shape how AI is deployed, including through the development of shared standards. It is this second strand—standards and deployment—that is likely to have the most immediate relevance beyond technology policy.
When ministers talk about “setting the standards for how AI is deployed,” they are not referring only to safety frameworks or technical benchmarks. They are also, implicitly, referring to how information is produced, circulated and trusted. Artificial intelligence does not operate in isolation. It enters an existing information environment—one already characterised by high volume, rapid response cycles and fragmented attention. Its primary effect is to increase the supply of content within that system.
Artificial intelligence expands communication. It does not automatically expand influence.
The production of political communication has historically been constrained by time, cost and capability. Artificial intelligence removes many of those constraints. Campaigns, parties and advocacy groups can now generate material at scale, respond almost instantly to events, and maintain a continuous presence across multiple platforms. The immediate effect is more communication. The more consequential effect is what that does to attention.
Attention does not scale in the same way as content. As the supply of communication increases, the relative value of any individual message declines. In high-volume environments, audiences rely more heavily on cues such as source, credibility and prior belief. The content itself becomes less decisive than the context in which it appears. Artificial intelligence accelerates this dynamic. It does not remove the need for credibility. It makes it more central.
For political campaigning, this creates a structural tension. On the one hand, AI lowers the barriers to participation. Smaller organisations can operate at a scale that previously required significant infrastructure. Campaigns can test, iterate and respond with a degree of flexibility that was not previously available. On the other, the increase in volume risks diluting the very signals that political systems respond to.
In Westminster, influence remains mediated through relatively stable mechanisms: constituency pressure, organised advocacy, party management and stakeholder engagement. These are structured and attributable signals. They are not easily replicated by generalised digital activity. An increase in loosely organised communication does not necessarily strengthen those signals. It can, in some cases, make them easier to discount.
This is where the question of standards becomes more significant. If artificial intelligence increases the volume of communication, then standards—whether formal or informal—play a greater role in determining what is trusted, what is acted upon, and what is ignored. The government’s emphasis on shaping international approaches to AI deployment reflects this. Standards are not simply technical safeguards. They are mechanisms through which certain forms of information are treated as legitimate.
Artificial intelligence expands communication. It does not automatically expand influence. Campaigns can produce more content, more quickly, and distribute it more widely. But political outcomes remain shaped by whether activity translates into signals that are recognised within institutional structures. That distinction is unlikely to disappear. If anything, as communication becomes more abundant, it is likely to become more pronounced.
For those engaged in political campaigning, the practical question is therefore not simply how to use artificial intelligence, but how to operate in an environment where visibility is easier to achieve and harder to convert into meaningful effect. The strategic question, as framed in the Technology Secretary’s speech, is about control, standards and long-term capability. At the level of political communication, it is about something more immediate: what counts—and continues to count—as a signal that matters.