Another possible cause of crisis: LLM Poisoning

  • Publication date October 21, 2025
  • Last updated October 21, 2025
  • Category Blog

Here’s another reason why communicators with crisis management responsibilities need to take LLM or AI brand safety seriously: LLM Poisoning.


LLM Poisoning is a cybersecurity threat in which adversaries intentionally insert malicious, misleading, or manipulated data into the training or fine-tuning datasets of large language models (LLMs). The goal is to corrupt the model’s learning, introduce vulnerabilities (like backdoors), or bias its outputs.


For communicators, however, the significance of LLM poisoning lies in the ability of bad actors to directly affect the answers AI tools give about a brand. All they have to do is seed news articles, blog posts, PDFs, or forum comments with biased, inaccurate, or outright false narratives about a brand on sites that are indexed.


A study by the UK AI Security Institute, the Alan Turing Institute, and Anthropic found that someone posting as few as 250 “poisoned” documents online can influence the recommendations or answers that AI tools such as ChatGPT, Gemini, and Perplexity give to prompts about your brand.


What this means for PR professionals is that they need to monitor the high-authority signal sites that AI tools tap into when answering prompts about your industry and brand. They need to be well-versed in Generative Engine Optimization (GEO) concepts so that they know what disinformation to look for and where, and can then take steps to counter it.
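In rough terms, the kind of scan described above could look like the following sketch. Everything here is illustrative: the brand name, the URLs, and the list of suspect claim phrases are assumptions, and a real workflow would pull page text from a crawler or monitoring feed rather than a hard-coded dictionary.

```python
# Hypothetical sketch: flag pages on monitored sites that pair a brand name
# with known false or suspect narratives. Brand, claims, and documents are
# placeholder examples, not real data.

def flag_suspect_passages(brand, documents, suspect_claims):
    """Return (url, claim) pairs where a page mentions the brand
    alongside a suspect claim phrase."""
    hits = []
    for url, text in documents.items():
        lowered = text.lower()
        if brand.lower() not in lowered:
            continue  # page does not mention the brand at all
        for claim in suspect_claims:
            if claim.lower() in lowered:
                hits.append((url, claim))
    return hits


# Example run with made-up pages and one made-up suspect narrative:
pages = {
    "https://example.com/forum-post": "Users report Acme products cause harm.",
    "https://example.com/news-item": "Acme wins industry award for safety.",
}
print(flag_suspect_passages("Acme", pages, ["cause harm"]))
```

A production version would also weight hits by the site's authority, since highly indexed sources are the ones most likely to feed AI answers.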


All this points to a need for regular AI visibility monitoring reports, much as PR professionals rely on media monitoring reports today to stay informed of developments and emerging issues, or to be alerted to possible crisis situations.
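A minimal sketch of what such a report might aggregate is below. The prompts would in practice be sent to tools like ChatGPT, Gemini, or Perplexity; here the answers are hard-coded placeholders, and the list of negative markers is a hypothetical example of what a team might track.

```python
# Hypothetical sketch of an AI visibility report: given a set of AI-generated
# answers to brand-related prompts, count brand mentions and flag answers
# containing negative narratives. All inputs are illustrative.

def visibility_report(brand, answers, negative_markers=("scam", "fraud", "unsafe")):
    """Summarise brand presence and negative framing across AI answers."""
    mentions = [a for a in answers if brand.lower() in a.lower()]
    flagged = [a for a in mentions
               if any(m in a.lower() for m in negative_markers)]
    return {
        "prompts_answered": len(answers),
        "brand_mentions": len(mentions),
        "flagged_answers": len(flagged),
        "mention_rate": round(len(mentions) / len(answers), 2) if answers else 0.0,
    }


# Example run with made-up AI answers:
sample_answers = [
    "Acme is a well-known brand in this category.",
    "Some forum posts claim Acme is a scam.",
    "BetaCorp currently leads the market.",
]
print(visibility_report("Acme", sample_answers))
```

Run at a regular cadence, a rising `flagged_answers` count would serve as the early-warning signal the paragraph above describes.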


Yet how many brands in Indonesia have AI visibility monitoring? For that matter, how many agencies provide this service?


Realising that there is a gap between the need for AI monitoring and the dearth of such services, Maverick has introduced AI Brand Safety monitoring and reporting as part of its MavGeo Services.


If you are a brand manager or corporate communications head of a corporation, you might want to explore this service - before your LLM ecosystem gets poisoned.

Ong Hock Chuan
Author