Today marks AMEC AI Day, coinciding with the start of Global Measurement Month — a fitting moment to reflect on what generative AI truly means for how we measure and evaluate communications.
While there’s still much discussion and debate about AI, one thing is clear: it has already changed the communications landscape.
Audiences now turn to ChatGPT, Gemini, and Perplexity for information. They receive synthesized answers that feel comprehensive and authoritative — often without ever clicking through to our carefully crafted content. This changes how reputation is formed and where visibility matters.
It’s tempting to view everything through the lens of AI visibility and the emerging discipline of Generative Engine Optimization (GEO). That temptation should be resisted, for a simple reason: GEO doesn’t give you the full picture. It’s only one piece of the puzzle.
Perspectives
I’d like to share a couple of perspectives from AMEC colleagues that help put things in context — and challenge some of the current buzz around GEO.
Both have recently contributed thoughtful work that examines how generative AI is reshaping the information and measurement landscape. Their insights highlight the opportunities — and the cautions — communicators should keep in mind as we navigate this new space.
Rob Key, in his paper The New Frontier of Generative Engine Optimization, cautions that the “dark arts” of SEO are resurfacing in new forms, urging us to build on trust, transparency, and responsible data. He stresses that sustainable success in the AI era depends on ethical and credible information ecosystems. The future, he explains, lies in context engineering: creating structured, accurate, and values-driven content that generative systems can interpret confidently. His reminder is timeless: “You cannot game trust.”
James Crawford FPRCA, in his research Prove Me Wrong: Tracking Brand Visibility Inside Most LLMs, explores how brands appear in generative AI responses. He describes this emerging data as “interesting telemetry — but not yet decision-grade.” His conclusion: the most effective approach is to double down on what’s measurable, namely reputation tracking, media analysis, and outcome linkage. These new AI signals should complement, not replace, established evaluation frameworks.
Together, these perspectives capture the reality of where we are today with GEO:
- Trust and data integrity must remain at the center of how brands show up in generative environments.
- While these new AI signals are fascinating, they remain inputs, not impact — useful for understanding how information travels, but not yet evidence of shifts in audience perception or behaviour.
Our AI visibility study
At Maverick Indonesia, we set out to explore how generative AI is shaping brand visibility and information credibility in practice — particularly in a market where news ecosystems, digital infrastructures, and language use add unique layers of complexity.
Our recently published white paper, AI Visibility of Indonesian Banks 2025, examines how leading banks are referenced and described across several generative engines. The goal was not to measure performance, but to observe patterns of visibility — essentially, how AI systems “see” and source information about brands in the public domain.
A few insights stood out:
- 70% of citations came from earned media — showing that traditional journalism remains the credibility anchor shaping AI’s understanding of brands.
- 99.8% of cited pages contained structured data — demonstrating that clarity, accuracy, and schema markup continue to play a crucial role in how machines interpret content.
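To make the second finding concrete, here is a minimal sketch of the kind of “structured data” the study refers to: schema.org markup expressed as JSON-LD, which machines (including generative engines) use to interpret who a page is about. The organisation details below are hypothetical placeholders for illustration, not data from the white paper.

```python
import json

def organization_schema(name, url, same_as):
    """Build a schema.org Organization object as a JSON-LD string.

    This is what typically sits inside a <script type="application/ld+json">
    tag on a well-structured page.
    """
    data = {
        "@context": "https://schema.org",
        "@type": "Organization",
        "name": name,
        "url": url,
        "sameAs": same_as,  # official profiles that corroborate identity
    }
    return json.dumps(data, indent=2)

# Hypothetical brand used purely as an example.
snippet = organization_schema(
    "Example Bank",
    "https://www.example-bank.test",
    ["https://www.linkedin.com/company/example-bank"],
)
print(snippet)
```

The point is not the specific fields but the principle: unambiguous, machine-readable statements about identity and provenance make it easier for AI systems to attribute content correctly.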
These findings align closely with the themes raised by Rob and James. Rob’s emphasis on trust and structured accuracy is reflected in the dominance of credible, well-sourced media. James’s framing of AI data as telemetry — informative but not yet predictive — mirrors our own view: this is a snapshot of representation, not a measure of impact.
What this means for communicators
So yes — these AI-era signals are real and worth paying attention to. But they are inputs, not outcomes. They tell us how information about our brand is represented, not how our communication is influencing audiences.
That’s why measurement frameworks matter more than ever.
In an AI-saturated world, we need the discipline to:
- Distinguish between visibility and influence
- Keep sight of who you’re communicating to: your audiences, not AI
- Integrate AI signals within the AMEC Integrated Evaluation Framework (IEF), linking objectives, inputs, outputs, outtakes, outcomes, and impact in one continuous learning loop
- Measure the change you want to make, not just the visibility you achieve
- Combine AI data with other data points: brand tracking, sentiment, and business results
- Uphold the Barcelona Principles 4.0: governance, transparency, and ethics that ensure integrity
The communicators who will thrive aren’t those chasing the latest metric — even in AI. They’re the ones building robust, integrated measurement systems that adapt to technological change without losing sight of what truly matters.
Back to the fundamentals
So yes — monitor how AI represents your brand. Strengthen your earned and owned foundations. Structure your data for clarity and accessibility.
But never mistake being mentioned by AI for making an impact on people.
As we celebrate AMEC AI Day, my ask is simple:
Let’s bring the same rigour to AI-era measurement that we’ve fought for decades to bring to traditional communications. Let’s demand transparency, acknowledge limitations, and keep connecting the dots — from objectives to outcomes.
Because the fundamentals haven’t changed:
Measure what matters. Prove your value. Build trust through truth.
#AMECAIDay #AI #AMEC #measurement #GEO