Trust is the foundation of good communications. But building trust and understanding is a real challenge in today’s AI-powered world, where deepfakes and synthetic newsrooms can manufacture reality faster than fact-checkers can blink, and where a lack of current, accurate information for AI to consume can distort people’s understanding of reality.
AI is Amplifying the Problems of Misinformation and Disinformation
Misinformation and disinformation are no longer fringe threats; they are strategic weapons, amplified by AI, that can damage reputations, influence markets and put employees at risk.
As the velocity of information accelerates, the potential damage from disinformation grows more acute.
Despite the risks, most leaders still aren’t ready. According to IBM, 63% of organizations lack AI governance policies, while nearly all of those that have experienced AI-related incidents admit they didn’t have adequate access controls in place. That gap can cost companies millions of dollars, along with their credibility.
Preparation Can Mean the Difference Between Leading the Conversation and Being Led by It
Too many brands assume they can improvise a crisis response when a deepfake, bot swarm or AI-generated smear campaign hits. That’s a mistake. Start work on a response plan now.
Perhaps you have a response plan, but you aren’t practicing it. Gather the troops to simulate scenarios and make sure every contingency is covered. Without practice, coordinated messaging and pre-approved communications channels, you can lose control of the narrative fast.

Creating a ‘Paper Trail’ Now Can Strengthen Your Position in the Future
Trust must be built long before it’s tested.
Consider a company falsely accused of environmental damage. If the company was transparent about its operations long before the crisis, publishing safety data, sustainability updates and community reports well in advance, it would be in a far better position to weather the storm. That transparency creates a digital “paper trail” of credibility. So when disinformation hits, the company doesn’t need to argue on social media; it simply points to the receipts.
Such reputational resilience can inoculate an organization against the AI-driven disinformation campaigns that are only going to get smarter, faster and harder to detect.
Understanding What AI is Saying About You and Helping AI Tools Get It Right Is Crucial
Sometimes, misinformation or the lack of information isn’t malicious, but it can still create problems. For example, one of our clients was planning a spinoff from Intel. Bospar’s team conducted an analysis of AI platforms and found that tools such as ChatGPT were pulling outdated web data about the company and surfacing inaccurate information. Our GEO team stepped in to effectively reprogram the AI by flooding the web with new authoritative content.
This supported a successful, accurately described launch that generated more than 500 stories and a fourfold increase in the company’s web traffic. What Bospar learned along the way also led our PR agency to introduce Audit*E, a tool that empowers companies to analyze their competitive positioning across multiple AI platforms. Audit*E differs from SEO tools because it focuses on large language model optimization (LLMO) and AI platform visibility: it reveals how companies can improve their performance on popular AI platforms, lets them benchmark against competitors and enables users to track improvements over time.
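The kind of audit described above can be illustrated in miniature. The sketch below is not Audit*E and assumes nothing about its implementation; it simply shows the general idea: compare an AI platform’s answer about a company against a checklist of current, authoritative facts to flag outdated or missing coverage. The company details and the sample answer are hypothetical placeholders.

```python
# Hypothetical sketch of an AI-visibility audit (not Audit*E's method).
# Idea: ask each AI platform about the company, then check the answer
# against a checklist of current, authoritative facts.

def audit_answer(answer: str, fact_checklist: dict[str, list[str]]) -> dict:
    """Report which facts are reflected or missing in a model's answer.

    fact_checklist maps a fact label to keyword variants that count as
    evidence the answer reflects that fact.
    """
    text = answer.lower()
    covered, missing = [], []
    for fact, keywords in fact_checklist.items():
        if any(kw.lower() in text for kw in keywords):
            covered.append(fact)
        else:
            missing.append(fact)
    total = len(fact_checklist)
    return {
        "covered": covered,
        "missing": missing,
        "coverage": len(covered) / total if total else 0.0,
    }

# Example: a fictional spinoff ("NewCo") whose independence and new
# leadership should appear in an up-to-date answer.
checklist = {
    "new company name": ["NewCo"],
    "independent of former parent": ["independent", "spinoff", "spun off"],
    "current CEO": ["Jane Doe"],
}

stale_answer = "The product line is a division of its former parent company."
report = audit_answer(stale_answer, checklist)
print(report["missing"])  # facts the model failed to surface
```

Running such a check across several platforms, before and after publishing fresh authoritative content, is one simple way to track whether AI tools are picking up the corrected story.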
The Bottom Line
AI is already extraordinarily powerful, and in the coming months, agentic AI systems will start making faster, more independent decisions, many without human oversight.
That makes proactive communication even more critical.
By formulating and practicing a crisis response plan, and by publishing current, honest data for both people and AI to consume, businesses are better positioned to defend themselves against misinformation and disinformation, create clarity, and build or rebuild trust.