Have you ever shared a document with your broader team, only to realize a glaring mistake slipped in: misinformation so off base that it could only be the result of an AI hallucination?
It’s the latest fear for PR professionals, who are tasked with a high volume of written output and may not yet be equipped with the newest essential career skill: AI literacy.
According to IBM, AI hallucinations are a phenomenon where a large language model (LLM), often a generative AI chatbot or computer vision tool, perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.
AI can increase our output and give PR professionals more time for creative endeavors. But it also creates a new set of challenges that requires upskilling.
Verify Everything Before You Publish or Share
AI is a great starting point for a blog, a pitch angle or research, but we’ve noticed a few things it tends to get wrong: names of important people, data points, quotes and quote attribution, reporter/outlet matching, and more.
That’s why we always think of AI as a new assistant or intern: it can contribute meaningfully, but you still need to double-check its work, because mistakes are likely.
In practice, that means double-checking source links and cross-referencing claims with a Google search. The rule of thumb: any verifiable fact needs to be checked by a human before it goes to a client or the media.
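For teams comfortable with light scripting, a first pass at link triage can be automated. Below is a minimal sketch in Python using the third-party requests library; the URL is a placeholder, and even a link that resolves still needs a human to confirm the page actually supports the claim.

```python
# Minimal sketch: flag AI-cited links that do not resolve.
# Requires the third-party "requests" library (pip install requests).
import requests

def check_links(urls):
    """Return (url, status) pairs so a human can review anything suspect."""
    results = []
    for url in urls:
        try:
            # HEAD is cheap; some servers reject it, so fall back to GET.
            resp = requests.head(url, allow_redirects=True, timeout=10)
            if resp.status_code >= 400:
                resp = requests.get(url, timeout=10)
            results.append((url, resp.status_code))
        except requests.RequestException as exc:
            results.append((url, f"unreachable: {exc}"))
    return results

if __name__ == "__main__":
    cited = ["https://example.com/2024-industry-report"]  # placeholder link from an AI draft
    for url, status in check_links(cited):
        print(url, "->", status)
```

A 404 or a timeout is an immediate red flag; a 200 simply moves the link into the “read it yourself” pile.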
Build Guardrails into Your Workflow
PR professionals working within an agency should understand how their leadership regulates employee AI use, if at all. Whether the policy is highly stringent or relatively lax, you should be familiar with what is expected of you.
We view AI as a way to speed up processes and push us to think differently. While we champion AI for creative work, we hold standard expectations to ensure data privacy:
- Prior to putting any document through an AI platform, it must be scrubbed of all identifiable names and features that trace back to the client (a minimal scrubbing sketch follows this list)
- If AI is in use, all verifiable facts are checked outside of the AI platform
- For anything media-oriented, write-ups and quotes are run through an AI detector such as ZeroGPT or GPTZero to confirm the writing is majority human-drafted
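On the first point, here is a minimal scrubbing sketch in Python. The client names and replacement token are hypothetical; a real workflow would maintain the term list per account and catch variants (nicknames, abbreviations) that a simple pattern match misses.

```python
# Minimal sketch: scrub known client identifiers from text before it
# goes into an AI platform. Names and token below are illustrative.
import re

CLIENT_TERMS = ["Acme Robotics", "Jane Doe"]  # hypothetical identifiers

def scrub(text, terms=CLIENT_TERMS, token="[REDACTED]"):
    """Replace every listed identifier with a neutral token."""
    for term in terms:
        text = re.sub(re.escape(term), token, text, flags=re.IGNORECASE)
    return text

draft = "Acme Robotics CEO Jane Doe will announce the launch next week."
print(scrub(draft))
# -> [REDACTED] CEO [REDACTED] will announce the launch next week.
```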
Also worth noting: depending on your agency’s AI platform of choice and membership level, you may have the ability to set your AI temperature. According to IBM, temperature controls the randomness of text generated by LLMs during inference. In other words, you can dictate the level of creative or “random” output your AI platform conjures up for a given prompt. Safer, more expected responses occur at low temperature settings, while more curveball-oriented responses appear at a higher temperature, which also brings an increased likelihood of AI hallucinations.
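For illustration, here is a minimal sketch using the OpenAI Python SDK, assuming an API key is set in your environment; the model name is one example, and other platforms expose an equivalent temperature control.

```python
# Minimal sketch: setting temperature with the OpenAI Python SDK.
# Assumes OPENAI_API_KEY is set in the environment; model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    temperature=0.2,      # low = safer, more predictable; higher = more "curveballs"
    messages=[{"role": "user", "content": "Draft three pitch angles for a B2B software client."}],
)
print(response.choices[0].message.content)
```

For research or fact-heavy tasks, a low temperature is the conservative default; save higher settings for brainstorming, where hallucination risk matters less.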
Train Your Team to Spot AI Hallucinations
No golden rules currently exist when it comes to spotting AI hallucinations. But we can learn as we go and share these lessons with the team around us.
A few suggestions we recommend as we collectively hone our AI literacy:
- If it’s too good to be true, it likely is. If AI gives you a super niche, shockingly perfect data point for the pitch angle you were looking for, it’s likely inaccurate or incomplete. Dig a bit deeper before running with it.
- Broken links and vague sourcing are red flags. If verifying a piece of information takes more than a couple of minutes, be wary.
- When asking for data or research of any kind, place written guardrails within your AI prompt (a sample prompt template follows this list). I typically ask that it only reference data that is less than two years old, U.S.-focused and drawn from academic/scholarly research, government sources or well-regarded public polling groups such as YouGov and Pew Research Center.
- Circulate examples when things go wrong! PR professionals learn faster in an agency that readily shares real-world examples of AI’s mistakes. (I’ll go first: my AI search recently referred to Microsoft’s CEO as “Nadella Satya” rather than Satya Nadella!)
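On the prompt-guardrail point above, here is a sample template in Python; the wording is illustrative and worth adapting to your own sourcing standards.

```python
# Minimal sketch: a reusable research prompt with sourcing guardrails baked in.
SOURCING_GUARDRAILS = (
    "Only cite data published within the last two years, focused on the U.S., "
    "and drawn from academic/scholarly research, government sources, or "
    "well-regarded public polling groups such as YouGov and Pew Research Center. "
    "Include a working link for every statistic you cite."
)

def research_prompt(question: str) -> str:
    """Append the standing guardrails to any research question."""
    return f"{question}\n\nConstraints: {SOURCING_GUARDRAILS}"

print(research_prompt("What share of U.S. adults trust AI-generated news?"))
```

Keeping the guardrails in one shared constant means the whole team asks for data the same way, which also makes it easier to spot when an answer violates them.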
Too many PR agencies today are paralyzed when it comes to deploying or regulating the use of AI within their teams. Rather than acting with strategic intent, they stall at the starting line amid internal debates or fear of getting things wrong in a rapidly evolving environment. This hesitation is understandable, but overcoming it will ultimately make the difference between success and failure for PR agencies.
The velocity of AI innovation demands a proactive, not reactive, strategy. Bospar has approached this challenge by taking smaller bites out of AI awareness, understanding and adoption, from launching the first PR counsel at scale through Push*E, to understanding GEO (generative engine optimization) and adjusting our strategy based on AI scraping. AI adoption doesn’t have a finish line, but thanks to a number of AI adoption sprints, we are well beyond the starting line.