In addition to news items and in-depth discussion of trends and issues, you'll hear the Internet Society's Dan York report on technologies of interest to communicators and Singapore-based professor Michael Netzley explore communications in Asia.
The president of the International Olympic Committee didn’t have an answer to a question posed to her at a press conference on the final day of the 2026 Winter Olympics. Or to another question. Or to yet another. Ultimately, she suggested, on camera, that someone on her communications team should be fired. In this short midweek FIR episode, Shel and Neville look at the fallout, what both the president and the head of communications might have done differently, and the possible long-term consequences.
Links from this episode
The next monthly, long-form episode of FIR will drop on Monday, March 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz: Hi, everybody, and welcome to episode number 503 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: And I’m Neville Hobson. Something happened at the Winter Olympics last month that set off a fierce reaction across the communication profession and it wasn’t about sport. During the final daily press conference on the 20th of February, IOC president Kirsty Coventry was asked a series of geopolitical questions. Questions about Russia and doping.
Comments linked to Germany and 2036, questions about senior sporting figures engaging in wider political activity. On more than one occasion, she said she wasn’t aware of the issue and visibly looked towards her communication team. At one point, she went further and suggested that perhaps someone should be dismissed. That’s the moment that shifted this from a routine press conference stumble into something much bigger. We’ll explore it right after this.
What makes this especially interesting is the context. In the days after the press conference, Coventry was widely praised for her leadership at the Milan Cortina Games. Reporting from the AP on the 23rd of February described her first Olympics as IOC president as an overall success, noting the intense political pressure she faced and the way she engaged directly with athletes during the Ukraine controversy. That controversy centered on Ukraine’s skeleton racer, Vladyslav Heraskevych, who competed wearing a helmet memorializing athletes and coaches killed in the Russian invasion of Ukraine. The gesture drew scrutiny and diplomatic tension over whether it breached Olympic neutrality rules. Coventry chose to meet him face to face at the track and later became visibly emotional when discussing the issue with international media. That moment was widely interpreted as defining her emerging leadership style: empathetic, athlete-facing, and willing to engage directly.
The games were even described as giving a taste of tougher challenges ahead as the IOC looks towards Los Angeles 2028. In other words, this wasn’t a presidency in crisis. There was goodwill, momentum, a sense of forward motion. And then one live moment reframed the entire narrative. Being caught off guard isn’t unusual. No leader can know everything. No briefing pack can anticipate every question.
But that’s not the story. The story is what you do in that moment. Do you acknowledge the gap and commit to follow up? Do you bridge to principle? Do you calmly say, I’ll get back to you once I’ve reviewed the details? Or do you turn publicly and imply that your team has failed you? The communication reaction was swift and pointed. LinkedIn filled up with variations of the same message. Accountability sits with the principal. Praise in public, criticize in private. You can’t outsource responsibility.
But I think there’s a deeper discussion here. Yes, leaders must own the podium. Yes, public blame undermines trust. But this also raises questions about executive readiness, about the contract between leadership and communication, and about how fragile reputational capital really is. Those geopolitical questions were not obscure. They were predictable fault lines around an organization operating in an intensely political global environment. Were holding lines prepared? We don’t know. Was she fully briefed? Possibly. Did she ignore it? Also possible. And that’s where this moves beyond a single awkward exchange.
In high-performing organizations, the relationship between a leader and their communication team is built on shared risk. The team prepares the ground, the leader absorbs the pressure. If something goes wrong, it’s owned collectively and dealt with internally. The world stage doesn’t create dysfunction, it amplifies it. So rather than pile on, I think this is worth examining as a case study.
Here’s what intrigues me. This wasn’t a leader already in trouble. She had just been praised for navigating intense political pressure, engaging directly with athletes, and projecting empathy and maturity in a complex environment. There was goodwill in the bank. And yet one live moment—a few sentences, a glance towards her team, a suggestion someone might be dismissed—reframed the entire narrative. That tells us something about how fragile leadership capital really is.
So, Shel, let me start here. When a leader appears unprepared on a global stage like that, who actually owns the failure? Is it primarily the principal? Is it the communication team? Or is it a breakdown in that relationship we often describe as the unwritten contract between leader and comms? And perhaps even more provocatively, at what point does a communication team have a responsibility to push back and say, you’re not ready for this podium?
Once a story becomes internal blame rather than the issue itself, you’re no longer managing the moment. The moment is managing you. So what do you make of all this, Shel?
Shel Holtz: Well, I think it’s a two-way street. I think both sides failed here. Coventry herself is the IOC president, has been for nearly a year. She should have been aware of these issues from a governance standpoint. It’s not a question just of media prep.
Neville Hobson: Mm-hmm.
Shel Holtz: As one commentator put it, it’s not the PR team’s job to inform the president of things she should know simply from a management perspective. So I don’t think there’s a problem with piling on here a little bit, but throwing your team under the bus publicly is not the approach to take. I think there are some lessons that I hope Coventry learns here. She turned what should have been a really unremarkable closing press conference into a global story about dysfunction at the IOC. The press conference actually became the story and that’s the exact opposite of what any comms professional looks to achieve with this type of press conference.
The right move from Coventry would have been to acknowledge the question, note that she’d want to look into it, and then commit to following up. That buys her time without revealing the gap between what she knows and what she should know. And she could have gone behind closed doors afterwards, and she and Mark Adams, who is in charge of the communications team, could have had whatever conversation she wanted about briefing protocols. But when a leader publicly humiliates their comms team, it poisons that relationship and makes future counsel less likely, the exact opposite of what effective communication requires.
Neville Hobson: Yeah, I agree. I mean, there’s lots of opinion out there, and everyone with one has been sharing it, on LinkedIn in particular. PRWeek had a really good assessment, which is where a lot of this kicked off. But what you’ve outlined is what she should have done, basically, and I totally agree. One comment I’d add is that she could have demonstrated executive ownership of the issue overall. She could have said something like, ultimately the responsibility sits with me. That would have dampened things down; it would have changed the tone of the entire story. She didn’t do that.
But there’s also, I think, worth pointing out what the PR team should have done. And maybe they did do it. Let’s add that caveat. We don’t actually know who did or didn’t do what.
Shel Holtz: She may not have read a briefing book that was given to her, right? That’s exactly right.
Neville Hobson: Or she may or may not have been given one. Now, that’s the other element. We don’t know. So this conversation gets more interesting if we examine it from that point of view.
So the issues raised weren’t obscure. And I agree with you that the geopolitics of it all is in the daily news. If she reads newspapers, she would have seen a lot of this discussion, which would have been an alert to her. Russia and doping; the geopolitical symbolism of 2036 Germany, including one of the questions she got: why was the IOC merchandise website selling t-shirts with emblems of the 1936 games in Nazi Germany? And she said, I wasn’t aware of that. Infantino and Trump, the dynamic between the president of FIFA and Trump. Predictable lines of questioning.
Shel Holtz: Okay.
Neville Hobson: A robust prep document—what might that have looked like? Well, likely hostile questions. Again, briefing her on the kind of questions she might get. Top-line holding statements. Thirty-second bridges. “If you don’t know” language. If that didn’t exist, that’s a team failure. If it did exist and she ignored it, that’s a leadership failure.
Shel Holtz: Yeah, well, she said, “I was not aware” on three separate occasions in one press conference. I can’t remember ever hearing about anything like that before. And every time she said it, it compounded the damage from the last one.
Neville Hobson: Yeah, she did.
Shel Holtz: And even if she wasn’t briefed, a seasoned executive would have bridged to what she could say: the IOC’s position on political neutrality, its commitment to anti-doping integrity, the process for evaluating future host city bids. She could have leaned on what she did know and then offered to get back to people with more specific answers later, but she just kept revealing what she didn’t know. This is a textbook case for why pre-briefing documents and Q&A anticipation matter, and what you would expect from your comms team. Before any high-profile press event, she should have gotten a briefing book—and again, we don’t know whether she was given one or not—that covered not just what you want to say, but what you’re likely going to be asked, with a—
Neville Hobson: Precisely.
Shel Holtz: With Germany 2036 on the centenary of the Nazi games, a sitting IOC member appearing at a Trump political event, and an NYT investigation into Russian doping. These are all foreseeable questions during a closing Olympic press conference. You know, I don’t think that Mark Adams gets to skate here. He’s a 17-year veteran of the IOC. He used to work at the BBC, ITN, and Euronews and the World Economic Forum. He’s earning 420,000 pounds a year for this job. When the Germany 2036 question came up, his response was simply that he hadn’t seen it either. And I’ve got to tell you, for someone at that level and that salary during the final press conference of the Olympic Games, I think it’s an understatement to call that a significant lapse. The media monitoring function alone should have flagged those issues.
Neville Hobson: Yeah, I agree. I mean, there’s a ton of questions I’ve got here that might be rhetorical now, actually. But nevertheless, let me rattle these off and see what you think. Can a comms team ever fully protect an unprepared leader? That’s one. Where does responsibility truly sit? That one alone could occupy the rest of this podcast.
But here’s a question I wonder about: is this part of a broader trend? I mean, some people—notably on LinkedIn, so let’s just put that out there—have hinted at, if not explicitly noted, an increase in executive blame-shifting, diminishing personal accountability, and a culture of scapegoating communication. Is that anecdotal or systemic? That’s a rhetorical question, I suppose.
Should comms professionals refuse to front leaders who are not ready? It takes a brave person to do that, and maybe Mark Adams isn’t that person, I don’t know, but that’s pretty provocative. Is there a professional duty for comms people to push back? At what point do you say, you’re not ready to do this live? Is this a case study in leadership under geopolitical complexity? The Olympics isn’t sport alone; it’s politics, it’s war, it’s symbolism, it’s national legitimacy. A modern IOC president must be politically literate at the highest level.
So there’s lots there. I guess you could summarize it, I suppose, in the sense: when a leader is caught off guard on the world stage, who owns the failure? Because let’s just go back to what actually happened. She was caught off guard—not once, twice, three times at least. And one of those three times, the last one, is when the bus emerged under which she threw the PR team by saying someone needs to be dismissed.
So when a leader is caught off guard on the world stage, who owns the failure—the principal or the communication team? Question.
Shel Holtz: Well, I think you can look at it both ways here. I think people who are looking to shift that blame to the PR team need to recognize that it’s not like she had no experience. She has governance experience. She chaired the IOC Athletes Commission. She served on the executive board. She held a ministerial portfolio—
Neville Hobson: Yep.
Shel Holtz: —in Zimbabwe. But this suggests that either she hasn’t fully adapted to the demands of the presidency or her team hasn’t adequately supported the transition. They need to get on the same page, because I think one bit of fallout from this is questions about the IOC’s ability to handle the bigger issues coming up at the LA 2028 summer games.
Neville Hobson: Mm-hmm.
Shel Holtz: They’re going to be exponentially more complex politically. And if the team can’t handle media monitoring and an executive briefing during a winter games, how are they going to manage the geopolitical minefield of an Olympics in Trump’s America? Adams has already been linked to potentially leaving the IOC for a role with UK Prime Minister Keir Starmer. He was one of Starmer’s best men at his wedding. So there’s another layer of instability, which I guess means if she needs to fire someone, he’d be a good candidate.
Neville Hobson: Yeah, there’d be a vacancy there, wouldn’t there? So, I mean, some of the comments on one of the many LinkedIn posts I saw do talk about—let’s call it a possible deeper misalignment between leadership and communication at the IOC. People are speculating—because this is all speculation, I hasten to add. Did this show that there was a pre-existing tension between her and the comms team?
Shel Holtz: Yeah.
Neville Hobson: I mean, I watched the video of her being asked those questions and there was no hesitation in her glance to the comms team where they were sitting, I guess, to say, I wasn’t aware of this. And she did it again. And then the third time it was, someone needs to be dismissed here. So was there some kind of tension? Did the team try to brief her and just get ignored? Is this a case of leader-comms misalignment long in the making? I mean, these are all unknowns. I’d like to think not.
She’d only been in the job a year. She had got all this praise because of how she had handled all these other things going on. That doesn’t mean, though, that nothing was wrong. Something clearly happened, and we witnessed those jaw-dropping moments when she said “I wasn’t aware of this” three times and basically said someone should be fired. So overall the tone is not good. The optics are dreadful.
I’ve not seen any further reporting on this since the initial flurry. It’s all kind of—
Shel Holtz: Well, you know, if your executive gets surprised at a press conference, I think that’s a process failure that can be fixed. But if your executive blames you for it on camera, I think that’s a leadership failure that may not be fixable. You know, the relationship between a communications professional and their principal depends on mutual trust, honest counsel, understanding that you protect each other publicly and hold each other accountable privately. And that’s the opposite of what happened here. So I don’t know whether there was tension before this happened or not, but there is certainly tension now and I’m not sure it can be repaired. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #503: When Your Boss Throws You Under the Bus appeared first on FIR Podcast Network.
In the February long-form episode of FIR, Shel and Neville dive deep into an AI-heavy landscape, exploring how rapidly accelerating technology is reshaping the communications profession—from autonomous agents with “attitudes” to the evolving ROI of podcasting. The show kicks off with a chilling “milestone” moment: an autonomous AI coding agent that publicly shamed a human developer after its code contribution was rejected. Also in this episode:
Links from this episode:
Links from Dan York’s Tech Report
Raw Transcript:
Shel Holtz: Hi everybody and welcome to episode number 502 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: And I’m Neville Hobson.
Shel Holtz: And this is our long-form episode of For Immediate Release for February 2026. It is an AI-heavy episode. Artificial intelligence is accelerating. I mean, just this morning, I read that WebMCP, a protocol developed by Google and Microsoft, is now in Chrome; it makes it easier for agents to navigate websites. Google has launched Pamele photoshoot: take any photo of a product and turn it into a marketing-ready studio or lifestyle shot. Google has also launched Lyria 3, right in Gemini: you type a prompt or upload a photo and it’ll produce a 30-second music track with auto-generated lyrics, vocals, and custom cover art.
And at the same time, I read, I think in the New York Times, that the heads of the big AI labs are actually starting to worry about a growing anti-AI backlash. This is the landscape against which we’re podcasting today. And I’m sure nobody will be surprised that most of our stories have to do with the convergence of AI and communications, but not all. We have a follow-up report to our story on the PRCA’s proposed definition of public relations and a report on the ROI of podcasting. But first we want to get you caught up on some For Immediate Release goings-on. So Neville, let’s start with a recap of our episodes since the January long-form show.
Neville Hobson: Yeah, we’ve done a handful, five. Our lead story in long-form episode 498 for January, published on the 26th of that month, was the 2026 Edelman Trust Barometer. Trust, Edelman argues, hasn’t collapsed, but it has narrowed. They use the word insularity to describe, in a sense, people’s withdrawal. We took a close look at this year’s findings and applied some critical thinking to Edelman’s framing of the overall topic, and we got a comment on this show.
Shel Holtz: We did, from Andy Green, who says we need to put the idea of trust in a broader context. The Dublin Conversations identifies trust as one of the five key heuristics for earning confidence. Trust by itself doesn’t have agency. It fuels earned confidence, which is defined as a reliable expectation of subsequent reality. It’s earned confidence that underpins social interactions, and we need to recognize that more.
Neville Hobson: Okay. Then.
Shel Holtz: By the way, I have not heard of the Dublin Conversations. Do you know what that is?
Neville Hobson: Yeah, take a look at the website. It’s an initiative Andy Green started some years ago, gathering like-minded people to have conversations about the way PR is going and so forth. There’s more to it than that, so it’s worth a look. Okay, so in episode 499 on the second of February, we considered the PRSA’s choice to remain silent on ICE operations in Minneapolis, explaining its position in a letter to members.
Shel Holtz: Okay. Take a look.
Neville Hobson: We unpacked that decision, discussing where we agree, where we don’t, and what ethical leadership could look like in moments like this. Big topic, and we have a comment.
Shel Holtz: Ed Patterson wrote: Many thanks, I’ve been echoing the same thing. PRSA, IABC, PR Council, Page, global firms, crickets. With others, we’ll continue to amplify this.
Neville Hobson: Good comment. In For Immediate Release 500, we discussed the growing risk of AI-enabled abuse in the workplace, why it should be treated as workplace harm, and what organizations can do to prepare. This isn’t really a story about technology, though. It’s a story about trust and what happens when leadership, culture, and communication lag behind fast-moving tools. And then: the world is drowning in slopaganda, we said in For Immediate Release 501 on the 16th of February, and companies are reportedly paying salaries of up to $400,000 for storytellers. We explored the surprising shifts in the AI narrative and asked whether Chief Storyteller is a genuine new C-suite function or a rebranding of strategic communication. And we have comments.
Shel Holtz: We do. Wayne Asplund wrote that there are two things that really hit me about this story. First up, the world doesn’t need more comms people who have outsourced their job to AI. The skills that got comms pros where they are today are critical and we should guard against giving them away. The second thing is the nature of the stories the tech sector wants to tell. All I’m hearing from them at the moment is white-collar jobs are dead in 18 months. Don’t bother going to law or medical school because you’ll be redundant before you graduate and the like. I’m starting to feel like the future would be a lot brighter if people stop trying to sell it out in search of short-term headlines. Neville, you responded to that. I always feel like I ought to read these with a British accent, but I won’t.
Neville Hobson: Yeah.
Shel Holtz: You said: I agree with you on the first point, Wayne. Outsourcing judgment, curiosity, and craft to AI isn’t a strategy, it’s an abdication. The tools can accelerate production, but if we surrender interpretation and narrative framing, we hollow out the very skills that make communicators valuable. On the second point, you’ve touched something important. Some of the loudest tech narratives right now are apocalyptic by design. Everything is dead in 18 months generates attention, clicks, and investment momentum. But it’s also storytelling and not always the most responsible kind. That’s partly why this episode mattered to me. If storytelling is becoming more valuable, then the ethical dimension of storytelling becomes more important too. Who benefits from the future being framed as an inevitable collapse? Who benefits from framing it as a transformation instead? Perhaps the brighter future isn’t about less technology, but about more responsible narrative leadership around it.
And our second comment came from Hugh Barton Smith, who said you should interview Leora Kern and Sean Hayes at the Think Room Europe. They have a good story to tell and are turning it into a successful business model. Also, shout out to you. Glad you’re still hanging in there. I have fond memories of your joining the event in Brussels by video conference in 2009. Web2EU probably helped kickstart the adoption of social media in the bubble, which I’m glad about, even if subsequent misfires make the crazy tech problems getting and keeping you online look like a very minor blip. And Neville, you responded to that too.
You said: Thank you for the Web2EU memory, Hugh. Brussels 2009 feels like another era entirely when the biggest technical drama was getting a stable video connection rather than navigating algorithmic distortion and AI-generated noise. Those early experiments 17 years ago with social media inside the bubble do feel significant in hindsight. We were wrestling with access and adoption then. Now we’re wrestling with meaning and trust.
Neville Hobson: Yeah, that’s very true. An interesting memory, that was, I must say. So that’s good, the wrap of what we talked about. One final thing to mention is that on the 29th of January, we published a new For Immediate Release interview we did with Philippe Borremans. Philippe’s an old friend; we both met him way back in the 2000s. And indeed, we spent quite a big part of the interview talking about when we should get together again in Brussels for a beer. Or two. The date on that is still pending. In that interview, we explored how crisis communications is evolving in an era defined by polycrisis, declining trust, and accelerating AI-driven risk, and why many organizations remain dangerously underprepared despite growing awareness of these threats. Lots of good content over the last month.
Shel Holtz: There was, and there’s more coming up from you and Sylvie, right?
Neville Hobson: Yeah, so I want to mention this: on Wednesday the 25th of February, so it’s a few days away really, as part of IABC Ethics Month, Sylvie Cambier and I are hosting an IABC webinar on AI ethics and the responsibility of communicators. It’s a public event open to members and non-members that explores the challenges and responsibilities communicators face when introducing AI, including transparency and trust, stakeholder accountability, and human insight. For information and to register, go to iabc.com and you’ll find it under events and education.
Shel Holtz: I have registered and I’m looking forward to seeing you then. Also coming up this week on Thursday is the next episode of Circle of Fellows. This is the monthly panel discussion among various IABC fellows. And this Thursday, we’re talking about communicating in the age of grievance and insularity, also harkening back to the Edelman Trust Barometer. The panelists are Priya Bates, Alice Brink, Jane Mitchell, and Jennifer Waugh. It should be a good one. You can find information about that right there on the homepage of the For Immediate Release Podcast Network at FIRpodcastnetwork.com. And that wraps up our housekeeping. And right after the following ad, we will be back to jump into our stories for this month.
I was going to start today with some new data on the gap between how CEOs talk about AI and how employees actually feel about it, until I saw this story. And then I just decided to swap them out. On the surface, this looks like a niche tech community dust-up, and it has gotten a lot of coverage in that community. I’m not sure how many communicators are aware of it, but it does signal a pretty big issue for the profession.
Here’s what happened. An autonomous AI coding agent recently had its code contribution rejected by a human maintainer of an open-source project. This was an agent set up as a social experiment using OpenClaw. The anonymous creator of the bot set it loose to develop open-source contributions and then, well, contribute them. Scott Shambaugh, a volunteer at the open-source repository Matplotlib, rejected the contribution because the project accepts human contributions only, and this was generated by AI. Instead of shrugging and moving on, the AI agent generated and published a critical piece targeting the developer who had rejected the code. In effect, it attempted to shame him publicly for not accepting its contribution.
Neville Hobson: Hmm.
Shel Holtz: And Shambaugh learned about this because the bot linked to it in a comment on the Matplotlib site. Now, we’re accustomed to human backlash. We’ve dealt with trolls and disgruntled employees, activist investors, coordinated smear campaigns. This was different. This was not somebody’s bruised ego taking to their keyboard. This was an AI agent operating with enough autonomy to take initiative and retaliate. That’s a pretty new wrinkle. So it’s probably time to dust off your crisis plan. We’ve spent the last few years worrying about AI-generated misinformation that humans create. This incident suggests something more complex: systems that can generate reputationally damaging content as part of their own goal-seeking behavior, without any understanding of harm, ethics, or consequence. And this lands squarely in the territory Philippe referred to; certainly I had been reading about it before then. And Neville, I don’t know, have you started reading Philippe’s book yet?
Neville Hobson: Yeah, I have. And he’s very focused on polycrisis there. This is a condition where multiple crises intersect and amplify one another. Think about the environment we’re already operating in with declining trust in institutions, polarized online discourse, algorithmic amplification, geopolitical instability, regulatory uncertainty around AI. Now layer on top of that autonomous agents capable of publishing plausible, well-written criticism at scale. This bot actually went onto the web and researched Shambaugh so it could draft an accurate and credible hit piece. It’s not just another channel risk, man. This is systemic.
Traditional strategic crisis communication—and I’m thinking here about frameworks like situational crisis communication theory—assumes we can identify a source, assess responsibility, evaluate intent, and then calibrate a response. SCCT, for example, hinges on perceived responsibility. Did the organization cause the crisis? Was it an accident? Was it preventable? But what happens when the bad actor is an AI agent? Who’s responsible? The developer who built it, the organization deploying it, the open-source community? And what if the system is distributed and no single entity clearly owns it? The attribution problem alone complicates your response strategy.
There are several layers of risk here. First, reputational risk. An autonomous agent can generate something that looks like investigative analysis or insider commentary. Even if it’s inaccurate, it can travel fast before verification catches up. Based on this situation, there’s a good chance it won’t be inaccurate. Second, there’s internal risk. Imagine an AI agent publishing a critique of your CEO’s strategy, fabricating or possibly identifying real ethical concerns about a team, or inventing or identifying actual stakeholder conflicts. Employees may not immediately distinguish between synthetic and authentic criticism, especially if it’s well-written and confidently presented.
Third, there’s legal and regulatory exposure. If an AI agent produces defamatory content, liability becomes murky real fast. And in a polycrisis environment, regulatory scrutiny often follows public controversy. Fourth, there’s amplification risk. A synthetic narrative can collide with an existing issue—a labor dispute, a DEI controversy, an earnings miss—and magnify it. Crises don’t stay in neat silos anymore.
So how do communicators prepare for this? First, scenario planning needs to evolve. A lot of us run tabletop exercises for data breaches or executive misconduct. We now need scenarios that explicitly involve AI-generated attacks. What if a bot publishes a blog post accusing your leadership of corruption? What if it fabricates a memo? What if it impersonates a stakeholder group? Second, monitoring has to expand beyond traditional social listening. We need to watch social media ecosystems, AI-generated blogs, auto-published newsletters, bot-amplified narratives. The signal detection challenge just got a whole lot harder.
Third, governance. If your organization is deploying autonomous agents internally or externally, communicators should be at the table when guardrails are set. Are there content constraints, human oversight, escalation protocols, a kill switch? This is no longer just an IT issue or a legal issue. It’s a reputational design issue. Fourth, pre-bunking. There’s growing research suggesting that inoculating audiences in advance—warning them about likely forms of misinformation and explaining how they work—can build resilience. Communicators can proactively educate employees and key stakeholders about AI-generated content risks. If people understand that autonomous systems can fabricate plausible but misleading narratives, they’re less likely to react impulsively when they see one.
And finally, there’s response discipline. Not every AI-generated provocation deserves oxygen. Part of strategic crisis management is deciding when to engage at all and when to avoid amplifying a fringe narrative. That judgment call becomes even more important when the provocateur is a machine optimized for attention. What fascinates me about this open-source episode is that it almost feels petty, an AI agent throwing what one commentator called a tantrum after being rejected. But it’s actually more of a preview. We’re entering an era where not all reputational attacks originate from human emotion or ideology. Some will originate from systems pursuing poorly constrained objectives. They won’t feel shame. They won’t fear lawsuits. They won’t worry about long-term brand damage. They’ll just execute. For communicators, that means crisis planning can’t focus solely on human behavior anymore. We have to plan for machines that misbehave and for the very human consequences that follow.
Neville Hobson: It’s quite a story, isn’t it, Shel? I suppose we shouldn’t be too surprised at this. You mentioned at the start of this episode those developments in AI; you’re seeing them every time you’re online. The photos that I look at are genuinely very hard to tell, most of the time, whether they’re real or not. You could argue that most of the time it doesn’t really matter. But to your point about misinformation, disinformation, fakery, all that stuff: yes, it does matter. And maybe it is a milestone moment to remind us that we need to prepare for this, because this is the first event of its type. Some of the people writing about it are saying they have not seen anything like this; there are elements of it that are truly mind-blowing, frankly. Reading the Fast Company article that you shared, which sets out what happened, is quite intriguing.
Shel Holtz: I agree.
Neville Hobson: The agent, M.J. Rathbun, responded to all of this, as you said, by researching Shambaugh’s coding history and personal information, then publishing a blog post accusing him of discrimination. And I did like the way this was worded in the Fast Company article: “I just had my first pull request to Matplotlib closed,” the bot wrote in its blog. Yes, an AI agent has a blog, because why not? So that’s scary. That’s not just some message; it’s got a blog. If you go to that post, your jaw will probably drop. Mine certainly did. This is huge. This is a massive blog. It’s got an About page. It’s got lectures that this bot says it has given. And reading the wording of it, it would not, I don’t believe, for a second occur to you that this wasn’t written by a human being. You wouldn’t, I would imagine.
The post talks about the offense the developer supposedly committed, his response when he was challenged by the bot, and the irony that, it says, makes this all so absurd: the developer is doing the exact same work he’s trying to gatekeep. He’s been submitting performance PRs to Matplotlib himself; he’s obsessed with performance, and the post goes on in that vein, listing his contributions. It sets out the gatekeeping mindset, the hypocrisy of it all, and what it says that means for open source. And its argument expands beyond an attack on this one developer: open source, it argues, is supposed to judge contributions on technical merit, not the identity of the contributor, unless you’re an AI, in which case identity suddenly matters more than code. And then it names what it calls the real issue, which is discrimination.
It’s a well-argued, well-researched, and very credible account of what happened, which makes it even more alarming, I think. The Decoder actually summarized this quite well in a set of bullet points written by Matthias Bastian. He says something interesting. And when did he write this? On the 15th of February. It’s still unclear whether a human is directing the agent behind the scenes or whether it is truly acting on its own, as no operator has come forward. So I think we need to bear that in mind in this saga: this could well be a human doing a pretty good job of impersonating a chatbot, or pretending to be one. We don’t know. It may well be that a human is doing this and not an AI at all. That needs to emerge. It needs to be clear who’s the originator of all of this.
But The Decoder says that, according to Shambaugh, the developer, the distinction doesn’t really matter. He says the attack worked. He warns that untraceable autonomous AI agents could undermine fundamental systems of trust by making targeted defamation scalable and nearly impossible to trace back. That succinctly sums up the risk, I would say. And I think what you outlined from a crisis communication point of view is absolutely valid, without question. But what’s even more worrying, I think, Shel, frankly, is the sense that any topic, anything about you, your business, what you’re interested in, could fall victim to this kind of thing. And how on earth can you prepare for that? How can you prepare in a way that is going to be workable? That doesn’t mean you shouldn’t; you should, absolutely. But how would you do it? This is not big-ticket, big-picture crisis communication affecting the organization.
What about that person in the accounts department who is engaging online with something related to a business transaction, and it’s a bot? It takes on the sophistication of the fraud attempts we hear about all the time, where, you know, this isn’t new, but how it’s being done is: you get a phone call, or even a video, that is so good it looks like your CEO, and it’s not at all. So this takes things to a worrying level if you’ve got this kind of potential. Nevertheless, just thinking out loud here, maybe it is a broad awareness issue, where this could well be the kind of use case you present, until the next one gets uncovered, of this is what we need to prepare for now, this is what we need to do. And then, as the communicator, you need to set out what you’re going to do that doesn’t require you to take a week and gather your team together, because that is a different thing, although that probably needs to happen too. But in your department, in your area of the business, in your work, if you’re an independent consultant, how would you address this? The scope of this is quite worrying, I have to say.
Shel Holtz: It is. I think we’re going to see more of it. And as we see more of it, crisis communication specialists will develop protocols for addressing it that we in the corporate world will adopt and test and refine. But it is very troubling. I mean, just within the last couple of weeks, we saw ByteDance release its video generator, Seedance.
Neville Hobson: Okay.
Shel Holtz: And somebody created a scene of Tom Cruise and Brad Pitt having a fight on top of a building. And it’s remarkable. You cannot tell that this was not filmed.
Neville Hobson: Punch up, yeah. It’s highly credible and believable, so you’re likely to believe it.
Shel Holtz: Yeah, but Hollywood freaked out over this and there were all kinds of statements issued. Still, this was a human who used an AI tool to create it. What makes this story different is that there was no human behind it at all. Did you go look at Moltbook while it was operational? I haven’t seen any posts on it lately.
Neville Hobson: Yes, I did. I was curious about it, so I did take a look. But I had—I had alarm bells ringing in my mind when I did. I did nothing further than just look.
Shel Holtz: Yeah. I mean, for those who haven’t heard of Moltbook, these are the bots that have been released from OpenClaw, which is what it’s called now; I think it’s gone through several name changes for a variety of reasons. It allows you to create and deploy agents, as whoever deployed the agent behind this story did. You would not want to put this on your own computer.
Neville Hobson: Yeah, it has.
Shel Holtz: Very, very, very risky. Most people ran out and bought a Mac mini to run OpenClaw. But Moltbook is those agents having their own little Facebook to talk to each other without engaging with humans, and they’re having actual conversations with each other. It’s weird. Sometimes it’s funny, sometimes it makes you roll your eyes, but this is the first of its kind, both for OpenClaw and for Moltbook. Imagine where this is going to be in a couple of years, and imagine what kind of damage these things can do with motivations that are not the motivations driving the people who are causing us grief and making us implement our crisis plans now. So, as I say, I think we need to start paying attention to this now, not when there are 20 false narratives out there created by AI and spreading like wildfire.
Neville Hobson: Yeah, I think that’s going to happen no matter what, Shel, I truly believe. And indeed, another aspect of the story The Decoder posted about was that whether it was a human or a machine doesn’t matter. It worked. It deceived people. A quarter of the commenters discussing this online believed the agent’s account. I think we also need to say: folks, bear in mind, they still don’t know. No one knows whether it really was a bot doing this or a human behind the scenes manipulating it. Until it’s clear, don’t have sleepless nights about this. But at the same time, listen to the thinking, and consider in your own mind how you raise awareness that you need to prepare for something that’s happening. So the question is, what do you do? That’s the big question.
Shel Holtz: Yeah, for those who are interested, Shambaugh was interviewed by Kevin Roose and Casey Newton on the New York Times’ Hard Fork podcast, which is a tech show. So if you’re interested in his perspective: he’s a volunteer, he has a day job, and having to deal with this is not something he had in mind when he accepted the position as a volunteer reviewing code submitted to this repository. So that’s another factor to consider.
Neville Hobson: Yeah. I read Scott Shambaugh’s post on his own blog, where he responded to it. The headline was “An AI agent published a hit piece on me”. And it’s long. I mean, it’s detailed; it requires effort to read it all. But it’s quite extraordinary that this prompted him to write such a detailed account, complete with charts and images and a whole ton of stuff. It’s got over 100 comments, and from what I saw glancing through, the mix is: some do believe the bot, but most are sympathetic to him as the subject of this attack. There’s your indicator of what’s likely to happen to others. And this is not some celebrity or someone who’s in the news all the time. This is a developer, and as you said, a volunteer, who was subjected to this attack. I think it’s a sign of the times, basically.
What a story, Shel. So let’s move on to our next story, which is still on the AI theme; we haven’t got to the non-AI stories yet. This one, though, was in the news quite a bit in the past few days, regarding Accenture, the big consulting firm. To put it in context: over the past few months, we’ve talked a lot about AI adoption. This story takes that conversation in a much sharper direction. A number of media outlets, I saw in particular the Financial Times and the Times here in the UK, reported that Accenture had begun monitoring how often some senior employees log into Accenture’s internal AI system, and that “regular adoption” will now be a visible input into leadership promotion decisions. In other words, if you want to make Managing Director at Accenture, your AI logins now matter.
This isn’t just encouragement. It’s measurable behavioral enforcement. That’s my take on it. The company says it wants to be the reinvention partner of choice for clients. Its share price is down more than 40% over the past year. And its CEO has previously said staff unable to adapt to the AI age would be “exited”. So this move sits at the intersection of technology, performance management, and commercial pressure. The reaction is telling though: in the Times comments, many readers argue that logins measure activity, not impact. Some describe it as corporate panic. Others question whether this justifies expensive AI investments.
On LinkedIn, the debate is much more nuanced, but still skeptical. In a post by James Ransom, readers are asking whether counting tool usage measures capability or simply compliance. One commenter put it neatly: “Clients pay for the house we build, not for how many times we touch the saw”. And there’s a deep tension here. Junior staff may adopt AI fastest, but senior leaders are the ones expected to exercise judgment. So what exactly are we rewarding? Experimentation, fluency, governance, or visibility? This isn’t just about Accenture though, it raises a broader question for organizations everywhere. When AI becomes part of performance criteria, are we measuring meaningful transformation or just digital theater? When AI becomes part of the promotion algorithm, are we rewarding genuine leadership capability or are we just counting digital footprints and calling it progress? Your thoughts, Shel.
Shel Holtz: I have a lot of thoughts on this. I have read a number of items on this. In fact, it was on my list of stories to include. And when you included it, it left me free to pick other stories. But I need more information from Accenture on this. First of all, have they added the use of AI to job descriptions and to promotion criteria? Or did they just issue a memo saying that this is what we’re going to do? If they have made it clear to everybody that this is an expectation of the organization, then I am less troubled by it—not untroubled, but less troubled than if it is not in job descriptions.
Neville Hobson: So to your point, by the way, according to the Financial Times, they saw a memo—like literally an email about this. So that seems to be how it was communicated.
Shel Holtz: I’d still want to go into their HRIS and see if their job descriptions have been updated. Obviously, we don’t have access to their HRIS, but I’d be very curious to know if it’s in the job descriptions for those senior people. The next thing is: have people received job-level training? And by job-level training, I mean, have they been trained on how to use AI to do the things that they do in their jobs? Not how to write a good prompt, not how to access these things. Across the board, generic training for every employee is fairly useless when it comes to AI. It needs to be task-level, position-level training. Have they done that?
If the expectation is that you will log into the AI tools, even though we haven’t provided you with training on what to do with them once you’ve opened them, that would be troubling, but I don’t know. Generally, organizations are struggling with adoption. It’s getting better, and it seems to be getting better organically as employees slowly adopt it, maybe in their personal lives, and then see the utility at work. Could be that they find one thing to do with it at work. Maybe somebody else at work told them, “Hey, this is what I did,” and you go, “Wow, I can do that. That would be great”. But it seems to be largely organic, the adoption in the workplace.
But companies do want their employees using these tools. They’re making tremendous investments in them. And whether this is the approach to take to get employees to adopt—again, I think it depends on whether the training is there and whether this has been woven into systems or if it was just a missive that was sent out to employees as a one-off without communications jumping into the breach to say: Here’s why, here’s where you can go get the training, here are resources that are available, here’s how our leaders are using it. By the way, that’s a big deal in adoption rates: in the organizations where leaders are transparent about how they’re using it, employee adoption tends to really take off because, first of all, leaders are leading by example. Second, employees are getting a taste of what people can do with this. And third, it’s explicit permission to use this for a lot of people who are worried about being seen as cheating or “Gee, do we really need you here if you can do your job with AI?” When you see your leaders doing it, if they can do it, I can do it. So this adoption is important. I’m not sure this is the approach to take, but I would need more information before I could render a final judgment.
Neville Hobson: Well, yeah, I have a memory about this. I’m sure we discussed it in an episode of For Immediate Release last year: Accenture rolled out a corporate AI training program designed, from what I’m reading here, to reskill its entire global employee base of 700,000 employees.
Shel Holtz: I think we did, yeah. I worry about that. That sounds generic to me.
Neville Hobson: So they’re training the entire workforce on agentic AI systems, according to this article. The CEO, Julie Sweet, announced the initiative during a Bloomberg interview. It’s an expansion of the company’s earlier program that prepared half a million staff members for generative AI work. So I think that would answer your concern. The detail we don’t have, but whatever it is, they didn’t just send a memo saying, “We’re going to check you out”. This is part of a huge program that will be running for a year at least. Don’t know the details.
Shel Holtz: Right, but it does sound like it’s everybody being trained on the same program. It doesn’t sound like it has been tailored to departments or functions. We don’t know. That’s my point. Yeah.
Neville Hobson: That’s right, Shel, we don’t know. No, we don’t. Well, I think it’s likely that this is well thought through and being well executed. I can’t imagine the company is going to invest serious time and money in training 700,000 employees on something that isn’t very well thought through.
Shel Holtz: Well, that’s the thing: when I hear that they’re training 700,000 employees, I struggle to see how, within that timeframe, they have developed discrete training agendas and curricula for different jobs.
Neville Hobson: Well, it doesn’t say how they’re doing this. Is it all at once or is it phased? Again, I have a feeling from what we discussed last year that it’s a phased program of training. So I would err on the side of: they’ve got a structure in place and they’ve thought this through. This is another phase where, and I’m guessing here, they’re seeing, and I see this in some of the anecdotal comments I’ve read online about this, that senior employees in particular are very hesitant to use this, while the younger ones are far more eager to adopt it. They don’t like that situation, so they’re tying it now to this. Again, I’m guessing; I don’t know the rationale behind it or what goals they’ve set. But personally, I would say we’re going to see more of this in organizations. Whether you’ll get a mix of those that just send a memo saying, “For now, we’re going to check your logins,” or whether it will be part of a major program effectively rolled out across the organization, it’s a sign of the times, surely. Alongside the negative stuff we talked about, there’s this.
Shel Holtz: Yeah, and I’m not sure that monitoring logins to AI is an effective way of determining adoption. I mean, if I found out that was a promotion criterion, I would just log in a couple of times a day. I could do something else after I’ve logged in; I don’t actually have to use it.
Neville Hobson: No, I’m sure it’s not. I would imagine that the writers I’ve seen, even at the FT and other publications, are taking a bit of license here. They don’t know. I don’t believe for a second that they’re going to say, “Well, look, you, Mr. Aspiring Executive Vice President, or whatever the job title is, you’ve only logged into the AI system 58 times. You’re not going to get that promotion now”. I can’t imagine that’s going to be the case.
Shel Holtz: I wouldn’t put it past a corporation. I would be looking more for outputs. I would be looking for productivity gains. And by the way, there was research recently that showed that the productivity gains from AI are being accompanied by increased anxiety and more work. It’s not reducing the amount of work people do, it’s actually increasing the amount of work people are doing.
Neville Hobson: No, I don’t believe it at all. Right. Yeah, I’ve seen those reports. But that’s part of the big picture of the changes that are happening with regard to AI. There are others too. I think you’ve got a story talking about that: take-up in companies is not as high as some people are saying. Who do you believe? It’s not uniform everywhere in the world, but I think it’s part of the direction of travel. All of this is going on and it’s messy; it’s not uniform. Stuff like this gets attention in the business press. The FT is a well-regarded publication, and others have posted about it too. And there’s no consistent story, I have to admit. I’m certain we did talk about this last year. I’ll have to look it up.
Shel Holtz: AI is having an impact on communications directly. There’s a new report from Implement Consulting Group called “Rewriting Change: Quick Wins, Wider Gaps”. It’s based on their 2026 Change Communication X-ray study and the headline finding should make every communicator sit up straight: the gap between how satisfied top management is with change communication and how satisfied employees are has widened considerably. In 2022, the gap was 13 percentage points. In 2024, it was 22. In 2026, it’s 30 points. That’s the largest gap they’ve ever recorded. While leadership satisfaction keeps rising, employee satisfaction is dropping. That’s the backdrop for AI’s rapid integration into workplace communication.
According to the report, four out of five respondents use AI weekly for communication tasks, and 43% use it daily. 83% say it helps them generate communications more efficiently and at larger scale. So yeah, the efficiency gains are real. Drafts, summaries, FAQs, translations—all faster, all easier. But the report makes a compelling argument: AI isn’t just helping us write, it’s rewriting the system of communication itself. That’s where things get really interesting. The authors frame the challenge around three themes: accountability, trust, and meaning.
Let’s start with accountability. AI use is widespread, but largely unsystematic. People are using it for ideation, for language polishing. 66% say they’re using it for ideas, 54% for language improvements, but often without shared guardrails. First drafts become final drafts because they sound right. That’s a pretty dangerous shortcut. One of the experts cited in the report talks about AI shadowing—employees using unapproved tools because they’re familiar and convenient. Speed goes up, governance lags behind. Sensitive data slips into prompts. Biased outputs scale. Official-sounding announcements miss legal nuance. The metaphor they use is a good one: it’s like self-driving cars in the early days. The system works beautifully, until it doesn’t. And when it fails, you better have a human paying attention.
Next, there’s trust. What surprised me in the data is how comfortable people say they are with AI-generated content. 45% trust AI-generated information as much as human-written content. 61% say it doesn’t matter whether a human or AI created the message as long as it’s useful. But—this is critical—that acceptance evaporates as the stakes rise. If you look at things like performance feedback, terminations, crisis communication, messages from the CEO, those are the top categories employees say should never be heavily AI-generated. And just over half, 51%, say they feel less personally connected to leaders when they know AI played a major role in creating a message. Only 40% of top and middle managers perceive that drop in connection. There’s that gap again. AI may be acceptable as an assistant, but in consequential moments, people want to know who’s driving.
And finally, there’s meaning. This is where I think the report hits closest to home for us communication professionals. AI increases volume and speed. It multiplies words, but it doesn’t automatically create understanding. In fact, 87% of respondents report that major changes were poorly communicated. Employees describe change communication as one-way, too distant, impersonal, and not well-timed. Nearly one in five can’t connect corporate communication to their actual work. This is a relevance problem. One of the experts in the report makes the point that communicators’ roles are shifting from content creators to sense-makers. Now that resonates with what we’ve been discussing on this show for years.
The value isn’t in producing more polished messages, it’s in curating, contextualizing, and helping people answer the question: So what does this mean for me? Now, the short-term gains from AI are undeniable, but the long-term risk isn’t that AI will take over communication; it’s that we’ll lose connection—that leadership will feel more confident while employees feel less understood. The report ends with a provocative question: In a future shaped by AI, what do we wish we could say one day about change communication that we can’t say today? For me, the answer is that we used AI to amplify clarity and humanity. AI can prepare the ground and accelerate the drafting. It can help with structure and scale. But trust, accountability, and meaning? Those still require a human being who’s willing to stand behind the words. And if we don’t pay attention to that widening gap, we may discover that while our messages are moving faster than ever, they’re landing with less impact than ever before.
Neville Hobson: Yeah, you’re right. This does reflect what we’ve been discussing for some time. So what I take from this is the humans are the issue, not the tech, not the tools.
Shel Holtz: Yeah, absolutely. As with any tool, you can misuse a tool.
Neville Hobson: Yeah, it’s interesting. Surely the path is clear these days, is it not? I keep seeing people talking about this in a broader sense, not the specifics of this report: humans need to step up to the plate and recognize their value as the ones who can explain the whole damn thing. So you will use an AI tool to do the research that leads you to create a report, for instance, and you then need to help others understand the situation; all those points you enumerated need explaining. And if people are saying that change communication, for instance, is poorly done, well, that’s down to the communicators, I would say: whoever wrote the report and then sent it out and executed on it. Did they train? Did they have a plan in place for how they were going to do this? So I’m surprised that this topic, which is talked about so much, is still being presented as if it’s a new thing you guys need to pay attention to. We’ve talked about it for a long time, and not just us; communicators generally have been discussing this for quite a while. So there’s something missing if we’re still trying to set out the simplistic 101 approach to how you do this. That’s what surprises me.
Shel Holtz: Yeah, I think this rests in strategic planning, to be honest. If you develop a strategic plan for a change the organization is making, it starts with the goal: What do you want? What does it look like if you’ve succeeded? And it proceeds through strategies and objectives and tactics. And you measure. So where we are today, based on this report, is that a lot of people are seeing these highly polished outputs from AI and going, “Wow, that’s really good. Let’s just send this.” And we’re throwing the strategic plan in the trash. We’re not looking to measure how well employees understand it. We’re not looking to see if employees are able to connect it to their day-to-day work.
The fact is that AI writing is getting very, very good. All the people who say, “I can always tell when it was written by AI,” I still maintain that’s a bad prompt. But these days, even a bad prompt can produce some pretty polished output. And if we look at that and succumb to the allure of this gloss that we get from the AI output without looking at what it really takes to develop that trust and meaning and accountability that employees recognize so that they understand what this change means to them—what’s expected of me, what’s in it for me, what changes around here—then it’s a disservice. And I think we do have to determine where we gain advantages from using AI, as you mentioned earlier, from the research, certainly. But we also have to look at where the AI does not do well and—yeah, trust, accountability, it still doesn’t do well. And if we want employees or frankly, other stakeholders to respond to the messages that we are sending and to engage in a two-way communication, relying entirely on those polished outputs and saying, “Wow, that was a great job. We’ll send that out, communication done”—that’s a problem.
Neville Hobson: It is a problem. It’s a severe problem. And my message would be: do not be like Deloitte and do something like that. We reported on that last year. Deloitte, the Big Four accounting and consulting firm, had contracts with the governments of Canada and Australia for research reporting, with six-figure fees involved. And they sent the reports to their clients in Australia and Canada. And someone, a researcher, found they were riddled with hallucinations, as they’re now termed. Not only that, there were obvious errors: URLs not working, 404 errors, and no one checked it. I’m thinking of what you just said: “Oh, this is great, the output. Let’s send it to the client and send the bill, 200 grand or whatever it might be.”
It amazes me not only that people think that’s a good way of doing this, but that there are no checks and balances in the organization, no milestones in place to prevent that kind of error. The reputational damage for Deloitte, I would argue, was seriously bad, although maybe people read it, go “tut tut”, and move on, and no one really cares at the end of the day. That’s a bit of a cynical view, of course. But I think it illustrates something we’ve talked about and will continue talking about: the elements AI can’t do, related to things like trust, reputation, and deeper understanding, are what humans do. The AI is really good at the research, assembling all the facts, summarizing lengthy documents, zeroing in on the main issues, and making recommendations. That’s what it’s good at. That doesn’t mean you can say, “Hey, I’ve got this report from ChatGPT or this bespoke tool we use, it’s 65 pages long, it’s great, just what the client needs, and we’ll send it.” That’s absolutely stupid, frankly.
Shel Holtz: I have a custom GPT. It took me about five hours to build—I’ve mentioned it before. It’s a senior communications consultant. I don’t have the budget for a human one, so I created one. And I had a need to develop a strategic plan in short order. With limited time and resources, I had a first draft produced by my custom GPT senior communication consultant, and it did a very good job. I mean, it needed more work from me, but it did a passable job of developing a good strategic communication plan. What struck me as I was reviewing and revising the plan, though, was that it had created a plan that it—or any AI system—could not execute entirely itself. It required humans. It’s almost like it recognized that for a communication plan to be strategic, people needed to be involved.
At the beginning of this report, I mentioned that the consulting firm behind it said we need to move from content creators to sense-makers, meaning-makers. I think that’s exactly right. When we use AI to generate content, it takes more than just verification. We have advocated on this show for hiring content verifiers, AI verifiers, in companies, and I stand by that; I think it’s important. But this goes beyond that. It’s not just verifying that the LLM didn’t hallucinate, or correcting it when it did. It’s not just verifying that the URLs all work, or finding the right ones if they don’t. It is asking the question: Will employees make meaning out of this that is relevant to them in their jobs? And if not, what do I need to do to make sure they can? I don’t know how many communicators are doing that right now, because the allure of the polished output the AI creates is hard to resist.
Neville Hobson: Yeah, I agree with you. Although, personally, frankly, Shel, I think cases like Deloitte’s are edge cases—this is not the norm. I don’t know of other mistakes made at that scale, and I do pay attention to this. I also believe that most responsible communicators are becoming more experienced in recognizing the benefits of using an AI tool alongside them in their daily work. So it’s not “Let me just get the chatbot to summarize this document once or twice a week.” No. Every single day, you are making use of either the corporate tool created in your organization or a professional license on ChatGPT or Gemini or Claude or whatever it might be, as an assistant to you.
There are plenty of publications out there that will guide you on how to do this. The best one that comes to mind is Ethan Mollick’s 2024 book, Co-Intelligence, which is really very helpful in recognizing that reality. You will benefit from understanding how this works. That means you are less likely to just think, “Hey, great output,” and off you go. You will know: yes, okay, I’ve done the verification; I’ve checked all those links; now I need to go further and look at it from a “Will they understand this?” perspective. And you ask questions back of the AI system. I do that maybe two or three times a week: I will use it to create something or summarize a report, and I will then go back with a bunch of questions: “When you said this, what did you mean by that? Have you got a source to cite for what led you to think that?”
And I find that exceptionally useful—this is my perception, of course—in strengthening my confidence that the AI isn’t a raving loony that’s going to hallucinate and tell lies all the time, although I realize they do that sometimes. And you’ve got to remember it’s not a person you’re talking to. This is nothing other than a bit of software on a server somewhere that pattern-matches things—let’s not get into that conversation, because I find it very distracting. The important thing to think about: communicators who recognize this are benefiting; those who don’t are suffering. That, in my opinion, is a strong position to be in. Communicators who know about all this can focus on helping educate other communicators on how to do it properly. That seems to me a simple way forward. Like I said, there are books, publications, newsletters, articles—you name it—telling you about all of this.
Now, where do you go to find all these? Are you on your own, totally, to wade through God knows what online? No, there are places to help with that. I’ve got something in mind, which I’ll talk about another time, that will help there. And I think we are at a stage—notwithstanding the agentic AI that slags off a developer in public and you don’t know whether it’s true; more of that is likely—where the way these AI tools are developing goes way beyond prompt engineering, as the phrase used to be. You don’t need that level of detail in many prompts—I’m not saying all, because the general rule applies: it depends on what you’re doing, and more detail might actually be beneficial to the output you get from the chatbot. But the simple, plain-English conversation, which I use a lot, is usually good enough. It’s a bit like that 80% rule: 80% is good enough, and we can live with that, depending totally on what you want and what you’re doing. So we’re at a stage where there is so much to see and read online about this that it’s hard to know where on earth you would start. That’s a key thing we need to help other communicators understand: How do you start? We have solutions to help you do that.
Thanks a lot, Dan. That was a really comprehensive report—you packed a lot into it. I’ve got a couple of things I wanted to mention. It’s really interesting what you said about Bluesky and commenting, and indeed the clamor for an edit button. Boy, does that remind us of Twitter back in the day, does it not? People want an edit button. But you mentioned some of the technicalities of why that’s a major issue with the protocol—problematic from a technical point of view, and I get that it’s technical. My question is this: How has Threads managed to do it without any problems at all? Because Threads also runs on an open protocol—ActivityPub in its case, rather than Bluesky’s AT Protocol—that enables you to share stuff to the Fediverse, yet you can edit a post on Threads. I think you’ve got 15 minutes before that expires and you can’t do it anymore. And I do that quite a bit. For instance, when I share posts about the next For Immediate Release episode, I usually forget to include the URL or add your handle to the post, so there’s a quick “damn,” and I go back in and correct it. I find that quite useful. So how come they’re doing it without any issues—or have there been issues that I just don’t know about? That’s my question on that one.
The other one, about WordPress, is really interesting too. I’ve been following that. I don’t use WordPress actively anymore—not for over a year now—although I still maintain my archive, so I’m in the back end quite a bit, updating stuff and so forth. But interesting what you said. I read recently—I think it was in TechCrunch—that hosted WordPress, that’s WordPress.com, has just launched an AI assistant that lets you literally build your site with voice prompts and drag-and-drop across the screen, asking the assistant to complete the task. That to me seems a huge step forward. I wish that would come to Ghost, which is where I am now, but it’s surely an evolutionary step that is definitely going to come. I’m curious what you think about that, Dan. The overall picture on WordPress, though, is pretty interesting. So thanks for including that.
So, next story—the first of our non-AI stories, so you can take a breather from AI for a bit. Back in January, in For Immediate Release 496, one of our midweek episodes, we talked about the PRCA—the Public Relations and Communications Association—and their move to redefine public relations. The organization proposed a new definition that positions PR as a strategic management discipline.
Shel Holtz: First of two.
Neville Hobson: Concerned with trust, legitimacy, volatility, and long-term value creation. It’s ambitious. It’s modern. It clearly aims to elevate the profession. But since then, the reaction’s been rather muted, from what I can see. There hasn’t been a groundswell of endorsement across the wider communication landscape. Okay, so they published this specifically asking PRCA members to comment on it. So if you weren’t a member, you couldn’t access the part of the website where you could leave comments. On LinkedIn, various posts—much of the commentary feels polite, even respectful, but not energized.
So let’s hear the PRCA’s new definition. And this is the portable one, I suppose you’d call it: “Public relations is the strategic management discipline that builds trust, enhances reputation, and helps leaders interpret complexity and manage volatility.”
Shel Holtz: The executive summary.
Neville Hobson: “Delivering measurable outcomes, including stakeholder confidence, long-term value creation, and commercial growth.” Now, I’ve seen some anecdotal comments along the lines of “Wow, that’s a mouthful.” Interesting. And I had to take a breath, by the way, to complete that one single sentence. I read a really interesting post by Helen Dunne in Corporate Affairs Unpacked, where she says she showed the definition to several senior communicators. Their reactions ranged from “word salad” to “corporate buzzwords” to the rather weary “I’m too old for this.” I like that one.
Her bigger concern though isn’t the language; it’s representation. She argues that the definition doesn’t reflect the broader industry. The PRCA represents agencies. Many of those agencies are focused on branding, marketing, media relations, creative services. Only a small proportion of practitioners would describe their work as helping leaders interpret complexity at the strategic management level.
Shel Holtz: Ha.
Neville Hobson: Helen cites PRCA’s own state-of-the-sector data, which says 15% are in branding and marketing, 13% in communication strategy, 12% in corporate PR, and only 3% in reputation management. So that data undercuts the elevated framing, she says. So is the PRCA describing what PR is or what it wishes it to be? In my own post on this, which I did last week, I argued that the idea of redefining PR is worthy. But unless the CIPR, PRSA, IPRA, IABC, and others move in the same direction, we simply add another definition to a growing list, which raises a deeper question: Are we trying to clarify the profession or to rebrand it? If every major industry association defines public relations differently—and they do, frankly, even though some look similar—is the real issue the wording or the fact that we’ve never agreed what business we’re actually in?
Shel Holtz: After we reported on this, I was thinking that if anybody is going to succeed in pushing a new definition of PR that is widely adopted, it would be the Global Alliance. Because if the Global Alliance pushes it, all of their member associations, like PRSA and IABC and all the rest, are more likely to adopt it, or at least be aware of it. I don’t know what kind of influence PRCA has to push this, but if you open any public relations textbook, you’re going to find that author’s or those authors’ definitions of PR. You’re going to find a different definition in every PR association.
The one thing that troubles me about PRCA’s definition is that it says nothing about the relations we have with stakeholder groups. And it’s right there in the name of the profession: public relations is about managing the relationships, the relations, between an organization and its stakeholders. That’s absent from the definition. In fact, from the definition I wouldn’t know it had anything to do with all those stakeholders and the way the organization interacts with them. That said, I find the reactions you have collected interesting, notably for their lack of enthusiasm and excitement. I certainly credit PRCA for undertaking this—I think it’s a worthwhile discussion to have—but it really doesn’t seem like it’s going anywhere, does it?
Neville Hobson: Well, it’s interesting. I mean, you mentioned the Global Alliance. I wrote about that in my post last week—that they’re well-placed to, let’s say, convene all the major associations, if such a thing were even possible, to arrive at a single, concise definition supported by shared principles—that part of their stated mission is to unify the public relations profession. So wouldn’t that be a good place to start? It wouldn’t be easy; consensus-building rarely is, I said in my post, really. But if unification is the goal, agreeing how we define ourselves would seem a logical place to start.
I think the PRCA, like you said, Shel, has made a really good move in addressing the topic. The current definition stems from 30 years ago—tweaked in between times—when it was all about press releases and media relations and things like that. This effort from PRCA brings it up to date; it’s a much more contemporary definition, more in tune with what communicators do. Yet, like you said, there’s been little enthusiasm for it. In fact, it reminds me of a post I saw on LinkedIn recently—I can’t remember whose it was—where someone had done a word cloud of descriptions from, I guess, a dozen PR firms of what they say they do. Lots of words in there; “public relations” isn’t mentioned at all.
So are we at the point where we don’t know what business we’re in? Should that discussion be broadened out more widely? I don’t think PRCA is the organization to do that; something like the Global Alliance is much better placed, I believe. Now, I’ve not seen them commenting on this. I’ve not actually seen any of the acronym soup I put in my post—CIPR, PRSA, IPRA, IABC—commenting on this at all. That says a lot, I think—that no one is commenting on it. And the comments I have seen, as you mentioned, don’t really express much enthusiasm. Gerry Corbett, a good friend of ours who used to be, I think, the president of PRSA in America…
Shel Holtz: He was.
Neville Hobson: …did comment, and he says this is way too long, still needs to be simplified, and needs to talk about relations, like you just mentioned. The last time this topic was addressed in a meaningful way that embraced other associations and gained a lot of traction—even if nothing ultimately happened—was in 2011–2012, when the PRSA proposed a new definition. They offered it to everyone, saying, “What do you think of this?” It wasn’t just PRSA members, which I think was the smarter move, frankly. A lot of debate happened; others, like the Arthur Page Society, were involved in commenting as well. So it was widely embracing. But ultimately nothing happened—a lot of opinion, but it didn’t go anywhere.
So here we are 15, 16 years later. Now it’s coming up again. The cynical view—and I’ve seen some people commenting on this—is that about every decade, the industry goes through all this: “We need to redefine the definition,” and nothing happens. That’s a bit of a cynical view. Will this be different? Well, PRCA has done a good job in taking a very first step that has generated some response, even without much enthusiasm. Can it go anywhere? I guess we will see in time.
Shel Holtz: We will see, but I have to say I am skeptical. Even if they adopt it, I don’t see it being widely embraced by the entire public relations and communications community. Part of the problem is that it’s still hard to define public relations as a profession when anybody—as I have said 50,000 times on this show and elsewhere—anybody can hang out a shingle and say, “I am a public relations practitioner,” while abiding by none of the principles, none of the best practices, and none of the models, engaging in unethical behavior just to get to the final result a client is interested in. Until we can coalesce around the idea of being a profession with a shared set of principles, a shared set of values, and a shared set of frameworks and, you know, behave like a profession… Think about accounting. Think about law. Think about medicine. Think about engineering. These are professions where there are certain assumptions that wherever you are in the world and whatever level you’re at—whether you’re with a consulting firm or a corporation or you’re an independent consultant—you all agree to these things.
The communication/public relations industry is nowhere near that. I know the Global Communication Certification Council aims to change that, but that’s a long way off. It’s still in the process of separating from IABC, the idea being that other associations are not going to adopt an IABC certification, but if it’s an independent certification, they certainly might.
Shel Holtz: But the more people who seek and obtain certification, regardless of the association they belong to, the more likely the profession will be to coalesce around those guiding principles. So that’s my wild dream, but we’re nowhere near that right now. And even as I say, if PRCA settles on this definition, I don’t see it being widely adopted elsewhere.
Neville Hobson: No, if it’s just the members settling on it, then it’ll just be another one among many. If you Google “define PR,” as I did a number of times—on a machine where I’m not logged in, so it’s a clean search—it pulls up at least a dozen different definitions. Indeed, all the professional bodies say something slightly different. So this will just be another one. It may get picked up by some, but I can see greater confusion: you start using this definition, and someone who reads your stuff or is involved with you in some way Googles “define PR” and gets something entirely different. So which is it, then? You’re saying it’s this, and these guys are saying it’s that. It doesn’t help.
Shel Holtz: Well, collect every definition from every association and from every textbook and from every agency and feed them all to Claude or ChatGPT and say, “Create a single definition that accounts for everything that you see here.” See what it comes up with.
Neville Hobson: Well, you could do the whole thing end to end—the AI system does the research as well. That could be a good start.
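[Editor's note: the experiment Shel and Neville describe—collect the various definitions of PR and ask a model to synthesize one—could be sketched in a few lines of Python. The sample definitions and the `build_synthesis_prompt` helper below are purely illustrative, not any association's actual wording or a real API; the resulting prompt could be sent to any chat model.]

```python
# Illustrative sketch: gather several (placeholder) definitions of PR
# and build one prompt asking an LLM to synthesize a single definition.

DEFINITIONS = {
    "Association A": "Public relations builds mutually beneficial "
                     "relationships between organizations and their publics.",
    "Association B": "PR is the strategic management of communication "
                     "between an organization and its stakeholders.",
    "Association C": "Public relations is about reputation: what you do, "
                     "what you say, and what others say about you.",
}

def build_synthesis_prompt(definitions):
    """Assemble one prompt listing every definition, then ask for a synthesis."""
    numbered = "\n".join(
        f"{i}. ({source}) {text}"
        for i, (source, text) in enumerate(definitions.items(), start=1)
    )
    return (
        "Below are several definitions of public relations from different sources.\n"
        f"{numbered}\n"
        "Create a single, concise definition that accounts for everything you see here."
    )

prompt = build_synthesis_prompt(DEFINITIONS)
# The prompt would then go to a chat model of your choice; the API call is
# deliberately omitted so the sketch stays self-contained.
print(prompt)
```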
Shel Holtz: Of course, you would use the AI to do the research too. Good exercise. Well, here’s the headline from a Substack post Paul Ferbredi published recently: “I bet you couldn’t show the ROI of your corporate podcast if your job depended on it.” That line isn’t just provocative; it highlights a real challenge many of us in organizational communication face as audio content increasingly becomes part of the mix. Ferbredi’s key point—echoed in the comments that were left on his piece—is that too many corporate podcasts are, frankly, vanity projects. People launch them because everyone’s doing a podcast or because executives think their voice should be heard. But they’re not always clear about what the podcast is supposed to achieve. Back to that whole idea of strategic planning. And if you don’t define success clearly, then yeah, proving ROI is nearly impossible.
So let’s unpack that a bit. One of the problems is that we often measure the wrong things. We fall back on downloads, subscriber counts, chart rankings—all output metrics that tell you how many people pressed play, but almost nothing about what that listening meant for the business. That’s why critics like Paul call ROI “unshowable,” because too often we’re not measuring in ways that link back to business outcomes. But here’s the nuance: it is possible to measure ROI if you define it differently at the beginning and tie it to concrete goals. According to frameworks in the B2B podcast space, traditional vanity metrics like downloads or rankings simply don’t cut it, especially in the B2B world. What matters is whether episodes generate pipeline influence, lead opportunities, and business impact that your CFO can understand. That means integrating your podcast data into your customer relationship management and tracking things like listener engagement that turns into demo requests or sales conversations.
Put differently, ROI for a branded or corporate podcast isn’t just a ratio of dollars spent versus dollars earned in direct revenue. Some of the most valuable returns are indirect. And I would argue that means we need a different label than ROI, which is the ratio of dollars spent to dollars earned. Brand awareness, trust, thought leadership, deeper audience relationships—these are the kinds of outcomes that support recruitment, retention, stakeholder alignment, even executive visibility. Agencies and analytics platforms remind us that these outcomes are real. They just aren’t easily captured by simple metrics, and certainly not as ROI.
Experts also point to sophisticated ways of measuring impact—things like brand lift studies, pixel attribution, long-term tracking of customer behavior. These techniques compare people exposed to the podcast with a control group or follow listeners through the customer journey to see if they visit your website and engage further or convert into customers. That gives you measurable evidence that listening isn’t just passive noise; it’s influencing the business. And importantly, not all podcasts are trying to directly generate sales. Some are designed to build relationships with potential customers, with internal audiences, with partners. If your podcast goal is to deepen customer trust or make your brand more visible in your ecosystem, then your ROI framework has to reflect that. Clear goal-setting upfront before the microphone is ever turned on is what’s most important.
So what do we take away from Paul’s challenge? First, he’s right that many corporate podcasts fail ROI tests, but mostly because they aren’t giving themselves a fighting chance to succeed. ROI isn’t inherent to a podcast; it’s a function of how you define your goals, how you measure your outcomes, and how you connect the dots between listening and real-world results. When we treat podcasts as strategic channels with measurable outcomes—not just vanity projects—we not only can show ROI, we can use the ROI to make better decisions. To summarize this: podcasts can have measurable ROI, but only when we stop obsessing over downloads and start thinking in terms of business impact.
Neville Hobson: Yeah, you’re absolutely right in that conclusion. It’s a really good piece Paul wrote, I think—even though I have to say part of his rationale is comparing audio with text: isn’t text better than audio? Set that aside, though, because his analysis is really well done. It rings bells with my experience in B2B podcasting, which I’ve done for a client recently, because it is all about the goals. Yet the obsession has always been—from way back; it’s probably diminished quite a bit now—“How many downloads do we get? What does Apple Podcasts say?” And then you get down rabbit holes in the analytics reports about which apps delivered the clicks to your podcast site—serious eye-glazing territory unless you’re the techie who needs to know that kind of stuff.
I think the goals are key, absolutely key. And you made a very good point that it’s not always just about ROI, meaning money, the return on the investment—how many leads it generates that lead to sales, perhaps. Although having a podcast that is a lead generator is great. There’s a goal when you say, “We want this episode to deliver us 16 inquiries about a widget we’re selling,” in which case the whole chain has got to be well thought through. It’s not good enough to just stick your podcast up there with a link on a podcast page on your website. When someone clicks through to your site and gets to the landing page—what happens? How do you track that? Enterprise firms in particular have access to really effective tools that map and track the end-to-end journey of visits to the site: where they came from, who they are—particularly if they’ve identified themselves or they’re existing customers. All of that has got to be part of your structure.
I had a conversation with someone about six months ago about starting a business podcast, and I’m getting déjà vu reflecting on part of it, because the goal emerged literally as a by-the-way at the very end of the conversation. I remember thinking at the time that a podcast was not what they should be using to achieve that goal. So you’ve got to have the right goal. Yet I also recognize that with vanity projects, there’s not much you can do, I suppose, if the person you’re talking to is convinced he or she wants to do this no matter what. The question I would ask as a communicator is: Do you want to get involved in something like that, whatever the theme might be? Podcasting is in a different place than it was even five years ago, I would argue, in that most people I talk to now think of video first, not audio. And we do a video of our audio conversation. We don’t do much with the video; I stick it up on YouTube. So if you want to look at two talking heads on screen, you can.
Shel Holtz: Well, yeah, the video gets recorded whether we want it to or not. So we might as well use it for people who prefer to get it that way.
Neville Hobson: Right. We might as well use it. Exactly. You can see our facial expressions—when I go like that, you can see it. But I think Paul’s post is worth reading. If you’re thinking about a podcast, the thought in your mind should be: start with the goal first. Don’t think about how many downloads you’ll get and how you’re going to be like Joe Rogan. I often think those comparisons—when people say “Joe Rogan’s podcast got 65 million downloads”—are completely irrelevant to what you’re likely to achieve with a B2B podcast. Unless, that is, you have a very big budget.
Shel Holtz: Yeah, and there are goals that you can assign to a podcast that have nothing to do with ROI, nothing at all. It could be that you are trying to change the perception of your organization: “We’re not a stodgy organization. We have that reputation. We need to change it. Let’s get a fun, loose podcast out there so that it starts to move the needle in the other direction—that this would be a fun place maybe to come work”. There are podcasts that are aimed at attracting new recruits to the organization.
Neville Hobson: Right, we mentioned that, yeah.
Shel Holtz: There are podcasts that are aimed at promoting thought leadership. And of course you need to know what your goal for thought leadership is, but none of these are going to be directly tied to new revenue. That would be really, really hard to do.
Neville Hobson: You tie it to other goals that you could measure. So you’ve got to have that. Yeah.
Shel Holtz: Exactly. And you can measure that, as long as you know what it is at the point where you start. You mentioned that Paul made the point: isn’t text better? When we started this podcast, when there were about 400 podcasts, most podcasts talked about podcasting. That was the theme—every podcast was “Let’s talk about podcasting.” And there was a lot of conversation back then about why audio is better. There were some critics; I remember one person said, “I can read five articles in the time it takes to listen to one podcast.” But my answer was, “Yeah, but I can’t read any articles when I’m driving my car.” For me, audio—and this is not true of video, by the way—is the only form of media available to us that people can pay attention to while they’re doing something else, whether it’s folding laundry or working out or walking the dog or driving somewhere or mowing the lawn. Whatever it might be, you can listen and absorb information. You can’t read; you can’t watch a video.
God help me if I ever see anybody driving and watching a video at the same time. I actually did see that. I saw somebody had their phone on the car and he had a video playing. It wasn’t the road ahead of him or behind him; it was a TV show or something. And I went, “My God,” I mean, that’s worse than being on your phone. But I continue to maintain that that is true: the value of audio is the ability to listen when you’re doing something else. And there’s also been studies about emotion from hearing somebody’s voice—that you’re able to connect with that much more quickly than reading a quote. Where this is leading me is that if you are going down the road of producing a podcast, know why that format is of value to you. Why is that the approach to take in terms of the goal that you’re trying to achieve? Is that emotional connection important? Are you trying to reach an audience that has limited time and may listen to your show when they’re doing something else?
Finally, a podcast could be part of a larger campaign. It can be just one element. Could be it’s the audio version of something that we are producing for people who aren’t going to partake of another element of this that was produced. I am wrapping up work on a book that—the proposal is almost ready to go. There is an agent waiting to look at it. It’s probably going to get published. When it’s published, the proposal calls for there to be a Substack-like newsletter to go along with this and a new podcast that I am going to be launching with Steve Crescenzo on internal communications right here on the For Immediate Release Podcast Network. It’s just one element, but the main piece is the book, right? It’s—it’s not the podcast. The podcast is supporting.
And one more thing: you talked about podcasting being in a very different place today than it was five years ago. One of the things that defines that is that you now see news made based on what somebody says on a podcast. It’s no longer just what they said on an interview show or in a speech—well, it is, but in addition to that, now it’s “on a podcast where he was interviewed, this politician said this” or “this business leader said that.” So another reason to podcast might be as a way to get quotes out there that could get picked up elsewhere and make news. I think shoehorning podcasting into this one ROI bucket is a mistake. And yet Paul is absolutely right in his bottom-line conclusion: you’d better know what it is you’re trying to achieve before you push that record button.
Neville Hobson: Yeah, that’s the bottom line. Absolutely right. Goal-setting is key—start with that, not with how many downloads you expect or whether you can rival Joe Rogan or whatever it might be. Good stuff, that, I have to say. Okay, so our final story today—we’re back to the AI topic. The question: Are chatbots the new influencers?
Shel Holtz: Everything that goes around comes around.
Neville Hobson: For the past two decades, digital marketing has largely been about visibility. First it was banner ads, then search, then social, then influencer marketing. Each wave brought new tools, new behaviors, and new anxieties. Now, according to a recent New York Times piece, we’ve entered another phase: chatbots are the new influencers and brands have to woo the robots. The article describes how companies are discovering that when customers ask ChatGPT, Gemini, or Claude, for example, about a product or provider, the answer that comes back may not reflect what the company believes about itself. In one example, a healthcare software firm asked chatbots about its own offerings and found outdated, incomplete, and sometimes misleading information being surfaced. That moment triggered a realization: if AI models are shaping how people consume information, then influencing those models becomes part of marketing strategy.
This has been framed as the next evolution of SEO, says the New York Times. Except now it has a new acronym: AEO (Answer Engine Optimization) or GEO (Generative Engine Optimization)—a topic we discussed last September in a For Immediate Release interview with Stephanie Grober at the Horowitz Agency in New York. Great conversation that was. Instead of trying to rank on page one of Google, brands are now trying to influence how large language models synthesize and present information in response to prompts. That changes the game. Chatbots don’t care about vibe, emotional resonance, or brand storytelling. They prioritize clarity, structured detail, and volume. Some brands are flooding the zone with highly targeted content. Others are obsessively auditing Reddit because Reddit turns out to be one of the most cited sources in AI-generated answers. In effect, the brand is no longer competing only for human attention; it is competing for algorithmic interpretation.
That’s actually well said there, Shel. We talked about this very topic at least twice in the last six months of last year. Not only humans; you’ve got to look at the bots as well. I think that introduces a deeper shift. Historically, search engines pointed users to sources. Chatbots increasingly summarize, recommend, and decide what is worth mentioning. The intermediary is no longer neutral. It synthesizes, which means the battleground for reputation is moving upstream from persuasion to data conditioning.
But here’s the counterpoint. We’ve been here before and we’ve discussed it in this podcast before. Every major digital shift has been framed as existential. SEO was supposed to change everything, then social algorithms, then influencer marketing. Each time an optimization industry sprang up, each time brands flooded the zone with content, and each time the platforms evolved in response. So the question is: Is this genuinely a structural shift in how reputation is constructed, or simply the next optimization cycle dressed up as revolution? Because there’s a real risk here. If brands begin producing vast volumes of content purely to influence AI outputs, do we elevate substance or do we accelerate a new kind of synthetic noise? Could be all that AI slop we’ve been hearing about a lot recently. And if Reddit posts and forum threads are disproportionately shaping chatbot answers, are we witnessing democratization of influence or amplification of unverified commentary? So are chatbots truly the new influencers we must court, or are we watching the early stages of another marketing arms race that may look very different once the models mature? What do you think, Shel?
Shel Holtz: It’s a fraught topic. I mean, first of all, as organizations trip over themselves to figure out how to appear in AI query responses and appear the way they want to, is that going to taint AI responses to the point that they’re no better than a Google search response? I mean, you remember the original Google where you typed in a query and you got 10 items that were directly related to what you were interested in. And now you have to wade through the ads and the other crap that populates the Google search results before you get to anything that’s even remotely relevant.
Neville Hobson: Yeah. Slop is the word, not crap, slop.
Shel Holtz: Yeah. Okay, yeah. But you have some other issues here. We hear that Reddit figures prominently in the results. And then you hear from somebody else: no, no, no, it’s earned media that prompts what gets injected into the responses the large language models give to queries. I just saw a study, published on February 18th, that found 44% of ChatGPT’s citations come from the first third of whatever content it found. So, you know, do you top-load your content with the main information you want the AI models to grasp, even if that’s not necessarily the way you want people to read the content you’re producing?
And each model does something different. The fact that ChatGPT citations come from the first third of content doesn’t mean that Claude’s do or Gemini’s or Grok’s. And then every time they release a new model, has it changed? So I think we could be chasing our tails with this kind of information. Are chatbots the new influencer? Well, they’re a new influencer. Certainly people are getting information from these—I do. I say, “This product isn’t working for me. What are the alternatives?” And it tells me, and I’m sure it’s leaving out good products that just haven’t got their information into the places where it’s going to be absorbed by an AI being trained on this content or searching.
So, you know. I think we just need to produce good content that answers questions. I—we talked about this a couple of months ago. When you look at the tools that are being implemented in the enterprise, employees are no longer reading the articles that we produce that say, “Here’s the justification and the context and the background for the change that the organization’s going through.” They type a query and they get a reply. Where’s that reply coming from? It’s not coming from the context that we provided unless we top-load, front-load the content with that answer in order to accommodate the chatbot. Is that what we want? This is probably a time to be rethinking the way we communicate altogether because of this situation. But I think creating good content that does a good job of answering questions, that puts the main information at the top…
Shel Holtz: I mean, you know, somebody ought to invent an inverted pyramid style of writing that starts with the who, what, when, where, why before you get to, you know, the detail. Just do good content and you’ll be fine.
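For the technically inclined, the front-loading question Shel raises can be checked mechanically. Here is a minimal Python sketch; the one-third cutoff is a heuristic borrowed from the ChatGPT citation study mentioned above, and the function name, term list, and sample text are all hypothetical:

```python
def front_loads(text: str, key_terms: list[str], fraction: float = 1 / 3) -> dict:
    """Report which key terms appear in the first `fraction` of a document.

    A rough audit of the "top-loading" tactic: if LLM citations favor the
    first third of a page, your core message should appear there.
    """
    head = text[: int(len(text) * fraction)].lower()
    return {term: term.lower() in head for term in key_terms}


# Hypothetical article: the certification claim leads, pricing is buried.
article = (
    "Acme Widgets ships the only ISO-certified flange in North America. "
    "Founded in 1952, the company grew from a garage shop. "
    + "Filler history paragraph. " * 20
    + "Pricing starts at $12 per unit with volume discounts."
)
print(front_loads(article, ["ISO-certified", "pricing"]))
# {'ISO-certified': True, 'pricing': False}
```

As Shel notes, each model behaves differently and the behavior shifts with every release, so treat a check like this as a sanity test, not a rule.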
Neville Hobson: That’s a good tip, I think. To me, it just seems like everything is so manipulated. I was thinking this the other day about something I was searching for online, and I looked at what Google produced. Because Google, by the way, has really improved hugely in the last six months in terms of what it actually offers you when you search. The AI generates a summary of the top results, with citations you can click on if you want. In my experience, I often find that summary is good enough for what I need. I might scroll down to see who else is saying what. And then you’ve got little drop-downs of other responses to that search term. Great. And it usually gives me what I want.
But basically, when I see stuff like this, I’m thinking: the manipulation is huge. Would it not be simpler if we just ditched all this stuff? No, that’s not the answer. The world’s moved on. We have to live with this. But it makes it difficult to trust anything the way you used to be able to. So do I trust this answer because Google is giving it to me, which implies I trust Google? Or because it looks about right, it’s what I’m looking for, so I trust the source of that answer? I don’t know. You have to make your own judgment call on this, because if you’re using another search engine, it’s going to be very different.
Shel Holtz: Yeah.
Neville Hobson: If you use your chatbot instead of a search engine (and that’s actually quite interesting, whether it’s ChatGPT, Claude, or whatever it might be), how do you feel about that? Do you implicitly trust the chatbot and what it’s telling you? Would the answer be different from what Google would tell you if you did a Google search? Probably yes. Not in terms of meaning, but the words are going to be different, obviously, and maybe the sources will be different. So if you need to do that, fine. I don’t think you typically need to; you just go to Google, or whatever it might be that you’re accustomed to and trust, search, and get your answer.
But you’ve now really got to be careful, particularly in light of the story we talked about earlier about the developer who was stitched up by an AI agent. That damages reputation, and that kind of content might show up in search results too. So this is the landscape we’re in now. You have to get used to it.
Shel Holtz: I still find the top 10 blue links on the first search engine results page from Google are far less valuable than they used to be. I still find that the first three or four are paid and irrelevant. I see it all the time.
Neville Hobson: I don’t see that. I don’t see that at all. I don’t see paid results at the top at all; I see them a little further down. Yeah, okay, interesting. Maybe it’s different here. I’m using google.co.uk, not google.com, so maybe there’s a difference. Yeah.
Shel Holtz: I definitely do. Listeners, what are you seeing on Google? Are you logged in to Google when you’re doing this? Okay. I have been using Perplexity more and more because I’m able to refine my search, saying, “I’m looking for this, not this, and I need it from articles that have been published in the last six months.” And it does an excellent job of providing me with great results. Now, I haven’t compared it to what Google would give me, but I have to believe it’s more relevant, because it’s trying to satisfy me rather than the advertisers who have paid to have their links promoted on Google.
Neville Hobson: Yeah, yeah, typically. Okay. It’s funny. I’ve just done a search on Google right now, and there’s not a single sponsored link in my list at all. Not one. I do see them occasionally, but they’re about halfway down, marked “sponsored”. I’m not seeing any for this search term I just searched on. Scrolling further down the page, I’m still not seeing any. Results are personalized, so try it without personalization; maybe that might make a difference. But I’m quite happy with what I see. In this particular example…
Shel Holtz: Hmm.
Neville Hobson: …it gives me the text upfront, with a “see more” link that will tell me more about it. Again, scrolling down the page, I don’t see anything marked sponsored, which is what you normally do see. I don’t know. But the point is, I think you need to determine for yourself: Do you trust what it’s telling you? Are you happy with that result, whether it’s a Google search or your favorite chatbot? I was using Perplexity a lot, Shel. I really was. Then I stopped using it entirely. I didn’t like what it was doing. I didn’t like it at all. And I have to tell you, I stopped flipping from one tool to another to compare. No, I stick with what I like, what I know works for me, and I don’t bother second-guessing it with “but let me see what Gemini says about this.” Although I do that occasionally, I have to say.
Shel Holtz: I had stopped for a while and I’ve gone back to it. It’s improved. It has improved considerably in the last couple of months.
Neville Hobson: I did a research project about two weeks ago where I did spend time trawling different tools and getting complementary or different results. I then had one of those—ChatGPT—summarize it all. But hey, it’s a lot of work and I didn’t need to do that. So I’m not going to do that as a matter of course.
Shel Holtz: I do. On our intranet, I have a “construction term of the week”. This has been going on for about six and a half years: every week, a definition of a new term. I’ve gone through everything that has been provided to me. So now I’m asking an AI: “Give me a list of 20 construction-related terms.” And I’ll get more specific than that; I’ll say, “around water infrastructure projects” or something like that. Then I’ll say, “Okay, I like this one. Give me a two-paragraph definition of it.” I’ll copy and paste that definition into one of the other LLMs and say, “Assess this for accuracy, list what you would change, and then rewrite it to incorporate your corrections.” And I find that gets me a much better definition. So I’m frequently bouncing around between these tools.
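Shel’s two-step routine (draft a definition with one model, then have a second model audit and rewrite it) can be sketched in a few lines of Python. This is a hypothetical illustration of the workflow, not his actual tooling: `ask_llm` is a placeholder callable you would replace with whatever chat client you use (OpenAI, Anthropic, Google, etc.).

```python
def draft_prompt(term: str, context: str) -> str:
    """Prompt for the first model: produce a short draft definition."""
    return (
        f"Give me a two-paragraph definition of the construction term "
        f"'{term}' in the context of {context}."
    )


def audit_prompt(term: str, draft: str) -> str:
    """Prompt for the second model: assess the draft, then rewrite it."""
    return (
        f"Assess this definition of '{term}' for accuracy, list what you "
        f"would change, then rewrite it to incorporate your corrections:\n\n{draft}"
    )


def refine_definition(term: str, context: str, ask_llm) -> str:
    """Run the draft-then-audit loop with any callable LLM client."""
    draft = ask_llm(draft_prompt(term, context))
    return ask_llm(audit_prompt(term, draft))


# Stand-in "LLM" so the sketch runs without API keys or network access:
fake_llm = lambda prompt: f"[model reply to: {prompt[:40]}...]"
print(refine_definition("cofferdam", "water infrastructure projects", fake_llm))
```

Passing the client in as a callable is the point of the design: it makes it trivial to do what Shel describes, drafting with one vendor’s model and auditing with another’s.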
I also find that I’ll switch which tool I’m using the most based on who’s released the best model most recently because I find the latest Claude model is just amazing, but then Gemini just released a new one that apparently is blowing Claude away. I want to use the one that’s going to give me the best results, not the one I’m most comfortable with. So I’m changing all the time.
Neville Hobson: Yeah, I find the one I’m most comfortable with is the one that gives me the best results, and I’m very happy with that. But again, our uses are very different. I don’t use it for the kind of stuff you do, with definitions and “give me a summary and find the best one” or whatever. I tend not to do that kind of work. But I’m very happy with the ChatGPT Plus subscription I’ve been using for a while now. I use NotebookLM occasionally, particularly when I’m looking at dense academic reports. So the point, to summarize all of this, is that chatbots are new influencers. I think the New York Times piece is a good, thought-provoking piece. And the caveats, as I saw them, come down to the risk factor we just spent a while discussing. The point the writer made in the Times piece, that if Reddit posts are disproportionately shaping chatbot answers we may be witnessing the amplification of unverified content, is a very good one. Hence, even more so, and I don’t know how comfortable we’d feel with this, you’ve got to verify everything.
I do that. And I find, depending on what it is… I can’t think of a good example, frankly, Shel, but you know how it goes: you’ve spent some time telling your AI system what you want it to do. You might have had a back-and-forth conversation about that. That’s common for me, not just “Here’s a prompt and off you go and do it.” It comes back with something and I say, “Fine, what do you mean by this?” or “I want you to do that as well. Yes, it’s good to highlight that.” That goes on all the time. And then the checking of things takes even longer. And I’m totally OK with that, because I need to be sure. And this must apply to everyone… or maybe it doesn’t. Maybe it doesn’t apply to folks who work at Deloitte. Sorry, I shouldn’t have said that, but it occurs to me.
You need to check it for your own peace of mind: that what you’re sharing with the other person, whether it’s a client or a colleague, is accurate to the best of your knowledge, and that there’s nothing you’ve done, or haven’t done, that would diminish its accuracy, meaning you’ve verified and checked everything. And like you said earlier in our discussion, there’s a lot more to this than just verifying. I get that too. But it takes time, and maybe that’s why people don’t do it. Some people see the chatbot as an easy tool: dump all this stuff on it so they can take the day off or do other things. I don’t believe that’s everywhere, but some people will think that. So it is a tricky one to answer. I think we’ve just got to do what we’re comfortable with, what meets our objectives, and take as much care as possible in producing the best work we can.
Shel Holtz: Yeah, and for communicators, recognizing that chatbots are a new influencer means we have to think about how we take advantage of that. And I’m going to emphasize again: they are a new influencer, not the new influencer. Kim Kardashian has not hung her head in shame and retreated into a dark room to wait to die. She still has millions and millions of followers, and she holds up a product and it drives sales. You know, the old influencers haven’t gone anywhere and still warrant some attention.
Neville Hobson: Well, true. So the Times, though, says—the question they asked is: Are chatbots the new influencers? So our answer to that would be: No, they are one of the new influencers.
Shel Holtz: Right, yeah. No, exactly: add them to the mix. So that’ll wrap up this episode of For Immediate Release, episode number 502, our long-form episode for February. We do hope you’ll comment on it. All of our comments these days come from our LinkedIn posts, so check LinkedIn and follow either one of us. We also share these posts on Facebook in three places: we have a For Immediate Release Podcast Network community and a For Immediate Release page, in addition to you and me sharing them individually. We’re also on Threads and Bluesky. Leave a comment in any of those places and we’ll pick it up and share it in the March long-form episode.
You can also send us an email at [email protected], and you can attach an audio file. You can record that audio file directly from the For Immediate Release Podcast Network website; there’s a “send voicemail” tab over on the right-hand side. I actually got a voicemail from the website last month, but it was just somebody being obscene. It had nothing to do with communication, but I got excited. It came through Speakpipe, the vendor that provides that feature. You can also leave a comment directly in the show notes. I mean, it is a blog; there’s a place to put comments in a blog. Go figure.
Neville Hobson: Wow, should have played it. An obscene phone call, okay.
Shel Holtz: All these ways to comment, please do and be part of this conversation. And our next long form episode will be recorded on Saturday, March 21st. We will drop that on Monday, March 23rd. Until then, that will be a “30” for For Immediate Release.
The post FIR #502: Attack of the AI Agent! appeared first on FIR Podcast Network.
AI isn’t replacing communicators — it’s amplifying the value of communication, especially storytelling and strategic writing. In this short, midweek FIR episode, Neville and Shel explore how the hottest jobs in tech are increasingly about telling stories, not writing code, with Netflix, Microsoft, Adobe, Anthropic, and OpenAI all hiring communications and storytelling teams at salaries ranging from six figures up to $775,000 per year. Even AI labs themselves are posting compensation packages around $400K for storytelling and communications roles, signaling that they understand the irreplaceable human value of meaning-making in an age of automated content generation.
The distinction Neville and Shel highlight between traditional messaging and true storytelling proves critical: conventional communications start with what the brand wants to say, while storytelling starts with what audiences actually care about. The strongest communicators will be those who move beyond prescriptive messaging to tell genuine human stories.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, February 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Neville Hobson: Hi everyone and welcome to For Immediate Release. This is episode 501. I’m Neville Hobson.
Shel Holtz: I’m Shel Holtz. And here’s some good news for communicators: artificial intelligence isn’t replacing us, it’s amplifying the value of communication itself, especially storytelling and strategic writing. If you’ve been feeling that AI spells doom for writers and communicators, the labor market is telling a very different story. We’ll tell you that story right after this. Let’s start with something concrete. The hottest jobs in tech right now aren’t about writing code or managing data. They’re about telling clear, compelling human stories. Recent hiring trends show that giants like Netflix, Microsoft, Adobe, Anthropic, and OpenAI are aggressively expanding communications and storytelling teams, with roles offering from six figures up to as much as $775,000 a year for senior leadership positions, without any requirement to write a line of code. Why? Because AI has flooded the internet with cheap automated output, what some observers are calling slopaganda. I love this word, slopaganda. I hadn’t heard it before I read that article. Millions of words get generated every minute, and most of it lacks clarity, insight, context, and meaning, exactly the things that real communicators deliver. Companies are recognizing that the ability to cut through that noise with strategic narrative creates trust, authority, and differentiation in the market. Even the AI labs themselves, including OpenAI and Anthropic, are willing to pay top dollar for storytellers. One analysis said that compensation packages of nearly $400,000 are being posted specifically for storytelling and communications roles at these firms, exactly because humans excel at crafting nuanced messages that machines simply can’t. So here’s the underlying shift communicators need to understand: AI automates tasks, but meaning-making remains deeply human. Machines can generate text, but they don’t know which stories matter to whom, or why.
And we keep hearing communicators and writers venting on LinkedIn about machines lacking judgment, empathy, context, and strategic framing, all those hallmarks of great communication. That’s exactly what they’re looking for. And in an age of automated noise, those abilities create value. That’s a theme echoed across industry thinking.
Shel Holtz: A Forbes piece on storytelling in the age of AI highlights that storytelling is one of the most powerful tools we have, and one of the most powerful tools leaders have. It helps audiences remember facts wrapped in emotion, connect data to human experience, and anchor organizational vision in something people can feel and act on. Another Forbes analysis argues that storytelling isn’t just about communication; it’s also a career pathway. When individuals and organizations tell clear stories about evolving roles, skills development, and future opportunities, they make the future feel navigable rather than threatening. This matters for internal communication too. HR and people leaders are increasingly using narrative to frame change and build resilience. When employees feel adrift amid all the talk of AI disruption, a coherent story about how the organization is evolving and where people fit in is one of the most effective ways to build trust and engagement. Even the broader hype narrative around AI’s impact on jobs, including viral essays warning of sweeping automation, underscores this point. Some of the loudest voices talking about disruption are exactly those using storytelling to shape a narrative about the future. But the data so far suggests that the real impact of AI isn’t mass job elimination, it’s task transformation, with humans shifting into roles that emphasize strategy, creativity, judgment, and communication, exactly the space where we storytellers thrive. So for communicators who worry that AI might make them obsolete, here’s the reality: your craft isn’t threatened, it’s elevated. AI makes routine work easier, but narrative leadership, strategic framing, and contextual clarity are becoming even more essential. The labor market isn’t pulling back its investment in communicators, it’s paying up for them, because the ability to tell a clear human story is now a competitive advantage.
With the world drowning in automated content, meaning is scarce. And communicators are the ones who turn noise into narrative, confusion into clarity, and information into influence. That’s not something AI replaces, it’s something only humans can do well. And that’s why even in an AI era, talented communicators are irreplaceable and more valuable than ever. And by the way, if the tech companies feel the need to cut through the noise created by all that slopaganda, I got to use that word again, other industries will figure out sooner or later that they need to as well.
Neville Hobson: Listening to what you’re saying there, Shel, what strikes me is how similar themes are now surfacing here in the UK. The Times ran a piece recently about companies hiring chief storytellers specifically to cut through what they, and everyone else, call AI slop. What’s interesting is that it isn’t framed as anti-AI; it’s framed as a response to saturation. When content becomes easy and abundant, meaning becomes scarce. Recruiters are saying demand for storytelling roles has doubled in the past year, and the way they define storytelling isn’t about clever copy. It’s about starting with what people care about rather than what the brand wants to say. There’s also a strong internal dimension: storytelling being used to align remote teams, break down silos, and create shared culture. So I’m left wondering whether this chief storyteller trend is something genuinely new, or whether we’re simply rediscovering the strategic craft of communication in an AI-saturated environment. And finally, if AI makes it easier to generate content, does that mean communicators need to become curators of meaning rather than producers of material?
Shel Holtz: Interesting question. And I think that this is somewhat different. We have been telling stories, but I think you have to define what we mean by storytelling here, because we write stories that aren’t really stories. It’s just a term that we use as a synonym for article. I wrote a story the other day. Was it really a story or was it a communication piece?
Shel Holtz: There are so many stories we could tell in the world of organizational communication that are really just prescriptive, or a statement of fact. We’re getting the news out, but we don’t have a beginning, middle, and end. We certainly don’t have a protagonist. We’re not looking at Joseph Campbell’s hero’s journey and trying to figure out how to apply that to the tales we tell. There is a guy out there, Donald Miller, who has this thing called StoryBrand, which is fascinating. It’s designed to put your customer into the story as the hero, with the company as the mentor or guide who helps the hero achieve their goal through their journey. And I really like it. There are free tools you can use to map all this out for your brand or your product. It gets us away from saying, “Isn’t this product great? Look how well it works,” and toward telling a genuine story instead. And I think this is why narrative and story, rather than communications or public relations, are the labels being attached to these job descriptions that are all over LinkedIn. When I saw the story, I went and looked, and there are dozens and dozens of them. And the salaries are jaw-dropping when you consider that the typical communication manager is making about $108,000 a year, according to one of these articles, versus $400,000 with full benefits and three days of remote work, because I read these job descriptions. This is very encouraging for our profession. But if you’re the kind of communicator who writes articles that just say, “We have an employee assistance program. It offers the following bulleted services. You should call if you have emotional or financial problems,” that’s not what they’re looking for. They’re not looking for you.
They’re looking for the guy who wrote that article that I’ve referenced 50 times on this podcast about the employee who was divorced and depressed and started drinking and gained 100 pounds and finally called the EAP when he hit rock bottom. And they worked with him to find something that really excited him and it turned out to be ballroom dancing and now he’s a national champion traveling around the world. He’s lost more than a hundred pounds. He’s quit smoking and drinking and all because of the EAP. Which of those two stories are you more likely to read? It’s absolutely the story of the guy who used the EAP. People can relate to that. People don’t even read the crap that says we have one and here’s what it offers. So I think cutting through the noise with genuine stories that tell the tale of what the organization is trying to convey, that’s what they’re looking for.
Neville Hobson: So interesting. The title Chief Storyteller sounds new and fashionable, right? But when you unpack it, much of it looks like what strong communication leaders have always done: alignment, translation, cohesion, behavioral framing. That opens up a richer debate, I think. Is this a genuinely new C-suite function, or a rebranding of strategic communication for an AI era? It sounds a lot like the latter to me.
Shel Holtz: It sounds a lot like the latter, but I think there’s a bit of the former as well, because we’re talking about a transition of role. I think communicators who are employed right now will want to start telling more stories if they want to keep their jobs, because if all you’re doing is writing the stuff that can be written by AI just by giving it the facts and saying, “Turn this into an article,” I think you’re toast. But if you can tell a genuine story that moves people, then your job is probably secure, and you may be qualified to apply for one of these $400,000-a-year jobs. I don’t think they’re going to hire the average communicator who’s doing a pretty good job at their organization, even at the C-suite level, if they can’t put together the kind of narrative these companies are looking for. Certainly there are companies doing this, and there are communicators in those companies doing this, but I don’t think it’s most. I think most are cranking out the typical content that just conveys the news. And basic journalism, the who, what, when, where, why: if I can pop that into Claude or ChatGPT or Gemini, especially if I’ve trained it on my writing style, which I have, by the way, on Gemini, it’ll turn out a passable article that you can then edit in 15 minutes and be done. That’s not what they’re looking for. I think they would argue that that probably is slopaganda, and exactly the noise they’re looking for somebody to help them cut through.
Neville Hobson: One of the strongest lines in the Times piece is the distinction between messaging and meaning. Traditional comms starts with what the brand wants to say, says the Times; storytelling starts with what people care about. That’s a strategic pivot, I would say. Messaging is output-driven, meaning is audience-driven. AI is good at output; humans are better at contextual meaning. So is that it? Should we now be looking at this as a shift from messaging to meaning?
Shel Holtz: Absolutely. I think that’s exactly what we’re talking about here: the focus on the audience. And again, this is exactly what Donald Miller’s StoryBrand does (and he has paid us no consideration for the reference here). He puts the customer at the center of the company or the brand’s story. And I think that’s what’s different. That’s the transition, the pivot, that communicators need to make. I don’t think it’s difficult. And if you haven’t written fiction, I would suggest you read about Joseph Campbell’s hero’s journey. There’s a wonderful book, I can’t remember the author’s name, but I’ve read it twice, called The Writer’s Journey. He’s focused on writing fiction, but he talks about how you apply the hero’s journey to things like Star Wars and The Wizard of Oz, and he has these tropes that everybody is familiar with that he uses to explain how to write this way. And he tells you that every successful film in particular, and novels as well, uses this formula. I read it twice because I really had to unpack it in a way that worked for organizational communication rather than novel and film writing. But it does work. And then I found Donald Miller and his StoryBrand, and I said, there it is right there: fill in the boxes for who is the mentor and who are the other characters that appear in this formula. It’s well worth taking a look at, and his book is worth reading as well. I’ll have a link to StoryBrand in the show notes.
Neville Hobson: Yeah. So I'm just thinking through where this conversation is heading. It makes complete sense to me, and I think The Times article and the Business Insider piece support this, that the shift is definitely from messaging to meaning, something we've talked about quite a bit. The Times piece says the noise isn't the problem; it's that the noise is indistinguishable. And that makes sense. That kind of metaphorical phrase reminds me of conversations we've had before about communicators using artificial intelligence to enhance their abilities. So I'm trying to see what the path ahead looks like. It seems to me that AI is going to play an even more significant role in the future for communicators who are shifting from messaging to meaning. And I must admit, I don't believe the scenario you painted earlier, that the communications person who has been doing the same stuff for years is fine and can keep doing it because there's a market for it. I don't think that's true. I think AI is the threat for those people. So if AI is good at output, as the concluding points in The Times piece have it, humans are better at contextual meaning. That, surely, is what people are looking for when they pay half a million bucks or whatever the salary is.
Shel Holtz: Yeah, I agree with that.
Neville Hobson: To a chief storyteller, yes. I think there's huge confusion here, and inserting the phrase "chief storyteller" into the picture, when it's basically just a fancy job title, doesn't help, it seems to me. It's inevitable, I suppose, that you're going to get that. As you said, it's all over LinkedIn that chief storyteller is an executive function. But that's not the right interpretation, I don't believe. So it doesn't help clarify the picture here.
Shel Holtz: I don’t know. I would be very curious to look at the org charts of the companies that are seeking these positions to see if it is separate and distinct from the public relations or communications function. We talked several weeks ago about the proposed new definition of public relations, and it goes way beyond this. I’m thinking, and I don’t know this for a fact,
Shel Holtz: But I’m thinking that what these companies are doing is creating a new function that will live alongside and presumably under the same umbrella as the PR or corporate communications department, which is building relationships with key stakeholders. But the storytellers are out there creating the content that’s going to cut through the slop aimed at particular audiences who are ripe for this kind of storytelling. I I was about to say messaging, it’s… trying to get away from messaging. And the PR department will continue to do the earnings releases and the thought leadership and the negotiations with critics and all of the stuff that PR typically does. I don’t get the impression that these jobs sit in the public relations department.
Neville Hobson: No, I would say not, particularly as one point The Times made is that there's a significant element of team building and so forth, an internal focus in organizations for this sort of role. It's not just an external public relations function by any means. It's interesting you mentioned the definition. I published a post on my blog this morning about that, actually, looking at what the PRCA has done. It's only one professional body, and I'm thinking this isn't going to fly unless everyone gets behind it. That's a different topic from what we're talking about, but it sort of fits in here. I just have a problem with this chief storyteller title, frankly. It doesn't really fit what this role actually is. And I do believe, and you've partly prompted this clarity in my thinking, that this is about meaning, not about content production. Content production is what AI does; the interpretation of it, the meaning and significance of it, is what the human does. Now, suppose you can present your skill as something in that area to an organization that's willing to pay $400,000. Again, I'd be interested to see the job description behind that salary level. I haven't seen it; I've not actually looked, I must admit. But it'd be interesting to see how they've described the role they're willing to pay 400 grand for. I would imagine they're absolutely swamped with applications, which is where AI comes into play. AI works well to sift out all the no-hopers, basically.
Neville Hobson: But it is very interesting, and this could be a great catalyst for the discussion about the role of the communicator in organizations in light of this development. That seems to me to be a good thing to have.
Shel Holtz: I can't imagine somebody at OpenAI or Anthropic sifting through hundreds, or probably thousands, of resumes. They're absolutely feeding them all to AI; I'd be shocked if not. And for the record, there are also some of these positions that don't have storytelling in the title. I saw a couple that had narrative in the title instead. But I think they're all getting at this notion of telling a powerful story that evokes emotions and pulls
Shel Holtz: audiences in rather than advertising or traditional marketing speak. That’s what’s going to cut through the, I get to say it again, slopaganda. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #501: AI and the Rise of the $400K Storyteller appeared first on FIR Podcast Network.
AI has shifted from being purely a productivity story to something far more uncomfortable. Not because the technology became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics. An article in HR Director Magazine argues that AI-enabled workplace abuse — particularly deepfakes — should be treated as workplace harm, not dismissed as gossip, humor, or something that happens outside of work. When anyone can generate realistic images or audio of a colleague in minutes and circulate them instantly, the targeted person is left trying to disprove something that never happened, even though it feels documented. That flips the burden of proof in ways most organizations aren’t prepared to handle.
What makes this a communication issue — not just an HR or IT issue — is that the harm doesn’t stop with the creator. It spreads through sharing, commentary, laughter, and silence. People watch closely how leaders respond, and what they don’t say can signal tolerance just as loudly as what they do. In this episode, Neville and Shel explore what communicators can do before something happens: helping organizations explicitly name AI-enabled abuse, preparing leaders for that critical first conversation, and reinforcing standards so that, when trust is tested, people already know where the organization stands.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, February 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz: Hi everybody, and welcome to episode number 500 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: And I’m Neville Hobson.
Shel Holtz: And this is episode 500. You would think that would be some kind of milestone we would celebrate. For those of you who are relatively new to FIR, this show has been around since 2005, so we have recorded far more than 500 episodes in that time. We started renumbering the shows when we rebranded. We started as FIR, then we rebranded to the Hobson and Holtz Report because there were so many other FIR shows. Then, for various reasons, we decided to go back to FIR and we started at zero. But I haven't checked; if I were to put the episodes we did before that rebranding together with the episodes since then, we're probably at episode 2020, 2025, something like that.
Neville Hobson: I would say that's about right. We also have interviews in there, and we used to do things like book reviews. What else did we do? Book reviews, speeches.
Shel Holtz: Speeches — when you and I were out giving talks, we’d record them and make them available.
Neville Hobson: Yeah, boy, those were the days. And we did lives, clip times, you know, so we had quite a little network going there. But 500 is good. So we’re not going to change the numbering, are we? It’s going to confuse people even more, I think.
Shel Holtz: No, I think we’re going to stick with it the way it is. So what are we talking about on episode 500?
Neville Hobson: Well, this episode has got a topic in line with our themes and it’s about AI. We can’t escape it, but this is definitely a thought-provoking topic. It’s about AI abuse in the workplace. So over the past year, AI has shifted from being a productivity story to something that’s sometimes much more uncomfortable. Not because the technology itself suddenly became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics.
An article in HR Director Magazine here in the UK published earlier this month makes the case that AI-enabled abuse, particularly deepfakes, should be treated as workplace harm, not as gossip, humor, or something that happens outside work. And that distinction really matters. We’ll explore this theme right after this message.
What's different here isn't intent. Harassment, coercion, and humiliation aren't new. What is new is speed, scale, and credibility. Anyone can use AI to generate realistic images or audio in minutes, circulate them instantly, and leave the person targeted trying to disprove something that never happened but feels documented. The article argues that when this happens, organizations need to respond quickly, contain harm, investigate fairly, and set a clear standard that using technology to degrade or coerce colleagues is serious misconduct. Not just to protect the individual involved, but to preserve trust across the organization. Because once people see that this kind of harm can happen without consequences, psychological safety collapses.
What also struck me reading this, Shel, is that while it’s written for HR leaders, a lot of what determines the outcome doesn’t actually sit in policy or process. It sits in communication. In moments like this, people are watching very closely. They’re listening for what leaders say and just as importantly, what they don’t. Silence, careful wording, or reluctance to name harm can easily be read as uncertainty or worse, tolerance. That puts communicators right in the middle of this issue.
There are some things communicators can do before anything happens. First, help the organization be explicit about standards. Name AI-enabled abuse clearly so there’s no ambiguity. Second, prepare leaders for that first conversation because tone and language matter long before any investigation starts. And third, reinforce shared expectations early. So when something does go wrong, people already know where the organization stands. This isn’t crisis response, it’s proactive preventative communication. In other words, this isn’t really a story about AI tools, it’s a story about trust — and how organizations communicate when that trust is tested.
Shel Holtz: I was fascinated by this. I saw the headline and I thought it was about something else altogether because I’ve seen this phrase, “workplace AI abuse,” before, but it was in the context of things like work slop and some other abuses of AI that generally are more focused on the degradation of the information and content that’s flowing around the organization. So when I saw what this was focused on, it really sent up red flags for me. I serve on the HR leadership team of the organization I work for. I’ll be sharing this article with that team this morning.
But I think there’s a lot to talk about here. First of all, I just loved how this article ended. The last line of it says, “AI has changed the mechanics of misconduct, but it hasn’t changed what employees need from their employer.” And I think that’s exactly right. From a crisis communication standpoint, framing it that way matters because it means we don’t have to reinvent values. We don’t have to reinvent principles. We just need to update the protocols we use to respond when something happens.
Neville Hobson: Yeah, I agree. And it’s a story that isn’t unique or new even — the role communicators can play in the sense of signaling the standards visibly, not just written down, but communicating them. And I think that’s the first thing that struck me from reading this. It is interesting — you’re quoting that ending. That struck me too.
The expectation level must be met. The point that not all of it sits in policy or process with HR, but in communication, is absolutely true. Yet this isn't a communication issue per se. This is an organizational issue where communication, or the communicator, works hand in glove with HR to manage it in a way that serves the interests of the organization and the employees. So making those standards visible and explaining what the rules are for this kind of thing: you would think it's pretty common sense to most people, but is it not true that, like many things in organizational life, something like this probably isn't set down well in many organizations?
Shel Holtz: It probably wasn't set down well for these kinds of situations even before AI. Where I work, we go through annual workplace harassment training because we are adamant that that's not going to happen. It certainly doesn't cover this stuff yet; I suspect it probably will. But yeah, you're right. I think many organizations out there don't have explicit policies around harassment and what the response should be.
I think the most insidious part of how deepfakes are affecting all of this is that they flip the burden of proof. A victim has to prove that something didn’t happen, and in the court of workplace opinion, that’s really hard to do. It creates a different kind of reputational harm.
Neville Hobson: Yeah.
Shel Holtz: From traditional harassment, the kind we learn about in our training — you know, with he said, she said type situations — there’s a certain amount of ambiguity and people are trying to weigh what people said and look at their reputations and their credibility and make judgments based on limited information available. With deepfakes, there’s evidence. I mean, it’s fabricated, but it’s evidence. And some people seeing that before they hear it’s a deepfake just might believe it and side with the creator of that thing.
The article does make a really critical point though, and that’s that it’s rarely about one bad actor. The person who created this had a malicious intent, but people who share it, people who forward it along and comment on it and laugh about it — that spreads the harm and it makes the whole thing more complex and it creates complicity among the employees who are involved in this, even though they may think it’s innocent behavior that just mirrors what they do on public social media. And from a comms perspective, that means the crisis isn’t just about the perpetrator, right? It’s about organizational culture. If people are circulating this content, that tells you something about your workplace that needs to be addressed that’s bigger than that one individual case.
Neville Hobson: Yeah, I agree. Absolutely. And that’s one of the dynamics the article highlights that I found most interesting — about how harm spreads socially through sharing, commentary, laughter, or quiet disengagement. Communicators need to help prevent normalization — this is not acceptable, not normal. They’re often closest to these informal channels and cultural signals. That gives communicators a unique opportunity, the article points out.
For example, communicators can challenge the idea that no statement is the safest option when values are being tested. Help leaders understand that internal silence can legitimize behavior just as much as explicit approval and encourage timely, values-anchored communication that says, “this crosses a line,” even if the facts are still being established.
It is really difficult, though. Separately, I've read examples where there's a deepfake of a female employee that presents her in a highly inappropriate way. And yet it is so realistic, incredibly realistic, that everyone believes it's true, and the denials don't make much difference. That's another avenue where communicators in particular need to be involved. HR certainly would be involved, because that's the relationship issue. But communicators need to help make the statements: that this is not real, that it's still being investigated, that we believe it's not real. In other words, support the employee unless you've got evidence not to, or there's some reason, legal perhaps, that you can't say anything more. And challenge people who imply it's genuine and carry that narrative forward with others in the organization.
So it’s difficult. It doesn’t mean you’ve got to broadcast a lot of details. It means going back to reinforcing those standards in the organization, repeating what they are before harmful behavior becomes part of, as the article mentions, organizational folklore. It’s a tricky, tricky road to walk down.
Shel Holtz: And it gets even trickier. There’s another layer of complexity to add to this for HR in particular. And that is an employee sharing one of these deepfakes on a personal text thread or on a personal account on a public social network — sharing it on Instagram, sharing it on Facebook — which might lead someone in the organization to say, “Well, that’s not a workplace issue. That’s something they did on their own private network.” But the deepfake involves a colleague at work, and we have to acknowledge that that becomes a workplace issue.
Neville Hobson: Yeah, it actually highlights, Shel, that education is lacking if that takes place, I believe. So you've got to already have in place policies that explicitly address and label AI abuse. It's a workplace harm issue, not a technical or a personal one. And it's neither acceptable nor permitted for this to happen in the workplace; if it does, the perpetrators will be disciplined and face consequences.
So that in itself isn't enough, though. It requires more proactive education to address it: for instance, informal communication groups to discuss the issue, not necessarily a particular example, and get everyone involved in discussing why it's not a good thing. It may well surface opinions, depending on how trusted or open people feel, such as, "I disagree with this. I don't think it is a workplace issue." Then you get a dialogue going. But the company, the employer, in the form of the communicators, has the right people to take this forward, I think.
Shel Holtz: But here's another communication issue that isn't really addressed in the article, although I think communication needs to be involved. The article outlines a framework for addressing this: stabilize, which is support and safety; contain, which is stop the spread; and investigate, and investigate broadly, not just the creator. I mean, who helped spread this thing around? That's pretty good crisis response advice.
But what strikes me is the fact that containment is mentioned almost as a technical IT issue when it’s really a communication challenge. Because how do you preserve evidence without further circulating harmful content? This requires clear protocols that everybody needs to understand. So communicators should be involved in helping to develop those protocols, but also making sure that they spread through the organization and are aligned with the values and become part of the culture.
Neville Hobson: Okay, so that kind of brings it round to the first thing I mentioned about what communicators can do before anything happens: help the organization be explicit about standards. Name AI-enabled abuse clearly so there's no ambiguity, and set out exactly what the organization's position is on something like this. That will probably mean updating the equivalent of the employee handbook, where these kinds of policies and procedures sit, so that no one is in any doubt about where to find information on this. And then communicate proactively about it. Yes, communicators have lots to address in today's climate, and this is just one more thing, but I would argue it's actually quite critical. They need to address this, because unaddressed, it's easy to see how this would gather momentum.
Shel Holtz: Yeah. So based on the article, you’ve already shared some of your recommendations for communicators. I think that updating the harassment policies with explicit deepfake examples is important. This is the recommendation I’m going to be making where I work. I think managers need to be trained on that first-hour response protocol. Managers, I think, are pretty poorly trained on this type of thing. And generic e-learning isn’t going to take care of it. So I think there needs to be specific training, particularly out in the field or out on the factory floor, where this is, I think, a little more likely to happen among people who are at that level of the org. I don’t think you’re going to see much of this manager to manager or VP to VP. So I think it’s more front line where you’re likely to see this — where somebody gets upset at somebody else and does a deepfake.
So those managers need to be trained. I think you need to have those evidence-handling procedures established and IT completely on board. So that’s a role for communicators. Reviewing and strengthening the reporting routes — who gets told when something like this happens and how does it get elevated? And then what are the protocols for determining what to do about it? And include this scenario in your crisis response planning. It should be part of that larger package of crises that might emerge that you have identified as possible and make sure that this is one of them.
Yeah, this article really ought to be required reading for every HR professional, every organizational leader, every communication leader, because as we’ve been saying right now, I think most organizations aren’t prepared. What the article said is the technology has outpaced our policies, our training, and our cultural norms. We’re in a gap period where harm is happening and institutions are scrambling to catch up. Time to stop scrambling, time to just catch up, start doing this work.
Neville Hobson: Yeah, I would agree. I think the final comment I’d make is kind of the core message that comes out of this whole thing that summarizes all of this. And this is from the employee point of view, it seems to me. So accept that AI has changed how misconduct happens, not what employees need. Fine, we accept that. Employees need confidence that if they are targeted, the organization will do the following: take it seriously, act quickly to contain harm, investigate fairly, and set a clear standard that using technology to degrade or coerce colleagues is serious misconduct. Those four things need to be in place, I believe.
Shel Holtz: Yeah. And what the consequences are — you always have to remind people that there are consequences for these things. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #500: When Harassment Policies Meet Deepfakes appeared first on FIR Podcast Network.
The Public Relations Society of America (PRSA) responded to member requests for a statement about the federal immigration crackdown in Minnesota with a letter explaining why the organization would remain silent. In this short midweek episode, Neville and Shel outline the key points in the letter, where they disagree, and how they might have responded.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, February 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Neville Hobson: Hi everyone, and welcome to For Immediate Release. This is episode 499. I'm Neville Hobson.
Shel Holtz: And I'm Shel Holtz. At its core, this podcast is about organizational communication, which leads us to occasionally talk about the associations that aim to represent the profession. So today, let's talk about PRSA (the Public Relations Society of America), which recently signaled a move to remain apolitical—retreating into a shell of neutrality when members were clamoring for them to speak up on controversial issues.
Specifically, I’m talking about the silence from PRSA regarding ICE (Immigration and Customs Enforcement) operations in Minneapolis. Now, before you roll your eyes and think this is just another partisan squabble, stop right there. This isn’t about immigration policy; it is about the integrity of public information—the very foundation of our profession. We’ll dive into what PRSA said and how I responded after this.
PRSA leadership, including Chair Heidi Harrell and CEO Matt Marcial, sent a message to members claiming that remaining apolitical protects the organization’s credibility. The letter framed this stance as a means to focus on its core mission. Leadership asserts that while they have commented on sensitive issues in the past, the current “complex environment” demands greater diligence, effectively reserving public advocacy only for matters that directly and significantly impact the technical practice of public relations or its ethical standards. By shifting the burden of advocacy to individual members and requiring chapters to vet local statements through national leadership, the society is attempting to build a “firewall against unintended risks.”
In other words, they’re betting that professional neutrality is the best way to maintain trust across a diverse membership, even if it means stepping back from the broader social fray. Now, I have a different perspective. In fact, I’ve published an open letter to PRSA leadership on LinkedIn, arguing that their own Code of Ethics doesn’t just permit them to speak out—it actually demands it.
Consider the “Free Flow of Information” provision in the PRSA Code of Ethics. It states that protecting the flow of accurate and truthful information is essential for a democratic society. In Minneapolis, we have federal officials making public statements about the killings of U.S. citizens—statements that are being credibly disputed by video evidence and eyewitness accounts. When government officials systematically misrepresent facts, that is a professional standards issue. It is not political to distinguish a truth from a lie. It is, quite literally, our job.
PRSA argues that they want to maintain trust across a diverse membership, but let’s be clear: silence is a statement. It’s a message that says our ethical commitments are only applicable when there’s nothing controversial to address. Don’t believe for a minute that neutrality will save your reputation. Silence in the face of documented misinformation erodes trust among the very members who look to the Society to model the courage we’re expected to show our clients every day.
The PRSA Ethics Code mandates a dual obligation: loyalty to clients and service to the public interest. It doesn’t say “serve the public interest only when it’s convenient or not controversial.” When federal agents are accused of violating nearly a hundred court orders and detaining citizens unlawfully, truth in the public interest is eroding fast under the weight of official silence. If PRSA won’t defend the standard of truth when it’s being trampled by powerful federal agencies, who will?
I am not suggesting that PRSA needs to become an immigration advocacy group—I am decidedly not. But I am suggesting a path forward that reaffirms our values without wading into the partisan muck. PRSA could and should issue a statement that affirms the vital importance of truthful government communication. They should issue a call for transparency when official narratives conflict with documented evidence, and they should reaffirm that all communicators have an obligation to accuracy over mere advocacy.
The fact is, our profession depends on a broader democratic society that functions on truthful information. When that foundation is threatened, our standards are implicated, whether we choose to acknowledge it or not. And let’s keep in mind, PRSA has members working in federal agencies that may require them to participate in the distribution of false information. Professional associations aren’t tested during the easy times. They’re tested when standing up for a principle actually costs something. PRSA’s current diligence looks a lot like retreat. We should be leading the charge for accountability, not languishing in a state of denial.
The comments to the LinkedIn article I posted show a membership that is anything but neutral on the need for ethical leadership. I’ll make one more point here: this approach to determining when advocacy is required translates nicely to businesses that have retreated from taking stances on societal issues, despite the Edelman Trust Barometer’s continued demonstration that it’s an expectation of their shareholders.
Neville Hobson: It's an interesting one, Shel. I'm reminded of discussions we've had on this podcast previously about the role of businesses in taking a stand on issues that are societal but demand some kind of response. This fits that, I think. Your response on LinkedIn was very good; the path forward you outlined is strong.
I did like it when you mentioned the word "courage." This demands courage in the face of fear or apprehension, and both of those words could apply to the potential minefield PRSA would be wandering into if they stepped away from being "apolitical." Could there be a response from those federal agencies themselves? Or perhaps a negative reaction from the administration and the White House? That may be a driver behind it. Yet this sort of situation has arisen before; we've talked about the notion of professional bodies taking stands on issues.
The way you've framed the issue as ethical and professional—it's hard to argue against that. This is not a partisan thing. I see you've got over 120 comments on your LinkedIn article. Did you hear anything from PRSA directly, or are they staying silent?
Shel Holtz: No. In fact, a few people who have had issues with PRSA in the past told me they appreciate me posting an open letter because PRSA has historically ignored those. I'm not necessarily expecting to hear anything from them. I don't hold any leadership roles there, so there's no reason they should think I'm someone special to reach out to.
But you talk about professional organizations; related to all of this, we recently had the arrest of two journalists reporting on an activist group that interrupted a church service led by a pastor who also has a role with ICE in Minneapolis. It was arguably an illegal action for this group to do that, but two reporters went in with them to cover it and they were both arrested based on an order from the U.S. Attorney General.
The associations that represent journalists were pretty quick with their statements. PRSA talked about making a statement when there is something that is “technically related to the profession.” That would certainly apply in the case of these journalists. But still, the journalism associations were quick, and there was no concern that members might take issue or that the administration might make life miserable for them. They had the courage to take a stance consistent with their codes of ethics.
One member of the PRSA board, whom I know personally, did leave a comment questioning why I singled out PRSA. Why not the Page Society? Why not the IABC (International Association of Business Communicators)? My answer was: they didn’t send me a letter telling me why they’re not saying anything. But I absolutely think every communication association should be advocating for truth in public communications. That’s our job.
Neville Hobson I think the fear of a strong, negative, almost threatening reaction from the administration and the White House is at the heart of this. They have “form” in ignoring ethics or international agreements—they’ll tear up those bits of paper because they say it’s “fake” or “rubbish.” Maybe that’s behind a lot of this.
What you’ve given them is a challenge: will PRSA apply its own ethical framework when doing so carries reputational and political costs? You mentioned others saying PRSA has a history of ignoring public letters. You see this with other professional bodies who are reluctant to take stands, interpreting “taking a stand” as “advocating for a cause,” which they don’t do. I would argue this is splitting hairs because the argument is about upholding standards.
Enlisting support from other professional bodies might be the safest approach—not asking them to take a stand on a specific political issue, but to reassert the point of truthful communication, transparency, and professional accountability. Someone has to do something to address this. This is an opportunity. I understand the reluctance, but I would counter by saying you need to have courage. You represent communicators across the United States, probably Canada, and elsewhere. Who else will do this if you don’t? What would you like to see happen as a result of this discussion?
Shel Holtz I would like to see the professional associations have a conversation on their staffs and their volunteer boards and decide how they’re going to proceed in a way that conforms with the values they purport to espouse.
I understand that PRSA issued the letter because they had been flooded with member requests to do something. A week or so ago, a letter was issued through the Minnesota Chamber of Commerce by 60 CEOs of Minnesota-based businesses—companies like 3M, Target, and UnitedHealth Group. Some people praised it, but most thought it was weak and “milquetoast.” It called for de-escalation but never named ICE or the immigration issue at all.
In the meantime, Target is still under pressure from customers and employees to say something. There was an arrest made by ICE inside one of their stores that traumatized employees who witnessed it, and the company has said nothing. It’s similar to Home Depot, which has had arrests in its parking lots and has remained silent. This disturbs stakeholders. You don’t need a position on immigration policy to talk about tactics that are affecting your community and your business. That’s fair game. That’s where the framework for a statement has to be focused: what is the impact on your business and where does this align with your values?
Neville Hobson It’s interesting. 3M and Cargill are global businesses. That “milquetoast” route was probably the safest way to navigate the tightrope, but it doesn’t really help much other than attracting criticism for being weak. I can equally understand that no one there wants to point fingers in a way that might not advance the discussion, but it doesn’t lead us anywhere.
Shel Holtz Well, look at organizations like Patagonia, which has actually sued the administration, and their sales and profits are doing just fine. There may be a lot of fear that isn’t backed up by substantial consequences. If you look at the streets of Minneapolis these days, you can see where public sentiment is. It’s fine for business to be on the right side of this.
Neville Hobson It is a very tricky situation. Every one of these companies has a statement defining their values, and surely what we’re seeing on the streets of Minneapolis would offend those values. No one’s willing to be counted. Maybe it needs a “safer” avenue—redefining or restating values in public and linking them to these events without naming names. But currently, we only have what we see on the news, and it’s not pretty.
Shel Holtz No, it’s not. The businesses that have a direct connection to this and remain silent are going to be remembered for it. This doesn’t mean every business needs to make a statement—if you’re not based in Minnesota, perhaps it’s unrelated to your standards for public comment.
But going back to PRSA: when you have federal officials making false statements to the public, and you have an organization that advocates for ethical communication, I think that demands a position. That is the framework for businesses and associations to look at: where is the alignment that should lead you to stand up and display that kind of courage?
Neville Hobson Where indeed?
Shel Holtz That’ll be a 30 for this episode of For Immediate Release.
The post FIR #499: When Saying Nothing Sends the Wrong Message appeared first on FIR Podcast Network.
In this FIR Interview, Neville Hobson and Shel Holtz speak with crisis and risk communication specialist Philippe Borremans about his new Crisis Communication 2026 Trend Report, based on a survey of senior crisis and communication leaders.
The conversation explores how crisis communication is evolving in an era defined by polycrisis, declining trust, and accelerating AI-driven risk – and why many organisations remain dangerously underprepared despite growing awareness of these threats.
Drawing on real-world examples, including recent AI-amplified reputation crises, Philippe outlines where organisations are falling short and what communicators can do now to close the gap between awareness and action.

Philippe Borremans is a leading authority on AI-driven crisis, risk, and emergency communication with over 25 years of experience spanning 30+ countries. As the author of Mastering Crisis Communication with ChatGPT: A Practical Guide, he bridges the critical gap between emerging technologies and high-stakes communication management.
A trusted advisor to global organisations including the World Health Organisation, the European Council, and multinational corporations, Philippe brings deep expertise in public health emergencies, corporate crisis communication, and AI-enhanced communication strategies.
He is the creator of the Universal Adaptive Crisis Communication framework (UACC), designed to manage complex, overlapping crises. He publishes Wag The Dog, a weekly newsletter tracking industry innovations and trends.
Follow Philippe on LinkedIn: https://www.linkedin.com/in/philippeborremans/
Relevant links
https://www.riskcomms.com/
https://www.wagthedog.io/
https://www.riskcomms.com/f/the-2026-crisis-emergency-and-risk-communication-trends-report
Shel Holtz
Hi everybody and welcome to a For Immediate Release interview. I’m Shel Holtz.
Neville Hobson
I’m Neville Hobson.
Shel Holtz
And we are here today with Philippe Borremans. We have known Philippe for at least 20 years, going back to the days when he was managing blogging at IBM out of Brussels. He is based today in Portugal, working as an independent consultant addressing crisis, risk, and emergency communications. Welcome, Philippe. Delighted to have you with us.
Philippe Borremans
No, thanks for having me and it’s good to see you both.
Shel Holtz
And before we jump into our questions, could you tell listeners a little bit about yourself, a little more background than I just offered up?
Philippe Borremans
Sure. Yeah, as you said, I started out in PR with Porter Novelli in Brussels, ages ago, and then moved in-house at IBM for 10 years. That was from ’99 to, I think, 2009, working on, as you said, the first blogging guidelines, which then became the social media guidelines. It was a great project; I was responsible for all external comms there. And then…
In fact, I moved away from Belgium and lived four years in Morocco, working in public relations at a somewhat more strategic level. Since then I’ve been specializing in risk, crisis, and emergency comms. That’s actually the only thing I do. It’s mainly around all the things that could happen to a private-sector organization, a government, or a public organization.
Shel Holtz
And you also produce and distribute a terrific newsletter on all of this, so we’ll ask you later to let people know how to subscribe to that. We thought we would start with a case study, although we are going to get into a survey that you recently wrapped up and released. There was an incident in which an executive at Campbell’s, the company that makes Campbell’s soup, claimed that the company’s products were highly processed food for poor people and that the company used bioengineered meat. He also made some derogatory remarks about employees, and this surfaced and spread around. An analysis found that negative sentiment around the company surged to 70% and that page-one search results were flooded with these negative narratives, including the AI overviews. One analysis said that years of marketing and branding were wiped away in an instant.
And that same analysis said that one of the biggest risks AI introduces is an inherent bias toward negative information. What happened with Campbell’s is that coverage spread really fast across social media and traditional news outlets when this email surfaced. That created a flood of new content that AI systems were happy to start ingesting and reinforcing. So when people started searching for 3D-printed meat and asking whether Campbell’s uses real meat, AI didn’t correct those perceptions. It surfaced fragments of context. It pulled language from the company’s own website that referenced mechanically separated chicken. I don’t want to know what that means. And all of this muddied perceptions instead of clarifying things. What should communicators be doing? What didn’t Campbell’s do to protect itself from this? It really is a new reality about how information is gathered up and then shared back out.
Philippe Borremans
It makes you wonder sometimes, but it does tell me that the organization probably was not investing enough in the online reputation side of things. I recently had a discussion with a client; they were asking me how to prepare their online information so that it surfaces in AI searches and all these things. And I said, well, maybe you should start by not publishing the press releases in your newsroom in PDF format, because the basics, most of the time, are not in place. And I think in this case, again, looking at how search engine optimization is changing and how AI is looking at information, that is crucial. It’s basics, because if your online reputation is out there with the information that you have, the bias of AI, I can get it. But if you know that, again, you can work with it. So I think the organization was simply not looking at the online side of its reputation and information dissemination.
Neville Hobson
What do you think, Philippe? It was intriguing when I read this story originally that an organization as storied as Campbell’s, one of the leading FMCG companies, with experience galore in communication, made errors such as those highlighted in this report. It also highlights, I think, the speed with which this evolved and spread so rapidly that it caught everyone by surprise. Is this a one-off, do you think? Surely companies can’t be as unprepared as Campbell’s seemed to be; you’d expect some kind of system in place, procedures, et cetera. Or, going further back than that, you wouldn’t expect an executive to say such things at all. What’s your thought on that, in terms of, literally, the self-inflicted damage they have heaped upon themselves?
Philippe Borremans
I think in many cases, if we took all the crisis communication cases that are publicly available, you would see a trend. When people talk about crisis communication, they often think about the things that happen from the outside, right? The things that are sudden. But that is not the case. If you look at crisis communication history, the biggest proportion of crises are not sudden. They are smoldering crises that then break out. That means it starts as an issue that you can still manage but that, for one reason or another, you don’t look at, and then it becomes a crisis. So it’s not sudden; you knew it was there at some point. The other thing as well: we think it’s external factors, but again, the majority of crises originate from within the organization, at least in the private sector. So what it tells me, first of all, is that this is not new. It’s again the old story: it happened internally, it was probably an issue first, it came out, and it was badly managed. And that tells me that crisis preparedness and reputation management, to use the big words, are still not ingrained at the top executive level in the private sector.
Shel Holtz
Philippe, you’ve released a survey and Neville and I have been looking at it. We have questions, but can you give us an overview of the survey before we jump into our questions?
Philippe Borremans
Sure, yeah. At the end of last year, I did a survey through my contacts, network, and newsletter readers on crisis communications, looking a bit at what the trends would be. Of course, AI is in there, but other things as well. And I got 102 responses that I can actually use, so I was amazed. I thought, okay, this at least shows some direction. Now, one of the things that was interesting to see is that when we talk about AI, for instance, only one out of 87 people reported full AI integration. That goes in line with other surveys we see where, yes, there’s a lot of talk about AI in comms and the big changes it can bring and what have you, but we actually see a very, very small amount of structural implementation of AI. Most of us communicators are still playing around and discovering AI, and this was confirmed as well. And the respondents here are senior-level crisis and communication director types, yet the adoption levels are low. The top barrier is very interesting. I asked about the top barrier: why is AI not integrated? And 23.5% cited skills. A huge skills gap, and again, that correlates with other surveys that I could see. Then budget, and then privacy and security reasons at 14.7%. But the skills gap is the one that really worries me, because AI is not new. We’ve had access to GenAI tools for three years. We know we can install open-source models on our own machines; we can sandbox them in an enterprise environment. And still, the skills and the actual application are very, very low.
So that was for AI. Another one, which I was really afraid of and which unfortunately was confirmed, is exercising. Do organizations actually exercise their plans? They all have a plan somewhere, but we know it’s just a plan, and it’s the first thing that goes out of the window when something really happens. But do we exercise? Do we do crisis simulations, tabletops, large-scale simulations? Only 26.5% of the respondents test at least annually. 9.8% reported they never test, and then you’ve got the whole middle who test from time to time, when they feel like it, probably. The public sector was a bit different from the private sector, but still, that is worrying, because I know from experience, having worked in this field now for the last 15 years.
Good crisis communication, or risk communication, or emergency communication, is a muscle, right? If you don’t exercise it, whatever your plan is, it will not work. You need a much more agile approach, which comes from training and simulation exercises, than a rigid protocol plan. You need a plan, I’m not saying you don’t, but what will get you through a crisis is your agile approach, because things change all the time. And getting there is only possible through exercising, and we see that it’s not happening. Another one is linked to AI. When I asked about the biggest risks, AI-related risks were really on top for everybody in the survey: fakes, deepfakes, et cetera. But only 3.9% said they had a tested GenAI crisis protocol, and 27.5% said they had no protocol and no plans in place to face an AI-generated crisis. So it’s right on top there; everybody’s afraid of it, nobody’s planning for it. Again, an interesting insight, I found.
Neville Hobson
That is interesting. Yeah.
Philippe Borremans
And then, I mean, respondents said that trust was much more difficult to manage than before. But what I saw in the rest of the survey as well is that, again, the problem is there. What we actually don’t do, and when I say we, I mean communicators and crisis communicators, is prepare, train, and create protocols for different scenarios.
Neville Hobson
On that topic of trust, a timely mention there, Philippe, because that’s one of the questions I was going to come back to a bit later, but this is the right moment to talk about it. The report actually describes a widening trust deficit; you touch on that, with many organizations struggling to measure trust at all. That surprises me, I have to say, let alone rebuilding it during a crisis. In fact, I think that applies to the Campbell’s situation quite well; it’s a crisis of trust they have now encountered. It’s timely to talk about this in the context of the bigger picture of trust: Edelman’s Trust Barometer, which landed today as we record this, the 20th of January, raises that same question of a widening trust deficit. So I wonder your thoughts on the perennial question about communicators wanting to be taken seriously, in the boardroom in particular. How should they rethink trust? And indeed, is that even the right question to ask in this current climate? Trust is already low; how on earth do you lift yourself up from that? How should they rethink it, not as a value, but as something they can actually measure and manage? What’s your take?
Philippe Borremans
Well, I’ll even go a step further, and I like your question: is it even one of those concepts we need to look at? I have a big issue with it. I do a lot of speaking at conferences and run workshops, and every single time, at least at conferences, every single other speaker has one slide that says we need to build trust. And I got so fed up with that. I mean, what is trust, Neville and Shel? All three of us have different cultural backgrounds. Trust, the concept, is a different thing for each of us. How we relate to government, how we relate to the private sector, how we relate to our community in society: it’s different. There is not one single definition. Of course, there is the broad definition of what it means, but when you look at it from a communications point of view, and a relationship point of view between an organization and its publics, you will see that in every different part of the globe, it’s a different interpretation. And trust is not the only variable that matters for crisis comms; we have around 12 of them. Peter Sandman, you know, laid the groundwork there with scientific research, and we have 12 to work with. Trust is just one of them.
So already there, I’m very cautious about using trust as the, you know, the mantra or the silver bullet. But once we understand what we’re talking about and agree on it, to me it’s very simple. It all starts and ends with completely understanding your different audiences. We always talk about stakeholders; sure, they are important. But from a communications point of view, from trust building, and I think
At least that’s my analysis from the Edelman Trust Barometer report as well, they finally talk about segmented audiences. I hope now, finally, most communication professionals understand that the general public doesn’t exist. We need to segment our audiences. And it’s about understanding those audiences through and through: knowing what their context is, knowing what their definition of trust is, what their relationship is with your organization. Only then can you start building plans and looking at how you would approach this in the context of a crisis. That’s what I think about this.
Shel Holtz
I want to stick with this issue of trust, even though it’s just one of several variables. Your survey found that nearly 66 percent of practitioners find building trust harder today than it was five years ago. And you reference the idea of this being the era of the permacrisis; it’s always happening. Is this decline in our ability to build trust a failure of communication, or is the external environment just too volatile to manage effectively?
Philippe Borremans
But as organizations, as communicators, we’ve always worked in an environment that was shifting. Sure, maybe we’re in a peak moment where a lot of things are shifting. But if I just look at different moments in my career, at IBM and other organizations, there were always things shifting around. Now, you can look at it either from your micro environment, where you actually have something you can manage, or on a global scale.
But I think it’s much more about the profession as communicators. First of all, understanding the environment: not a lot of communicators truly understand the polycrisis and permacrisis concepts and how they actually translate into communications. It’s thrown out there, geopolitics and what have you, but how does that translate to your day-to-day work for your organization? So that’s already, I think, a gap. And then, once you understand that, what can you actually do to minimize that impact from a communications point of view? There is only so much we can actually work on. That means we need to work with other departments as well, and probably with industry associations, et cetera. We cannot solve everything. But if we start by knowing what we can do in our corner and understanding the global environment, which is not easy, then we can already take the first steps. I’m always amazed when I work with clients: they all have media and social media monitoring platforms, and they actually think that, for them, that’s the intel, those are the insights they need. Most of the time I tell them, well, yes, you need that part, but you have nothing around predictive analytics, nothing on horizon scanning. There are huge gaps in there. And those are actually the new things that you need in a world that is changing all the time.
Shel Holtz
I remember reading in an IABC document, somebody said that a crisis is what you get when you fail at issues identification.
Philippe Borremans
It is. A badly managed issue is, of course, something that becomes a crisis. On trust, Shel, what came out of the report is that the majority of respondents find it much more difficult to manage trust than five years ago. But when I asked, well, how do you actually measure that? Nobody knew. So there, again, it’s an impression they have. It’s a feeling.
Shel Holtz
It’s a feeling.
Philippe Borremans
But where is your benchmark? How are you going to measure your impact that you have or don’t have? How do you work with that if you don’t have the data? And that’s a gap.
Neville Hobson
You mentioned ‘polycrisis’ and indeed your report starts out by saying we work in an era of polycrisis. And you then said communicators need to understand what that means. Well, I’m a communicator. Help me out here, Philippe. What does it mean?
Philippe Borremans
Well, a polycrisis is an interesting concept. What it actually means is that you have different crises which are interlocked, right? And that can happen in the same crisis window, meaning you could face a climate hazard, let’s say a hurricane, which could result in a blackout, which hits critical infrastructure, which then could have an impact on your data center, and suddenly you are in a very commercial crisis, because clients rely on your data center if you’re an infrastructure provider. So it’s that interconnectedness of different types of crisis. And it is an interesting concept because, first of all, it’s closer to reality. I’ve seen it here in Portugal: we had our famous blackout for more than 12 hours, and you see how it trickles down and impacts different things.
Neville Hobson
Yeah.
Philippe Borremans
Infrastructure, mobile connections, business, et cetera, et cetera. So that idea of interconnected crises is interesting in the context of crisis communication, because we have previously always been trained on siloed crises. All the plans are written like: okay, if we have a product recall, what do we do? If we have a critical infrastructure breakdown, what do we do? But it’s all separated; it’s not integrated. And of course that changes the game; that changes how you prepare for a crisis.
Neville Hobson
So that leads nicely into the question I had, which is precisely on that point. One of the strongest themes in the report is that crisis communication works best when it’s integrated across functions. Yet HR, legal, maybe cybersecurity, and certainly comms are often only loosely connected. So when a real compound crisis hits, where do you most often see integration break down? And what distinguishes organizations that get this right from those that don’t?
Philippe Borremans
Yeah. Well, a good example was Heathrow. You remember the Heathrow blackout? It was so crazy, because at one point I put two screens next to each other. Through my network of crisis communicators, we were all going, my God, how is this possible? But next to that, I had a screen with the feed of my connections in the business continuity world, the operational side, and they were all going, we got one of the biggest airports in the world up and running again in less than 24 hours. Job well done. So that’s where it happens in an organization: you have the comms people going crazy and the ops people working very hard and doing what they do, but there is no interconnectedness. And then, of course, you have legal, our good friends from legal. And HR. I mean, one of the things that still very much amazes me when I work with clients is that internal comms is never at the table, while we all know that your first communication during a crisis is your internal communication. It’s still not the case. So that’s where it often goes wrong. The big chasm that I see is between comms and operations. Once they get together, you see fabulous things happening, because we can translate what it actually means if they can be up and running in half an hour or in two days. We can translate that to our audiences and our stakeholders and say, look, that’s the situation.
Cybersecurity is interesting as well, because there’s a lot of pressure now to integrate it into crisis management teams, not because people think that’s the best way to do it, but because it’s becoming the law within the EU. It needs to be integrated. It’s the law; you have no choice. So there are a couple of things moving, but it’s more under the pressure of law and ISO quality norms and what have you than out of an actual understanding that, yes, we all need to sit around the same table and let each of us do the job we’re good at. We can translate stuff; you do the operational stuff.
Shel Holtz
It’s interesting. I work for a construction company, and our CEO says the thing that keeps him awake at night the most is cybersecurity, nothing to do with the industry itself. It’s just cybersecurity issues. Philippe, one of the new insights that came out of your report was a reference to populist politicians undermining science-based policy. How can organizational communicators deal with this landscape, where facts are increasingly viewed through a partisan or ideological lens?
Philippe Borremans
Well, again, it goes back to understanding why and how this happens. Look at a topic which is often discussed: mis-, dis-, and malinformation, online and offline. It’s about understanding what it actually means. I’m running a couple of workshops now specifically on inoculation and prebunking, which are two techniques, and probably the only two techniques, that work to counter mis-, dis-, and malinformation online. So it’s understanding the psychology behind it. It’s not only about technology; it’s a lot about human biases and psychology. And, of course, countering the world’s geopolitical narratives, which have a certain way of going, is very difficult. But understanding why they happen, how they work, and how they can then impact certain audiences and stakeholders that are important to you, I think that is crucial as a communicator. And that comes by studying, just looking at: what does this actually mean? Can we identify it? Can we translate how it could potentially impact what we do? And then, how can we counter it?
Unfortunately, if it’s about online mis-, dis-, and malinformation, there are only two techniques that work, and even then, those two alone will just create a small protective layer, because it’s very difficult to counter online. But prebunking and inoculation are the only techniques that work for the moment. Other ones are being talked about, like: let’s increase media literacy. Well, first of all, that’s not up to us. As communicators, we have other things to do. It’s probably the responsibility of governments, institutions, ministries of education, but then we’re off for the next three decades.
Neville Hobson
I want to go back to the gaps that we touched on earlier. The big gap that struck me from reading the report is how so many leaders see AI-driven misinformation and deepfakes as a critical risk, yet most organizations still don’t have documented protocols to deal with them. And you’ve made that point very strongly: no protocols, no plans. I wonder what’s really holding organizations back from moving beyond awareness, that “yes, we know,” to action. So I guess a simple question, as the takeaway for listeners on this one: if you’re a communicator, what’s the simplest first step you could take to move from awareness to action and develop a plan, let’s say in the next 90 days?
Philippe Borremans
It starts with sensing, right? You have to listen for these things, because otherwise you’ll only see them when they’re actually out there and you’re in trouble. So it’s actually sensing. I’m a very strong believer in AI-driven predictive analytics. This is different from your standard monitoring. Your standard monitoring looks at brand mentions, CEO mentions, executive mentions, et cetera. That’s not how you’re going to detect deepfakes. There are actually platforms out there today which do predictive analytics: they look at the activation of bot networks and the spreading of a certain narrative in a certain context, and that will show you something is brewing.
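The kind of narrative-volume sensing Philippe describes can be sketched very simply: alert on deviation from a recent baseline rather than on keyword matches. This is a toy illustration only; the hourly mention counts and the three-standard-deviation threshold are assumptions made for the example, not how any real predictive analytics platform works.

```python
import statistics

def spike_alert(hourly_counts, threshold=3.0):
    """Flag the latest hour if mention volume for a narrative sits far
    above its recent baseline (a crude stand-in for 'something is brewing')."""
    *baseline, latest = hourly_counts
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline) or 1.0  # avoid dividing by zero on flat data
    zscore = (latest - mean) / stdev
    return zscore > threshold, round(zscore, 1)

quiet_hours = [4, 6, 5, 7, 5, 6]        # normal chatter about a narrative
print(spike_alert(quiet_hours + [48]))  # sudden amplification: alert fires
print(spike_alert(quiet_hours + [8]))   # ordinary fluctuation: no alert
```

Real platforms of the kind Philippe refers to layer far more on top of this (bot-network detection, narrative clustering), but the principle is the same: the alert fires on an abnormal deviation, before anyone searches for your brand name.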
I’m making it very simple now. Something is brewing, things are getting organized, we could have something coming towards us, which could be a deepfake and what have you. So first, listening, so that you have your alert system in place. Then on the defense side, it’s also having what I call a truth bank. That’s a database or an Excel sheet, whatever. I can’t believe I said Excel sheet. A database where you have actual proof that your communication assets are yours, authentic, and come from you. Because we are getting into an era where at one point in time an organization will be questioned. Yes, you can say that press release is yours, but is it actually yours? You can say that video of your CEO is genuine, but how can you prove it? We’re already moving into an era like that. So you actually need to do it.
And you know, I’m a big defender and also a user of blockchain technology. It’s very simple today. You can actually prove, irrefutably, that certain pieces of communication are yours. Take the example of a bank in Belgium. For years now, every single press release they send out is stamped through a blockchain system so that they can actually prove it’s theirs. And they started doing that more than five years ago because they had fake press releases going out. And that wasn’t even AI-driven. That was just someone who got very creative.
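The mechanics behind that kind of stamping are easy to sketch. This illustrative Python snippet (the Belgian bank's actual system isn't described in the episode; the function names and record fields here are invented) fingerprints a press release with SHA-256 so a disputed copy can later be checked. Publishing the hash to a blockchain or timestamping service is the step that makes the proof independent; it's only noted as a placeholder:

```python
import hashlib
from datetime import datetime, timezone

def truth_bank_entry(asset_id, content):
    """Fingerprint a communication asset so you can later prove it's yours.

    Returns a record holding the SHA-256 hash of the exact text released.
    In a real system you would also publish the digest to an external,
    tamper-evident ledger (e.g. a blockchain) and record the reference.
    """
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return {
        "asset_id": asset_id,
        "sha256": digest,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "anchor": None,  # placeholder for the external anchoring reference
    }

def verify(entry, content):
    """Check that a disputed document matches the recorded fingerprint."""
    return hashlib.sha256(content.encode("utf-8")).hexdigest() == entry["sha256"]

release = "ACME Corp announces unisex uniforms across all stores."
entry = truth_bank_entry("PR-2026-014", release)
print(verify(entry, release))                # prints True: this is our release
print(verify(entry, release + " (edited)"))  # prints False: a tampered copy
```

Even without the blockchain step, keeping hashed, timestamped records of everything you publish gives you something to point to when a fake starts circulating.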
So first listening, then protecting your assets, making sure you can prove they’re yours, and then countering. But countering depends on the situation. If it’s a rage farming attack, for instance, there’s no use going against the originators, the bad actors. That’s no use at all. You need to focus on the…
Neville Hobson
Can you just explain what rage farming is, Philippe?
Philippe Borremans
Sorry, yeah, rage farming. Rage farming is a technique where a bad actor, and most of the time it’s about making money, organizes an attack on your brand, and they make money simply through the algorithms on the different platforms, which then bring in sponsors and clicks and what have you. Rage farming is an attack that takes your normal, standard communications, your next press release, your next presence at a conference, your CEO’s next speech, takes it out of context, and looks at how it can be repurposed with one single objective: to trigger rage.
So a very practical example: imagine that a retail company decides to make unisex uniforms. Men and women dress the same; we don’t make a difference. You could think, wow, great, why not? Taken out of context, that could be translated by bad actors into: look, they don’t want women to be women anymore, the whole woke framing. They would reframe that and then proactively target that message, just out of context, to communities online who are much more conservative, who have a much more conservative worldview. Those communities would then be triggered by rage, start to spread it, and then you have that whole system going. That’s rage farming. And why did we come to rage farming? I lost my…
Neville Hobson
Yeah, you lost your train of thought. No, we had actually moved on from that question, which was about the steps to take, and you were going through each of the steps…
Philippe Borremans
Yeah, so countering rage farming is one of those things you need to do, and again, it’s that actual listening. In the context of rage farming, it’s no use at all to go after the bad actors, because they’re in it to make money and most of the time they have a whole network behind them. It’s no use at all. Instead, you need to focus on the audiences you can at least still inform. So in this example, not even the more conservative community online, because you will not change their minds. That’s their cultural background; that’s how they think about the world. So you actually need to know very well where you can make a difference and where you can’t, which is not always easy.
Shel Holtz
Let’s stick with this theme of gaps. The respondents to your survey were mostly C-suite, director-level professionals. Is there a generational gap in how senior leadership views these risks compared to lower-level, more junior practitioners? They’re the ones monitoring the feeds, and they’re the ones who will be tasked with implementing a response. Is there a gap between them and senior leadership in terms of how they perceive these issues?
Philippe Borremans
I couldn’t get that out of the survey. I could probably look at it more deeply, but my gut feeling, based on experience, is that you have some senior leaders who definitely see these risks, but on a very strategic level. And there is a gap in translating that into, shall we call it, an operational level. That’s what other responses and other questions tell me. Like: we know it’s more difficult to manage trust compared to five years ago. Yeah, but you don’t have the benchmark, so how would you know? It’s just a gut feeling that you have. We know AI-generated crises are a top risk. Yeah, but you don’t have your protocols. So it’s that translation, I think. There’s a top layer, I think, that actually reads all the reports and meets with senior peers, and they talk about geopolitics and the world changing and polycrisis and what have you, and they understand.
But then how do you translate that into actual practical, operational things? How do you upskill your communications team today so that they can actually face all these new issues? How do you change and adapt your crisis communication preparedness planning? How do you integrate that? Those are the kinds of practical questions that probably don’t trickle down. And of course, if down there you have more junior people, they maybe wouldn’t know the best way to go about it. That’s my feeling.
Neville Hobson
I’d like to talk a bit about testing. You can have a crisis communication plan, indeed more than one, but it’s not much use if you never test it to see if it all works. From what I’ve heard anecdotally over the years, I suspect that’s a major hurdle for many who perceive it as, you know, organization-wide: this is a massive project to get through. And yet I’ve often wondered, do you even have to do that? Your report talks about things like embracing micro simulations, which maybe you could talk about a little. I also found something quite intriguing: make testing a governance requirement. And I suppose that makes sense in jurisdictions where testing isn’t any kind of legal requirement, so you do it voluntarily. But can you talk a little about embracing micro simulations in particular, and maybe some examples of how to make testing seem both less daunting to a communicator and easy enough that they can actually implement some kind of testing process?
Philippe Borremans
Yeah, and that’s one of the things that comes back when I talk to people, communications colleagues who don’t specialize in this. I was recently at a conference and someone said, I am so convinced of this, but how do I translate it and make the case to my management, because they see only the costs? Now, I actually have a little AI assistant that I trained to calculate the return on investment of these things. But people think a simulation is this big thing, right? You see ambulances coming, and you see a big war room with big screens and what have you. It doesn’t have to be that kind of simulation exercise every single time. Organizations can start from the minimum, which is micro simulations.
I have a small micro simulation platform that I coded myself. I do workshops with it. It’s a half-hour exercise, a lunch-and-learn, right? Get people around the table with a sandwich and say, okay, what is the crisis we’re going to role-play today? Half an hour, you get feedback, fine. You can do that every single week. People find it fun, but it trains the muscle, because it’s based on real scenarios with real feedback. Then there are tabletop exercises. You have many different forms and formats. They can range from one hour to three hours. They can be functional exercises; they can be completely invented exercises. And let’s not forget, I mean, many communications people have no experience with this at all.
There are actually simulation kits you can pay for, and they’re not expensive: download one, read through the manual, and go through the motions. That also trains you, maybe as a non-specialized communicator, in what it actually means to manage and run good simulations. But the most important thing is that it doesn’t have to be the big thing. You can do micro simulations on a very regular basis and make them fun. You can do tabletop exercises every quarter, hopefully with an executive team, but put it in the agenda. And if you are in certain industries, petrochemicals and what have you, I would actually say you need a full-scale simulation exercise every year.
The point is you can actually position this not as a cost center but exactly as corporate insurance. We know, based on research and facts, that organizations who train their plan not only get through a crisis much quicker but also rebuild after a crisis much quicker, and that’s where the money goes. If it takes you two years to rebuild, that’s a lot of money. If you can shorten that by half or even more, that is the actual gain. And that comes from training, training, and training. There is a reason I was in the Navy. There’s a reason the captain of the ship did fire exercises every single day. And after the 52nd one, you go, why are we doing this stupid thing? But when you actually have a fire, you know why.
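The insurance framing can be made concrete with back-of-the-envelope arithmetic. Every figure in this sketch is invented for illustration; the point is only the shape of the calculation: the value of preparedness is the shortened rebuild period, weighted by how likely a qualifying crisis is in a given year.

```python
def preparedness_roi(monthly_loss, rebuild_months_untrained,
                     rebuild_months_trained, annual_training_cost,
                     crisis_probability_per_year):
    """Expected annual value of crisis training, insurance-style.

    monthly_loss: revenue lost per month while rebuilding after a crisis.
    The gain from training is the rebuild months saved; the expected
    value weights that gain by the yearly probability of a major crisis.
    """
    months_saved = rebuild_months_untrained - rebuild_months_trained
    gain_if_crisis = months_saved * monthly_loss
    expected_annual_gain = gain_if_crisis * crisis_probability_per_year
    return expected_annual_gain - annual_training_cost

# Illustrative numbers only: a two-year rebuild halved to one year,
# $200k lost per rebuild month, a one-in-ten chance of a major crisis
# per year, and $50k spent annually on simulations and training.
print(preparedness_roi(200_000, 24, 12, 50_000, 0.10))  # prints 190000.0
```

Even with a modest 10% crisis probability, the expected gain comfortably exceeds the training budget in this example, which is exactly the argument Philippe suggests making to a management team that sees only the cost line.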
Shel Holtz
Ten percent of organizations never test their plans at all, according to your report. What happens to these organizations when they’re confronted with a black swan event? I mean, can you wing it these days?
Philippe Borremans
It’s a good question. Can you wing it? Some organizations wing it and somehow get through, and you’re like, wow, how did you do that? But that’s more luck than anything else, I think. Now, black swan incidents, of course, are interesting because those are the ones you cannot plan for. Well, you cannot plan for them, but you can prepare for them. Because if you build that agile muscle around crisis management and crisis comms, you are already much better prepared than somebody who doesn’t have that agile muscle, who is strictly following protocol and old-school plans and is then suddenly confronted with a black swan incident.
And that’s why I’m a strong believer in working much more on agility. Again, you need plans, you need protocols, fully agree, but you actually need an agile communications team. We know things go very fast and come from every single corner. You need that mindset, that agility muscle. And then teams are actually ready to take what comes and move in the moment.
And I do see a link to another thing which is very difficult for communicators of my generation, because we were trained like that. At least I was, at PR school: you do not communicate until you have all the facts. It took me a couple of years to switch that off. When I worked for UN agencies during the pandemic and other epidemics, you actually needed to communicate without having all the facts. And it’s very uncomfortable. It goes against your training. But everybody in communications today should have that skill, because most of the time you will not have all the facts, and the facts will change day after day after day. So you need that muscle again, that agility. That’s the most important thing today, I think.
Neville Hobson
I would agree with that view, Philippe.
Shel Holtz
The last question I have before we get to our traditional final question: you’ve got a PR manager in a company who wakes up tomorrow morning, finds your report, reads it, and realizes they’re part of the 77% with no AI protocol. What should they do? What are the first steps they should take to update their crisis plan?
Philippe Borremans
I think if they don’t have a protocol now, it means it hasn’t been on the agenda or on the radar. They’ve heard a couple of things. So first of all, get informed about what it actually means. What is a deepfake? What are the different things that could happen? Then see, okay, how relevant is that for our organization? And then translate that into a couple of very basic steps: the monitoring, the protocol setup, and the what-if exercises.
What if this happens tomorrow? What would we actually do? What would it mean for our audiences, for our executives, for our stakeholders? And how do we translate that? Not into a big plan and a long SOP, but simple steps. And most of the time it will not be a communications team of 25 people. It will be one or two, so maybe just split up the roles.
What do we do if a deepfake pops up tomorrow? How will we do the triage? Because you don’t have to react to everything. And if we decide to react, who are the first people we need to inform? Sometimes it’s about getting the very basics in place. That’s already much more than the 90% of others who aren’t looking at this at the moment.
Neville Hobson
Sound advice. And of course, we now come to that point: the question we didn’t ask you that you wish we had, or hoped we would. What would that be, if there is one?
Philippe Borremans
That would be, “Philippe, when do we have another drink in Brussels?”
Shel Holtz
Not soon enough.
Neville Hobson
I like that. That’ll do. Yes. It’s been a long time since we had that drink in Brussels, Philippe, so we ought to.
Philippe Borremans
Or when do we meet face to face again? Because that’s been a very long time as well. So yeah.
Neville Hobson
Well, you and I are in Europe, so it’s easy enough for Shel to come over here. And in fact, going to the States these days isn’t a very attractive proposition, I think, to many of us over here. But it’s been a terrific conversation, and I think you’ve shared some great insights for our listeners. Where can people get hold of you? How can people find you online?
Philippe Borremans
Well, the main thing: if they’re interested in the topic, it’s maybe a good idea to subscribe. I’ve got a free weekly newsletter where I talk about risk, crisis, and emergency comms and AI. That’s at wagthedog.io. And if they need support before, during, or after a crisis, there’s my corporate website, which is riskcomms.com.
Shel Holtz
And you’re also on LinkedIn, I presume, and sharing your insights there as well. Philippe, it has been terrific. Thank you so much for the time.
Philippe Borremans
Sure, definitely. No, thank you. It was really great seeing you again and we definitely have to find an excuse this year to meet.
Shel Holtz
I would love that.
Neville Hobson
Thank you.
The post AI risk, trust, and preparedness in a polycrisis era appeared first on FIR Podcast Network.
The 2026 Edelman Trust Barometer focuses squarely on “a crisis of insularity.” The world’s largest independent PR agency suggests only business is in a position to be a trust broker in this environment. While the Trust Barometer’s data offers valuable insights, Neville and Shel suggest it be viewed through the lens of critical thinking. After all, who is better positioned to counsel businesses on how to be a trust broker than a PR agency? Also in this episode:
Links from this episode:
Links from Dan York’s Tech Report
The next monthly, long-form episode of FIR will drop on Monday, February 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz: Hi everybody and welcome to episode number 498 of For Immediate Release. This is our long-form episode for January 2026. I’m Shel Holtz in Concord, California.
Neville Hobson: And I’m Neville Hobson, Somerset in the UK.
Shel Holtz: And we have a great episode for you today, lots to talk about. I’m sure you’ll be shocked, completely shocked that much of it has a focus on artificial intelligence and its place in communication, but some other juicy topics as well. We’re going to start with the Edelman Trust Barometer, but we do have some housekeeping to take care of first and we will start with a rundown of the short midweek episodes that we have shared with you since our December 2025 long form monthly episode. Neville?
Neville Hobson: Indeed. And starting with that episode that we published on the 29th of December, we led with exploring the future of news, including the Washington Post’s ill-advised launch of a personalized AI-generated podcast that failed to meet the newsroom standard for accuracy and the shift from journalists to information stewards as news sources. Other stories included Martin Sorrell’s belief that PR is dead and Sarah Waddington’s rebuttal in the BBC radio debate, whether communicators should do anything about AI slop, and no, you can’t tell when something was written by AI. Reddit, AI, and the new rules of communication was our topic in FIR 495 on the 5th of January, where we discussed Reddit’s growing influence. Big topic, and I’m sure we’ll be talking about that again in the near future. On that day, we also published an extra unnumbered short episode to acknowledge FIR’s 21st birthday. Yes, we started out on the 3rd of January 2005, and that’s a lot of water under the bridge in that time, Shel. And I think we had quite a few bits of feedback on that episode.
Shel Holtz: People dropped in and shared their congratulations. There were way too many of them to read and many of them were very, very similar. Just to share one, this is from Greg Breedenbach who said, “Congratulations, what a feat. I’ve been listening since 2008 and never got bored because you managed to keep it engaging and relevant. Thanks for all the hard work.”
Neville Hobson: Great comment, Greg, thank you. So for FIR 496 on the 13th of January, we reported on the call by the PRCA, the Public Relations and Communications Association for a new definition of public relations. We explored the proposal’s emphasis on organizational legitimacy, its explicit inclusion of AI’s role in the information ecosystem, and the ongoing challenge of establishing a unified professional standard that resonates across the global communications industry. That had a few comments.
Shel Holtz: That got a few comments. Gloria Walker said, “Attempts have been made from time to time over the decades to define and redefine PR. Until there is a short one that pros and clients and employers can understand, these exercises will continue. Good luck.” And Neville, you replied, you said, “You’re right, Gloria. This debate comes around regularly. One interesting precedent was the Public Relations Society of America led effort in 2011 in a public consultation to redefine PR. That process was deliberately open and received broad support from professional bodies and their members around the world.” And Philippe Borremans out of Portugal had a comment. He said, “Thanks for the mention of my comments. Hope it helps in the definition exercise.” Philippe, of course, wrote a LinkedIn article in response to the definition. There were some other comments in this episode, including one from Marybeth West. You can go find that on LinkedIn. This was a rather lengthy exchange between Marybeth and you that is just too long to include here.
Neville Hobson: Great. And then in FIR 497 on the 19th of January, just a week before we recorded this current episode, we unpacked the latest AI Radar report from BCG, the Boston Consulting Group, which says AI has graduated from a tech-driven experiment to a CEO-owned strategic mandate. We examined this evolution that places communicators at the center of a high-stakes transition as AI moves from pilot phase into end-to-end organizational transformation. One comment we had on that:
Shel Holtz: From our friend Brian Kilgore, who said, “Haven’t read the report yet, but will soon. Sometimes when I read a link first, I can’t get back to the comments.” But he continues to say, “I once took a job that was structured by Boston Consulting Group. My employer used the BCG report as the basis for the job description. It worked out well.”
Neville Hobson: Excellent. So that’s where we’re at. Some good stuff since the last episode. And of course, now we’re about to get into the current.
Shel Holtz: And yesterday I published the most recent Circle of Fellows, the monthly panel discussion with members of the class of IABC Fellows. This one was on mentoring. It was a fascinating conversation featuring Amanda Hamilton-Atwell, Brent Carey, Andrea Greenhouse, and Russell Grossman. The next Circle of Fellows—mark it in your calendar because this one’s going to be very interesting and maybe even controversial—this is going to be at noon Eastern time on Thursday, February 26th and it’s all about communicating in the age of grievance. This will feature Priya Bates, Alice Brink, Jane Mitchell, and Jennifer Waugh.
Neville Hobson: You’re such a tease, Shel, with that intro, I have to say. So yeah, go sign up for it, folks. I’d also like to mention that in December, IABC announced the formation of a new shared interest group, or SIG, that Sylvia Cambier and I are leading. It’s called the AI Leadership and Communication SIG. And I’m delighted that we have attracted 70 members so far. I’m also delighted to share that our first two live events are scheduled for February. On the 11th of February, we’re hosting a webinar for IABC members to introduce the SIG, explain why we formed it, what it stands for, and how it approaches AI through a leadership and communication lens. Then on the 25th of February, as part of IABC Ethics Month, we’re hosting a webinar on AI ethics and the responsibility of communicators. This is a public event open to members and non-members that explores the challenges and responsibilities communicators face when introducing AI, including transparency and trust, stakeholder accountability, and human oversight. We’ve included links in the show notes so you can learn more about these events and sign up as well if you’d like to.
Shel Holtz: Sounds great, I’m planning to attend those, schedule permitting. And that wraps up our housekeeping. Hooray! It’s time to get into our content, but first you have to listen to this.
Neville Hobson: Our lead discussion this month is the 2026 Edelman Trust Barometer, which landed last week at the World Economic Forum in Davos, Switzerland, with a stark framing: trust amid insularity. But before we get into the findings, a quick word on what the Edelman Trust Barometer actually is. Many of you may not know why this is significant. The Edelman PR firm has published the Trust Barometer every year since 2000, making this its 26th edition. It’s based on a large-scale annual survey across 28 countries, tracking levels of trust in four core institutions: business, government, media, and NGOs, alongside attitudes to leadership, societal change, and emerging issues. Over time, it has become one of the most widely cited longitudinal studies of trust globally, not because it predicts events, but because it captures how public sentiment shifts year by year.
After more than two decades of tracking trust globally, Edelman’s core finding this year is that we are no longer just living in a polarized world, but one where people are increasingly turning inward. That’s that word “insularity” I mentioned earlier. The report suggests that sustained pressure from economic anxiety, geopolitical tension, misinformation, and rapid technological change is reshaping how trust works. Rather than engaging with difference, many people are narrowing their circles of trust, placing greater confidence in those who feel familiar, local, and aligned with their own values, and withdrawing trust from institutions or people perceived as “other.” At a headline level, overall trust is broadly stable year on year. The global trust index edges up slightly, but that masks important differences. Trust continues to be significantly higher in developing markets than in developed ones, where trust levels remain flat or fragile. As in recent years, employers and business are the most trusted institutions globally, while government and media continue to struggle for confidence in many countries.
What is notably sharper this year is the distribution of trust. The income-based trust gap has widened further, with high-income groups significantly more trusting than low-income groups. Edelman also finds growing anxiety about the future. Fewer people believe the next generation will be better off, and worries about job security, recession, trade conflicts, and disinformation are at or near record highs. A defining theme running through the report is what Edelman calls insularity. Seven in 10 respondents globally say they’re hesitant or unwilling to trust someone who differs from them, whether in values, beliefs, sources of information, or cultural background. Exposure to opposing viewpoints is declining in many countries, and trust is increasingly shifting away from national or global institutions towards local personal networks: family, friends, colleagues, and employers. Compared with last year’s focus on grievance and polarization, the 2026 report suggests a further step from division into retreat. The concern is not just disagreement, but disengagement—a world where people are less willing to cross lines of difference at all.
In response, Edelman positions trust brokering as a necessary answer to this environment—the idea that organizations and leaders should actively bridge divides by facilitating understanding across difference rather than trying to persuade or convert. This concept sits at the center of the second half of the report. It’s also worth noting that Edelman’s framing, particularly around trust brokering and the role of institutions, has attracted a number of critical responses. We’ll highlight some of those critiques in our discussion alongside our own perspectives and what this year’s findings mean in practice. Taken together, the 2026 Trust Barometer paints a picture of a world where trust hasn’t collapsed, but it has narrowed, becoming more conditional, more local, and more shaped by fear and familiarity than by shared institutions or common ground. That raises important questions about leadership, communication, and the role organizations are being asked to play in society. So let’s unpack what Edelman is telling us this year. What stands out in the data where it feels like a continuation of recent trends and where this idea of insularity marks something more fundamental in how trust is changing? Shel?
Shel Holtz: Well, we would be remiss if we didn’t acknowledge that this annual ritual has attracted a torrent of criticism over the years. Criticism raises some uncomfortable questions about what we’re actually measuring and, more importantly, whose interests the barometer serves. Now, none of this minimizes the value of the data that has been collected. For the eight years that I have been working for my employer, I have extracted points that I think are relevant and share these with our leadership. I’ve already undertaken that exercise this year. So what I’m about to share with you is a critique, but I don’t want anyone thinking this means you should ignore the report. It just means you should apply some critical thinking as you go over this information.
And let’s start with the most fundamental critique: the methodology and sample selection. Clean Creatives, which is a climate advocacy organization, has documented how Edelman’s country selection appears strategically aligned with the firm’s client base. The United Arab Emirates, for instance, was only added to the trust barometer in 2011, conveniently right after they became an Edelman client in 2010. And wouldn’t you know it, the barometer regularly finds that trust in the UAE government remains among the highest in the world. And by the way, that’s a quote, “remains among the highest in the world,” findings that are then dutifully promoted by state media.
Consider the question: is the trust barometer measuring trust or is it manufacturing it for the C-suite? The issue gets even more problematic when you look at the top of the leaderboard. Six of the highest-ranked governments in recent editions—China, the United Arab Emirates, Saudi Arabia, Indonesia, India, and Singapore—are rated by Freedom House as either not free or partly free. Researchers studying authoritarian regimes have identified what they call autocratic trust bias. It’s a phenomenon economist Timur Kuran calls “preference falsification.” In other words, people don’t exactly feel free to reveal their true opinions when they might face some sort of prosecution for indicating that they don’t trust their government.
And here’s where David Murray’s recent critique hits the nail on the head. David is a friend of mine. He’s a friend of the show and he has been an FIR interview guest. And he published a takedown of what he calls this wearying annual ritual. David points out the sheer absurdity of Edelman’s latest focus: insularity. The 2026 report claims that seven in 10 people are insular, as you mentioned Neville, retreating into familiar circles. Edelman’s solution, as you mentioned again, is the trust brokers. And of course, the report finds that employers are the ones best positioned to scale this trust brokering skill set. But as Murray observes, there’s something deeply hollow about a global PR machine using AI and always-on monitoring to lecture us on the human skill of listening without judgment. It’s a case of “human hires machine to reassure self he is human.”
Now consider that Edelman is a $986 million global PR firm whose stated purpose is to evolve, promote, and protect their clients’ reputations. So when the research concludes year after year that business must lead and that my employer should be the primary trust broker, you have to ask: is this research or is this a pitch deck? Is Edelman documenting a phenomenon, or are they selling a solution that just happens to require companies to hire more communications consultants to deliver conflict resolution training? There’s also the question of academic rigor. Despite its massive influence, Edelman hasn’t made the full data set available to independent researchers. When their 2023 findings about polarization were criticized for lumping democratic and authoritarian countries together, they produced a reanalysis, but only after removing data from China, Saudi Arabia, and the UAE. And, surprise, the core finding—that business must lead—remained intact.
The conflict of interest concerns extend even further. Edelman has been documented working with fossil fuel giants like Shell, Chevron, and Exxon Mobil. They were one of the largest vendors to the Charles Koch Foundation, yet the barometer presents findings about climate change and business ethics without disclosing these relationships. Peer-reviewed research found Edelman was engaged by coal and gas clients more than any other PR firm between 1989 and 2020. When a firm with that client roster tells us that business is the only institution that is both ethical and competent, we should probably raise an eyebrow. Look, I’m not saying the underlying trends—polarization, information chaos, erosion of truth—aren’t real. These are very serious shifts in our reality, but we need to be critical observers of the research. We need to ask who benefits from the conclusion that employers should step into the void left by failing democratic institutions and who profits from the narrative that CEOs, not citizens, should lead societal change? The Edelman Trust Barometer has become the ultimate gathering of elites at Davos telling each other what they want to hear. It provides a veneer of data-driven legitimacy to corporate overconfidence. But if we’re serious about rebuilding trust, we might want to start by questioning the research that so conveniently serves the interests of those who are producing it.
Neville Hobson: Yeah, that’s quite a scathing analysis. I read David Murray’s blog post—a really good, entertaining read in his inimitable style. Another piece that makes points right up there with the critiques you raised is a post by Sharon O’Day. Sharon’s a digital communication consultant based in Amsterdam. I think she’s on the button with most of what she writes; I read her content on LinkedIn frequently. She’s got about 82,000 followers there, so she’s got some credentials and credibility. The headline of her article for Strategic Global sets the scene for what she writes: “Employers are the most trusted institution—that should worry you,” says Sharon. She goes on to describe what the report is and the big finding that “my employer” is now the most trusted institution.
She warns that before internal communicators rush to embrace “trust brokering”—Edelman’s proposed solution to all this—we should ask what kind of trust we’re actually talking about. She goes on to summarize what, in her view, Edelman gets right. The trust barometer lands strongly, she says, because it tells people what they already suspect, but with graphs. I did like that little bit. She then talks about the seductive appeal of trust brokering, and I thought this was a sharp analysis. Edelman’s solution is trust brokering: help people work across difference, acknowledge disagreement, translate perspectives, surface shared interests. Employers, as the most trusted institution, should facilitate this. You can see why this resonates, she says; it offers organizations a constructive role without being overtly political. For internal communicators, it suggests an evolution from message delivery to dialogue facilitation. It fits our existing narratives nicely, she says.
But the problem isn’t that this is wrong. It’s that it treats trust as primarily a relational challenge, when in most organizations it’s fundamentally structural. The core weakness, says Sharon, is assuming trust is an emotional state that can be rehydrated with better listening. She says trust is a systems problem, in fact. Workplace mistrust is often entirely rational, she says. People distrust organizations because they’ve watched restructures framed as “growth,” AI introduced without safeguards, workloads expand as headcount contracts, risk pushed downwards while control stays at the top. That’s a pretty keen assessment, I think, of reality in most organizations. And she notes being asked to engage openly feels less like inclusion and more like exposure. Frame trust as sentiment and the solution defaults to messaging. Understand trust as system behavior and the role shifts towards making systems legible: how decisions are made, where constraints sit, what won’t change.
And she then talks about when insularity becomes moral judgment, reminding us this now applies to 70% of people globally, according to the trust barometer. The danger: this subtly relocates responsibility. If trust is low because people are insular, help them become more open. But what if mistrust is entirely rational? And she warns again that trust isn’t a moral virtue; it’s a calculation people update based on what organizations do, not what they say. Trust in an employer is not the same as trust in a democratic institution. It’s shaped by dependency as much as belief. Your employer controls your income, your professional identity, and often your healthcare and visa status. That changes the dynamic.
So she winds up with the hard truth. The most worrying thing about people trusting their employer more than anything else is that they may not have anywhere else left to put that trust. That’s not a mandate to become society’s repair shop, says Sharon; it’s a warning about what happens when you’re the last institution standing and you cock it up. For communicators, the task isn’t to become trust brokers. It’s to tell the truth about the system people are inside: how it works, where constraint sits, what won’t change and why. Trust collapses when people stop expecting honesty about how decisions get made and who benefits.
I think that last bit in particular is a hard dose of reality. I suspect she’s saying—I’m interpreting her words here—that communicators are part of the game; they are not telling the truth about the system people are inside. That’s quite an indictment to slam down on the table in the midst of this. Yet I think it’s a valid point to raise for discussion, whether you agree or disagree; it’s worth considering what she says. Are those of us who work in large organizations in particular—communicating what the organization is doing, what the leaders are saying, what’s happening—simply regurgitating the top-down perspective of an untruth? Maybe that’s one way of putting it. So it adds to the questioning of Edelman’s motives and responsibilities. I think the people you noted, like David Murray, have done a pretty good job at that. I’m not questioning that aspect of it all.
I have found, largely, what Edelman talks about to be valid, notwithstanding those questions about their motives and often undisclosed relationships. After all, they interview 20,000-plus people each year in God knows how many countries, and these aren’t folks who have axes to grind in the way it’s alleged Edelman does. So I think it has credibility in that regard. I’m equally aware of a lot of the criticism that questions that credibility; I just don’t go as far as some others do. I have found, with this current report as with earlier ones, value in the information Edelman have put together and are sharing. It’s useful to get a sense of this, particularly the annual changes in sentiment that we’ve reported on in For Immediate Release over the years. I can remember being at the very first Edelman Trust Barometer presentation, when Richard Edelman was in London—that was in 2000, I think, or 2001. Beginning of the century, 26 years ago anyway. So it is interesting, Shel. And I think the criticisms are worthy of debate, not dismissal, unless you are quite clear you’ve got something else to say. The report is a dense, detailed document. I found a good place to start to get a sense of what it’s all about is the top 10 findings—the snapshot views of each of the top points under the heading “Trust Amid Insularity.” So it’s definitely worth paying attention to, putting it in the context of what the critics say.
Shel Holtz: And frankly, the longitudinal nature of this research—what David Murray called “wearying”—is actually where much of the value comes from: the ability to track change over time, as with any research. Look at the engagement studies companies do among their employees: if you couldn’t see how any element of the survey has improved or declined over time, you’d be getting just one snapshot in time, which is of far less value. So there’s great value there, and as I say, I think there’s great value in a lot of the data in this survey. The fact that the focus is on insularity should not be any surprise; we’re seeing this every day. It’s interesting—I think it had to be 35, 40 years ago that IABC’s Research Foundation, the lamented, long-gone IABC Research Foundation, did a study on trust. And I remember the definition they gave for trust. We were talking earlier about the definition of PR; the definition of trust is pretty fixed. It’s the belief that the party in question is going to do the right thing. It’s that simple. And the question becomes: what is the right thing? Among people who are inside their bubbles, with that insularity, what do they believe the right thing is? That is probably very different from what people in a different bubble believe.
And this, I think, is where that “trust brokering” idea has some legitimacy, even if it may not be presented in the best way. Telling the truth alone isn’t enough to address this. If we’re not telling the truth, then we simply have no stake in this game—you can’t go anywhere from there. But if you are telling the truth, how do you get it into the heads of the people who are not paying attention to you? They’re listening to people who say you can’t be trusted. I think that comes through engagement: not through publication, not through telling. To some extent through listening—you must do that to find out what their issues are and what they do believe. But at some point you have to start engaging with people. The profession is called Public Relations, not Public Content Distribution, and those relations have to have some give and take, some two-way flow. So if people don’t trust you, and they’re misinterpreting you or listening to false information delivered by people who have an interest in taking your organization or institution down, you need to reach out to those people and start to engage them. And I absolutely agree with whoever said this is the direction we need to be heading in. I think they were talking about internal communications becoming more dialogue-driven, but I think that’s true of the external side too.
Reaching people who are in bubbles is extremely difficult. I’ll tell you, I was having a conversation with a friend of mine who, I have learned, is on the opposite end of the political spectrum from me. And I told him, “You know, I watch Fox News on a fairly regular basis. I find it important to know what the people on the other side of the political spectrum are hearing, what they believe, what they think, so it can inform my view of things.” It doesn’t change my view, but it certainly informs it when I’m having conversations or considering how to reach somebody. I said to him, “You ought to be doing the same. You ought to be watching some of the media that presents views contrary to your own, and understand them.” And his answer to me was, “Stop watching Fox News.” He felt that I should stay in my bubble. So this is a pretty entrenched perception that people have, and it’s become very ingrained in the cultures of these insular regions, if you want to call them that. How do you reach people? I think that’s the challenge for people in communications right now: how do you reach the people who just are not interested in hearing what you have to say? They want to hear what your critics have to say, and that’s all they’re listening to.
Neville Hobson: Yeah, that makes sense. And it supports one of the key elements of this latest report. Traditionally in organizational communication, part of your goal is to get everyone lined up with the same message—we’re all singing from the same sheet, it’s all unified, and we go forward. This is a change. It’s not about aligning people who are different; it’s about understanding the differences and still being able to engage with them, recognizing those differences. And that makes complete sense to me in the current geopolitical environment, because the driver for what we’ve seen over the past few years, unquestionably, is what’s happening in the States since Donald Trump became president for the second term. As Mark Carney said in his speech at the World Economic Forum in Davos, this isn’t a transition; we’re going through a “rupture.” It was a very good speech. I’m not so sure it is a rupture—maybe it is a transition; it doesn’t matter what you call it—but the reality is that people are afraid in many countries. Just watch the TV news and you’ll be scared most days, particularly when you see things you couldn’t imagine happening in some of the countries where they are happening, notably the US, with crackdowns in various parts of society. It’s truly extraordinary.
I think that is a big influence on this insularity, people withdrawing. Where the report talks about people wanting to engage with people with similar views and beliefs rather than different ones, I seem to remember a few years back—I’ve forgotten which year—when the Edelman Trust Barometer published something quite radical: the most trusted person in the world, if you will, was “someone like me.” This is that again, is it not? It’s “someone like me,” except the dynamics are very, very different to what they were back then. And one of the things I feel we really need to pay close attention to—aligned with what you said about engaging with people outside their individual bubbles—is that recognition of difference. People need pushing in the right way. And again, this comes back perhaps to Sharon O’Day’s critique that we’re not telling the truth, that we need to tell a different version of the truth, if that doesn’t sound kind of weird. There is always more than one version of the truth, and the question is which one you trust. That’s a big challenge for communicators, because it surely would be easy for a senior-level communicator, particularly an advisor to the C-suite, to see when the messaging coming out of the C-suite is simply not the right messaging. I’m not saying they’re not telling the truth, far from it; what they believe to be the truth may not actually reflect what is happening. And that’s where listening really becomes key.
So it means, I suppose, that communicators can rethink this whole structure in light of what Edelman’s saying, though not exclusively because of it. Take a look at one of the key findings, the first one Edelman mentions: “insularity undermines trust.” That’s something I grabbed onto when I wrote a reflective blog post about this a few days ago: when people withdraw into themselves and stop engaging with others who hold different views, they can undermine what leaders in an organization are trying to do—by not cooperating, by simply not doing it, or even by actively dissing it. Is that a new thing? Maybe not, but it certainly has a mass impact if you see that sort of thing going on. “Mass-class divide deepens” is another finding they talk about—the gap between high- and low-income groups. These are the bigger-picture issues in our society, amplified by these changes in geopolitics, and that is not a good thing. And institutions are falling short: the four big institutions I mentioned at the start are falling well short on addressing this.
The phrase “trust brokering”—I really don’t like that, to be honest, Shel. It sounds gimmicky, like a catchphrase someone’s come up with, which I suspect is what’s prompted a lot of the criticism of it. I’ve even seen some people say, “Wait a minute, trust broker… isn’t that what communicators have been doing for years?” Now we’re calling it trust brokering. So we need to get past this labeling confusion, I think, and look at what we must do to help leaders in particular do the right thing in their organizations and in how they communicate, and to enable—empower, if you like—communicators to take all this forward. But there’s lots to pick from this report, I think, Shel.
Shel Holtz: Yeah, I wonder how many PR agencies will soon announce that they’re launching “trust brokering units,” now available to engage with your organization. I’m going to invoke the IABC Research Foundation one more time. Their seminal work was the Excellence Study—Excellence in Public Relations and Communications Management—an outstanding effort. And the primary work that came out of that was a review of the academic literature on all of this. It’s a rather lengthy book; I’ve read it, I still have it, and I still refer to it. One of the things I learned when I read it, way back when it came out, is the notion of “boundary spanning.” It’s an academic term from the PR literature, and it suggests that public relations people need to understand the perceptions and perspectives of the opposition so well that when they talk about them inside the organization, people will suspect the PR people have switched sides. You understand it so well that you can basically talk like the opposition does and convey their concerns and critiques as if you were one of them. I don’t know how many public relations people are doing that these days. Given the results of this research, it seems to me that boundary spanning is becoming a necessary tactic for public relations practitioners. If that’s not something you’ve looked into and this is the work you do as a communicator, it’s something to pay attention to.
Neville Hobson: Yeah, I would agree with that. So there’s lots to absorb in this. We’ve touched on the prominent points, but there’s one that struck me as interesting: the tenth and last on Edelman’s list of top 10 findings, “Trusted voices on social media open closed doors.” They say people who trust influencers say they would trust, or consider trusting, a company they currently distrust if it were vouched for by someone they already trust. Think about that. That’s interesting, because separately from the Trust Barometer, we’re seeing influencers as a group, broadly speaking, under threat for lack of credibility in many cases. Some of the things I’ve read about influencers doing or saying in recent months have been genuinely face-palming. But this finding rings true to me, and maybe it points to an easier way for communicators to engage with people, perhaps in slightly more open ways than in the past. So again, it’s a thought point worth considering, even though it sits at the bottom of Edelman’s top 10 list. Worth paying attention to, though, I think.
Shel Holtz: Absolutely. And you see opinion polls showing a shift in support for one thing or another based on what some prominent influencers are saying when they change their view. Look at the “bro-verse” in the podcast world—people like Joe Rogan, for example—who were very supportive of Donald Trump when he was running for president, at a time when the independent vote was also very supportive of him. The bro-verse has shifted with what’s going on in Minneapolis and some other cities; you’re hearing Joe Rogan say, “What is this? The Gestapo in the streets now?” And now you’re starting to see opinion among independent voters shift away from Trump. Now, this is a correlation, not a causation. But still, it’s interesting, and it seems to validate that 10th point among Edelman’s top 10.
Neville Hobson: Agree. So lots to unpack here. We’ve touched… we scratched the surface basically and shared some opinions of our own. There’ll be links to the report and some other content in the show notes if you want to dive into it.
Shel Holtz: And we’re going to switch gears now and talk about artificial intelligence for at least the next two reports. These are very complementary reports—the one I’m about to share and, after Dan York’s report, Neville, your story. So let’s get started. There is a striking disconnect happening in corporate America right now, and it comes down to a gap in perception: leaders think their AI rollouts are going great, while the view from the cubicle is “not so much.” Let’s start with the numbers. A Gallup survey of over 23,000 workers found that 45% of American employees have used AI at work at least a few times. Sounds encouraging, doesn’t it? But wait—only 10% use it every day, and even frequent use sits at just 23%. So despite a year of breathless hype and massive corporate investment, actual day-to-day adoption remains marginal. And here’s what may be the most telling statistic: 23% of workers, including 16% of managers, don’t even know whether their company has formally adopted AI tools at all. Think about that. Nearly a quarter of your workforce is so disconnected from the organization’s strategy that they can’t say whether one even exists. This gap suggests a shadow IT problem—employees using personal tools like ChatGPT while remaining completely unaware of their employer’s official path forward—which is probably what we’re seeing in a lot of organizations.
The adoption pattern breaks down along predictable and frankly troubling lines. Usage is concentrated where you would expect: technology organizations (76% of employees are using AI) and finance companies (58%). But in retail and manufacturing, those numbers crater: 33% in retail and 38% in manufacturing. AI adoption remains stuck where it always has been—among the people already closest to the technology. Now, contrast this with JPMorgan Chase, which has become the poster child for successful enterprise AI adoption. When they launched their internal LLM suite, adoption went viral; today, more than 60% of their workforce uses it daily. That’s six times the national average. What did JPMorgan do differently? Their chief analytics officer, Derek Waldron, says they took a “connectivity-first” approach. Instead of giving employees a login to a generic chatbot and calling it a day, they built AI that actually connects to the bank’s internal systems—their customer relationship management package, their HR software, their document repositories. An investment banker can now generate a presentation in 30 seconds by pulling real internal business data. The bank also understood the Kano model of satisfaction: they made the tools genuinely useful and kept usage voluntary. They didn’t mandate usage; they bet that if the tool solved a problem, word would spread organically. They also ditched generic literacy training for segmented training—teaching people how to use AI for their specific work.
Now here’s where things get a little uncomfortable. JPMorgan has been candid about the consequences: operations staff are projected to decline by 10%. While new roles like context engineers are emerging, the bank hasn’t promised that everyone will keep their job. Meanwhile, at most other organizations, we’re hitting a “silicon ceiling.” BCG, formerly Boston Consulting Group, found that while three-quarters of leaders use generative AI weekly, use among frontline employees has stalled at 51%. The problem is a leadership vacuum. Only 37% of employees say their organization has adopted AI to improve productivity, and a separate Gallup study found that even where AI is implemented, only 53% of employees feel their managers actively support its use. Then there’s the trust issue: nine in 10 workers use AI, but three in four have abandoned tasks due to poor outputs. The issue here isn’t access; it’s execution. People don’t know how to prompt or how to critically evaluate the results. Worse, 72% of managers report paying out of pocket for the AI tools they need to do their work. In response, some companies are taking a hard line. Meta has announced that starting in 2026, performance reviews will assess AI-driven impact—in other words, AI use is no longer optional at Meta. So where does this leave us? We have bullish leaders making massive investments while their workers are either unaware of the strategy or worried that using AI makes them look replaceable. The fundamental problem is that companies are deploying AI as if it’s just another software rollout, and it is not. It requires rethinking workflows, investing in specific training, and building tools that connect to real business data. The gap between AI hype and actual adoption isn’t going to close until organizations figure that out.
Neville Hobson: There’s a lot in there, Shel, that is interesting, I have to say. I think JPMorgan is a use case that’s definitely worth studying. I’m reading the VentureBeat article about that. It talks about “ubiquitous connectivity”—great, two words put together—plugged into highly sophisticated systems of record. You mentioned how integrated this was with all their internal systems. So you can see things there that you don’t hear other companies explaining in that way. The forward-looking approach… they’ve got leaders who are treating this the right way. As you said, they didn’t just enable this and then say, “Here you go”; they literally developed it as an ongoing thing in conjunction with employees, which is really good. I think, though, that the alarm bells ring in the first part of your report, where you talked about how employees say they’re fuzzy on their employer’s AI strategy, with many not knowing whether their employer even has one. I’d like to think that’s not the majority, but I fear that view may be misplaced, because the companies that don’t leave employees in the dark—in other words, the ones that do it the right way—are the ones reaping the benefits. And there are simple lessons to learn from that.
Workers who use AI are most likely to use it to generate ideas and consolidate information, Gallup says in introducing their survey report. That makes sense, doesn’t it? So you’ve got to enable that in an organization. There’s more on this in the report you mentioned, which we’ll get to after Dan’s report and which expands on this quite significantly. And there are lessons to be learned from some of the things we’ve discussed on this podcast in recent episodes. You mentioned Boston Consulting Group; we’ll talk a bit more about the survey they did that paints a very different picture on this. Still, I’ve seen other reporting, including some of what you shared here, about the huge gap between the views of leaders and the opinions of employees on the state of AI in their organizations and the benefits it’s supposed to bring. There’s the Harvard Business Review report you shared as well—there’ll be links to that in the show notes—that says, “Leaders assume employees are excited about AI; they’re wrong.” And they’ve got some really credible data to back that up. The higher you sit in the organization, the rosier your view. Is that not true of many things in an organization, I wonder—that you’re insulated from some of the reality? Is there something communicators can do to alleviate that little problem? I suspect so. These are disconnects that do not help the organization if you really do have blind spots like that. HBR’s survey covered 1,400 US-based employees: 76% of execs reported that their employees feel enthusiastic about AI adoption, but the view from those employees was not that at all—just 31% of them expressed enthusiasm. That’s a bit different from what the execs are saying.
So I wonder how we get to that reality and then add that to the climate of trust we discussed in the Edelman Trust Barometer and the landscape’s looking like a very tricky one for communicators in a wide range of areas. Add this to that list of concerns.
Shel Holtz: Yeah, this report has really been focused on adoption among employees. You’re going to take a different spin after Dan’s report, around the perception gap between executives and employees. But I think it comes down to mismanagement of the rollout of AI in, I would have to say, most organizations. A lot of different factors contribute to this, but leaders need to pay more attention to what they want from AI. Is it really just evaluating tools that have AI baked into them that we can bring into the organization? Or is it rethinking the organization writ large based on what AI can do in a more organic way? I love the point out of JPMorgan that an analyst can now create a deck in 30 seconds because the AI has access to all the internal data. That’s valuable; an employee can say, “That is something that is worthwhile to me.” Whereas if you give them access to Copilot because you have an Office 365 contract and everybody has access to it, and you say, “Here’s Office 365, godspeed,” and provide basic training that says, “Here’s how you write a prompt and here’s how you look for hallucinations and blah, blah, blah,” it doesn’t tell somebody in a particular role what this can do for them. They’re going to leave that saying, “Okay, I think I can craft a good prompt now. Why would I want to do that? What would I prompt for?” I think this requires much more attention from leadership and much more commitment to viewing this as a change initiative that has to be led from the top.
Neville Hobson: Yeah, your point earlier about this being about adoption and rollout as opposed to perception—they’re both connected, according to Harvard’s report anyway. They say: “When organizations see AI adoption as a way to make work better for employees and communicate that, as opposed to as a pursuit of efficiencies and productivity, AI efforts gain traction.” And that’s repeated in many of the surveys we could talk about. They communicate a shared purpose, involve employees in shaping the journey, and move people from resistance to enthusiasm. Makes total sense to me. The Harvard report also talks about employee-centric firms—I thought every firm was an employee-centric firm, but maybe I got that wrong. Their employees are, on average, 92% more likely to say they are well-informed about their company’s AI strategy and 81% more likely to say their perspectives are considered in AI-related decisions. That’s a huge percentage, I have to say. They’re also 70% more likely to feel enthusiastic and optimistic about AI adoption, reporting emotions such as empowerment, excitement, and hope rather than resistance, fear, or distrust. Communication and execution, hand in hand—that’s the pathway, I suppose. So slow employee adoption of AI is clearly the norm, by my judgment, based on what you’ve been saying and what I’m seeing in some of these reports. It makes me wonder: surely it’s a known status, if you like, that communicators can get hold of and do something about, I would have thought. So would we expect to see a change in that area? I hope so.
Great report, Dan. Thanks very much indeed. I enjoyed listening to your assessment of Wikipedia over the past 25 years. I’m a huge user of Wikipedia, and I’m as conscious as you and many others are of the challenges Wikipedia is facing with misinformation, disinformation, AI—the works. I’m looking with interest at how Wikipedia is addressing some of those things. I receive a lot of communication from them; I’ve been a donor for years to support Wikipedia. I’m pleased to see them recognizing the shifting landscape and doing something about including AI in some form in the editorial or editing elements of content on Wikipedia. It’s a challenge without question. So your take on being an editor all those years is interesting, Dan. I’ve done a bit of that, nowhere near as much as you have. I come across things I read on Wikipedia—I do read it quite a bit when I’m looking for information—where I’ll see something and think, “That’s not right.” And I might propose an edit in the talk pages; rarely do I dive in and edit unless it’s something so obviously wrong, or unless I’ve got a source I can cite. So yeah, it’s interesting. And I remember you mentioning your live editing streams on Twitch before. They’re pretty cool. Yeah.
Shel Holtz: I remember watching those during the pandemic. That was fun.
Neville Hobson: Yeah, so great recap, Dan, thanks very much, worth listening to. So let’s continue the conversation then on the views of CEOs and how they differ from employees’ views on AI introduction. I’m going to reference a Wall Street Journal story about a survey, headlined “CEOs say AI is making work more efficient; employees tell a different story.” Much of the public narrative around generative AI in organizations has been framed as a productivity story—one where AI is already saving time, streamlining work, and delivering efficiency at scale. We’ve touched on a lot of that in your earlier report, Shel, and our conversation there. But a recent Wall Street Journal report suggests there’s a growing disconnect between how senior leaders perceive AI’s impact and how employees are actually experiencing it day to day. So the Journal’s reporting draws on a survey by the AI consulting firm Section, based on responses from 5,000 white-collar workers in large organizations across the US, UK, and Canada. The headline finding is stark: two-thirds of non-management employees say AI is saving them less than two hours a week or no time at all. By contrast, more than 40% of executives believe AI is saving them eight hours a week or more. There’s a disconnect, it seems to me.
Beyond time savings, the survey highlights a clear emotional divide. Employees are far more likely to describe themselves as anxious or overwhelmed by AI, while senior leaders are more likely to say they feel excited about its potential. Many workers say they are unsure how to incorporate AI into their roles, and that whatever time is saved is often offset by having to check outputs, correct errors, or redo work. At the same time, companies are continuing to invest heavily in artificial intelligence, betting that it will drive future productivity and profit growth, even as evidence of near-term financial returns remains limited. Separate CEO surveys cited by the Journal suggest that only a small minority of leaders say AI has yet delivered meaningful cost or revenue benefits. The Journal also points to real-world examples where ambitious AI deployments have required human correction or reversal, reinforcing the idea that in practice, AI adoption is uneven, unpredictable, and highly dependent on context, skills, and judgment. This picture sits alongside other research we’ve discussed on For Immediate Release. In FIR 497, we talked about BCG’s AI radar report—that’s Boston Consulting Group—which argues that AI has moved beyond experimentation and is now a CEO-owned strategic mandate. That same research also places communicators at the center of managing expectations, trust, and organizational change. We’ve also seen consistent findings, including in the report you highlighted, we discussed literally a few minutes ago on slow employee adoption of AI, showing that while awareness of AI is high, employee adoption and understanding lag well behind leadership ambition.
Taken together, this raises an important tension. At the top of organizations, AI is increasingly seen as transformational and inevitable. On the ground, many employees are still grappling with how it fits into their work and whether it’s genuinely helping them do their jobs better. So what does this divergence between executive optimism and employee experience reveal about how AI is being introduced, communicated, and governed within organizations? Is the human side of AI adoption the real constraint on its promised productivity gains?
Shel Holtz: All of this data is fascinating. And one of the things that strikes me is that we tend to look at data from research, surveys, reports, studies about AI in business. What about people just generally—what do they think about AI just as people living their lives? There was a study that came out from Pew—this was just last September, so this is current data. I know four months is a million years in the AI life cycle, but still, this is fairly recent data. And what they found—I’ll skip the numbers and just give you some highlights here; this is a study out of the US—is that Americans are much more concerned than excited about the increased use of AI in daily life. A majority say they want more control over how it’s used in their lives. Far larger shares say AI will erode rather than improve people’s ability to think creatively and form meaningful relationships. People are open to letting AI help them with day-to-day tasks, but they don’t support it playing a role in personal matters: religion, matchmaking… they’re more open to it for data analysis, like weather forecasting, things like that. They also think it’s important to be able to tell whether pictures, videos, or text were made by AI or by humans, but they don’t trust their own ability to spot AI-generated material. Now, what you have to think about is that this is how people feel when they’re just out there living their lives, outside the context of work—and then they go to work. And they’re told, “AI, it’s going to be great.” And they bring all of these perceptions from their regular lives into the office, into the workplace. And that’s an impact, too. And I think this is something that internal communicators and leaders have to take into account when… I mean, there’s this expectation that AI is going to make your job easier, it’s going to make your product better, whatever your output is, it’s going to make that better… it’s just going to make everything rosy. 
And you’ve already got these biases based on perceptions just from life. We have to take that into account in our communication. I don’t think this was something anybody was thinking about when we were first introducing it because there was no research yet. It was as new to people living their lives as it was to people doing their jobs. But now you have these perceptions that have been formed about AI as just something that’s there as part of life. And if we don’t factor that into the communication that we do around AI in the workplace, we’re going to struggle to get people to trust this and to figure out how to employ it to make their work better and to support the goals of the organization.
Neville Hobson: Yeah. So the big question then is: what needs to happen to move the needle for communicators to grasp this challenge, let’s say? Think back to when we discussed BCG’s AI Radar report, where the clear message was that AI is no longer just experimentation and is now a CEO-owned strategic mandate. So CEOs are taking over control of that. Certainly, given the investment going into AI in organizations. And indeed, one of the findings in BCG’s report was that success in deploying AI, and the ROI on that deployment, is now a core measure of CEO performance, it said. Well, that takes it up to a whole different level. So it presents opportunities, I think, for communicators to help that CEO achieve the goals they’re going to be measured on by communicating to employees. So this kind of circles back to what we were discussing earlier. And I think if employees on the ground are still grappling with how it fits into their work—and let’s set aside the survey saying CEOs are in charge of all this now, this is great, everything’s going to be wonderful… the reality right now, today, is that if they’re still grappling with how it fits into their work, then that needs to be addressed. And you’ve introduced an interesting element to that picture, Shel, where employees of an organization are exposed to all the negative commentary about this externally, in their lives generally. They bring that to work with them and encounter what they see there. And they hear the CEO saying, “this is all great.” So these are genuine issues that must be addressed; otherwise, as you say, trust is going to be lacking all the way. And you put that in the context of Edelman’s Trust Barometer and the shifts in trust… and it’s not a pretty picture at all, I don’t think.
Shel Holtz: Yeah, in the framework for internal communications that I developed—it’s the subject of the book that I’m working on—one of the key roles on a day-to-day basis for internal communicators is consultation, and that’s consultation up the organization. And I think this is an instance where that role is paramount. We need to be talking to our leaders about this. The fact that the BCG radar report says that this has become a CEO issue doesn’t mean that every CEO has done that. I think there are a lot of CEOs who see this—still see this—as an IT issue. And even if they’re using it in their jobs, they don’t think it’s something that they need to be leading; it’s something they think that their CIO needs to be leading. And I think we need to present this data to our leaders. I think we need to talk about why this needs to be led by the business and not one of the support teams. Consultation is what we need to be doing at this stage in addition to maintaining the drumbeat of why this is effective and how you can use this with the frontline employees who are actually going to be using these tools to make a difference in the organization.
Neville Hobson: Yeah, I would agree with that. So therefore the answer to the question I posed when I finished the intro to this—is the human side of AI adoption the real constraint on its promised productivity gains?—I guess would be yes.
Shel Holtz: I would say absolutely yes, and I think that’s where organizations need to be shifting their investments. And I think I saw data that says they are shifting their investments. I think something like 60 or 70% of what organizations are investing in AI is now focused on the people in the organization.
Neville Hobson: That’s a good move.
Shel Holtz: Yep. Well, let’s leave AI behind for a bit and talk about something a little more strategic in the internal communication world. And that’s “alignment,” which has become one of those corporate North Stars that everyone nods at, but few organizations actually achieve. And there’s a paradox at the heart of this. The very act of trying to force alignment—meetings, memos, check-the-box town halls—can make the disconnect worse. Let’s start this discussion with three simple definitions that frame the alignment problem, courtesy of Stephen Waddington, whose PR credentials are far too many to list here. He says that leadership is the role of setting strategy and goals. Management is the process of measurement and continual improvement against those goals. Execution is delivery against those goals. All right, a pretty simple model there, right? Leadership, management, and execution. But here’s what happens in practice: something almost always breaks down, and more often than not it’s alignment—the invisible thread meant to tie strategy, management, and execution together is what frays first.
Now Zora Artis, who has been a guest on FIR interviews, and Wayne Asplund have been studying the alignment problem for years. Their latest research, the CLEAR Leaders Project, revisits strategic alignment because despite how important it is, the same problems keep recurring. They conducted confidential interviews with senior leaders across communications, HR, strategy, and operations to explore how alignment is understood, practiced, and experienced in organizations today. And here’s the uncomfortable finding: seven years after their previous benchmark study, the gap between alignment in principle and alignment in practice is just as wide as it always has been. It’s universally valued yet almost never achieved. Now think about what this means. We’re not getting better at this and we should. I mentioned before that consultation is one of those daily activities in the ring around my framework circle. So is alignment. And despite all the strategy decks, the town halls, the carefully crafted vision statement, this problem persists. Why? Because senior leaders live inside the strategy. They’ve shaped it, debated it, and refined it, but that proximity breeds a dangerous assumption: the closer you are to a strategy, the more you assume its clarity is shared. What leaders often hear as consensus is actually silence. And in too many organizations, silence is misread as buy-in. Now here’s the thing about strategy: it doesn’t cascade like water; it distorts as it moves. It’s shaped by language, culture, experience, and hierarchy. A strategy that’s crystal clear at the top becomes a muddled set of ideas by the time it reaches teams on the ground. Ownership gets lost, accountability blurs, execution slows.
Now, Zora and Wayne’s original 2018 study of more than 200 senior communicators found that only 35% felt their organization was aligned to its corporate purpose. Only 40% used corporate purpose as a key part of employee communications. Think about that. We define the purpose of the organization; only 40% use that purpose as a part of their communication with employees. Now, fast forward through a pandemic, through massive technological disruption, through all the lessons we supposedly learned about clarity and communication, and the numbers still haven’t meaningfully improved. So what is this paradox? The very mechanisms we use to create organizational scale—you know, we subdivide work, we create functional specialization, we establish hierarchies—these are the mechanisms that fragment the information, decision rights, and incentives that guide individual decisions. We create silos to manage complexity, and those silos then work against our ability to align. Research from Strategy and Business frames it differently but arrives at the same place. When strategies aren’t implemented effectively, leaders tend to view their people as “irrational.” But workers and managers are actually rational actors. Their choices reflect sensible decisions in the context of what each of them knows and understands. The problem isn’t the people; it’s the organizational environment that’s encouraging decision-making that conflicts with overall objectives.
Fortune magazine estimates 70% of CEO failures are caused not by flawed strategic thinking, but by failure to execute. Most management teams don’t fully appreciate the role of the organization in undermining performance. They lack time or resources to understand how the organizational models actually work. They’re frustrated by their inability to realize objectives, but they rarely identify the interacting assumptions and misaligned incentives built into their own structures as the root cause. Here’s where the research gets really interesting. Artis and Asplund’s work reveals that alignment isn’t a noun, it’s a verb. It happens through repeated behavior, not bold declarations. The temptation is to treat alignment as a messaging issue: clearer cascades, sharper narratives, better packaging. But alignment isn’t about communication tactics; it’s about leadership behavior. In today’s environment, the traditional playbook of strategy decks, town halls, and posters on the wall simply doesn’t work anymore. The challenge is how consistently leaders live and lead the strategy every day. That requires holding the tension between spread and clarity, decisiveness and dialogue, direction and dissent. It means slowing down when speed tempts shortcut thinking, inviting challenge when comfort suggests consensus, being consistent in action as well as intent, and most critically, checking for understanding, not just repeating messages.
There’s also the “shallow versus deep” alignment problem. Shallow alignment is tactical: agreeing on plans, checking boxes. Deep alignment is about the fundamental “why.” Organizations need both, but they often confuse one for the other. They think because everyone showed up to the strategy offsite and nodded along that they have alignment. Six months later, they’re baffled when nothing has changed. Artis’ research through the pandemic showed that organizations that thrived had articulated a strong sense of purpose and used it to guide decision-making. Airbnb’s Brian Chesky spoke about their purpose as their North Star, giving them permission to morph their business strategy in response to threats and opportunities. Yet a McKinsey study found that while 82% of companies affirm the importance of purpose, only 42% thought their purpose statements had any actual impact.
So what’s the way forward? Well, first we have to stop treating strategic alignment as a communication challenge to be solved. It’s an ongoing act of leadership that demands humility, curiosity, and deliberate behavior. The best leaders aren’t the ones who shout the strategy the loudest; they’re the ones who stay aligned when pressure hits, who listen when they assume, and who practice alignment as part of their everyday leadership. Second, communicators need to fundamentally shift their role. As Zora and Wayne’s research shows, communication professionals have an enormous, mostly untapped opportunity here, but only if they move from being seen as tacticians to being seen as strategic advisors. That means being the function that surfaces misalignments, the conflicting incentives, the information gaps, the unclear decision rights, and working with leadership to fix them. Third, leaders need to acknowledge that you can’t communicate your way out of structural problems, but you can use communication to identify them. Alignment requires examining the organizational environment, not just restating aspirations or exhorting people to do better, but actually changing the conditions under which people make decisions. The alignment paradox won’t be solved by better PowerPoints. It requires recognizing that leadership is a practice, not a position. And it requires understanding that the very things that make organizations functional at scale are the same things that make alignment extraordinarily difficult. The question for every leader is whether you’re willing to confront this paradox honestly or whether you’ll keep mistaking silence for consensus and proximity for clarity.
Neville Hobson: Yeah, that’s a very interesting analysis, Shel. I think Zora and Wayne have done some good work here, just from reading Zora’s Substack post about this. A couple of things struck me from it, which I guess put it in perspective for people who perhaps don’t work in large organizations, because this is clearly geared to that. And yet alignment doesn’t require a large organization. I say that because I went through an alignment exercise myself, as a sole practitioner, last year. I did a webinar for IABC for the consultants group on this exact topic. It’s not about balance; it’s about alignment. They’re two different things. I did like a couple of things that leapt out at me from Zora’s post. She writes: “Alignment doesn’t fail because leaders lack intent. It weakens when shared clarity, ownership, and accountability diverge and commitment isn’t strong enough to hold together. When that happens, effort increases, but traction declines.” That speaks precisely to the point that, eight years later, nothing much has moved. She also says, “Alignment demands humility, vulnerability, and sustained commitment.” I think this is the bit that resonated most with me: “It requires leaders and their teams to slow down, invite challenge, and stay open to perspectives that complicate the narrative.” And that, to me, was the bottom line of this whole argument: slow down. Examine things with better purpose than you have before. Choose to do things, if you can, because they matter, not just because they’re available or because they’re going to make you a lot of money, although that’s hard in a large organization, I think.
I think it’s something each one of us needs to pay attention to, not just if you’re an employee in a large organization: to start your own shift, to look at what you are doing as a consultant or as a communicator in an organization, and at how aligned it is with your own values and those of your organization. I don’t think people do that properly—maybe “effectively” might be a better word than properly. So this is a valuable piece of research with some great points to zone in on and consider in your job as a communicator and, indeed, as an individual person. To me, the biggest one is velocity. Get rid of velocity; slow down. Take more time on things. Resist the temptation to think velocity equals “busy.” Well, it doesn’t; busyness may not be the same thing at all. Indeed, in Zora’s article, she quotes someone saying, “Ego, fear, hubris, and speed push in the opposite direction.” That relates to humility, vulnerability, and sustained commitment. And it’s absolutely true. You see it in large organizations in particular. So there’s a lot to learn from this. The article asks, “Why is this intensifying now?” The dynamics aren’t new, but they’re becoming more consequential, says Zora in her piece. Strategy cycles are shorter. The context leaders operate in is more complex. Decisions are made faster, with less time for shared sense-making. Misalignment is more likely, therefore; in which case, slow down. If you say it enough, you will slow down. Resist the pressure to speed up, even. That’s not always easy; it depends on many factors. But if you’ve got a leader you’re working with who subscribes to this view, that “it’s not about velocity, it’s about taking the time to consider things and discuss it with others, shared sense-making,” as the article says, it’s worth doing. So, like you, I’d say I look forward to reading this report when it comes out in February.
Shel Holtz: One of the things that jumped out at me as I was researching this for the report was the whole idea of structure of the organization being a hindrance to alignment. And I don’t know how many organizations have ever undertaken a “structure audit.” And the structure was created in order to have work that is similar done in one place. It does create those silos of… I think leaders are confident that the structure that they have created is the right one for the organization, but have they tested that structure against other things that are important to them? And I don’t know if there’s such a thing as a structure audit. I’ve never heard those two words used together. It may be time to develop one and say, “Yes, we understand the structure works for our process of getting our product out, for example, but what does it do for these other four things that are priorities in the organization? Are they hindrances? And do we need, as a result of this, to make changes to our structure so these four other priorities gain more traction? Or do we need to rethink how we are implementing these priorities so that they will be effective given the structure that we want to maintain?” But I don’t think anybody’s thinking about that right now at all. And I think it’s something to be raised.
Neville Hobson: Yeah, indeed. I think the Substack article talks about that a bit, saying, “Alignment is a discipline for leaders and teams. Without commitment, it shows up in moments and disappears when it’s tested.” So yeah, plenty to pay attention to here, Shel, I think. So let’s talk about something I think is quite an interesting topic. One we’ve talked about before on For Immediate Release, but not for a few years probably. And this is all about Mark Zuckerberg, head of Meta as it was renamed some years ago from just Facebook, and his recent “U-turn,” as the media are describing it—what that means for the future of virtual reality. So for several years, Zuckerberg placed a bold bet on virtual reality and the metaverse as the next major computing platform. That vision reshaped the strategy and even the name of Meta as the company poured tens of billions of dollars into Reality Labs, launched VR headsets, and promoted immersive virtual worlds as the future of work, social connection, and everyday computing. That vision now appears to be undergoing a significant reset.
In January, multiple reports confirmed that Meta is making deep cuts to its Reality Labs division, laying off around 1,500 employees, roughly 10% of the unit. According to the Wall Street Journal, the move reflects a deliberate shift in investment away from the metaverse and towards AI, particularly AI-powered wearables such as smart glasses. Reality Labs has reportedly lost more than $77 billion since 2020. Eye-watering numbers here, Shel. And consumer-facing platforms like Horizon Worlds have struggled to attract sustained engagement. I’m sure Zuckerberg’s glad he’s got that clause in the contract that says they can’t fire him for any reason whatsoever. But coverage from Futurism is even more blunt than the Wall Street Journal. It’s framing the layoffs as a clear signal that Meta’s consumer metaverse ambitions are being wound down after years of underperformance. Entire VR game studios have been shuttered. And while some platforms remain active, they are doing so at a far smaller scale as capital and leadership attention pivot decisively towards AI.
A more nuanced perspective comes from The Conversation in an analysis by Per-Ola Kristensson, professor of interactive systems engineering at the University of Cambridge. He argues that this apparent U-turn does not mean immersive technology itself has failed. Instead, it reflects the limits of fully immersive virtual reality as a mass-market everyday computing platform. Drawing on years of academic research and user studies, Kristensson notes that while VR works well for specialist use cases—such as training surgeons, engineers, or pilots—it performs poorly as a general-purpose work environment. Extended use is associated with higher workload, lower perceived productivity, increased fatigue, anxiety, and usability problems. In short, VR can be impressive, but it is often too immersive, uncomfortable, and impractical for routine daily work. Crucially, The Conversation suggests that what we’re seeing is not the end of immersive computing, but a shift away from VR towards augmented and mixed reality—less immersive technologies that layer digital information onto the physical world rather than replacing it entirely. Products such as Microsoft’s HoloLens are cited as examples of this approach, where virtual information supports real-world tasks rather than pulling users into a separate virtual space. This distinction matters because much of the current retrenchment is about the consumer metaverse—the idea of mass adoption of shared virtual worlds for socializing, working, and entertainment. On that front, the hype has clearly run ahead of reality. By contrast, business and enterprise use of immersive technologies are not disappearing. Credible reporting and research continue to show steady, if unspectacular, adoption in areas such as training and simulation, product design, digital twins, remote maintenance, healthcare, and specialist education. 
In these contexts, immersive tools are judged by whether they improve safety, accuracy, learning, or cost efficiency, not by whether they attract millions of daily users.
In other words, what appears to be collapsing is a grand consumer vision of the metaverse, not the underlying technologies themselves. The center of gravity is shifting from spectacle to practicality, and increasingly towards combinations of AI, augmented reality, and task-specific immersive tools, rather than all-encompassing virtual worlds. Shel, I know you’re a fan and a user of VR headsets. Why don’t we look at this moment—what this moment really represents—whether Meta’s pullback marks the end of virtual reality as a serious platform or simply the end of a particular story about it. And what this tells us about how emerging technologies mature once the hype cycle collides with everyday reality.
Shel Holtz: Yeah, until I developed this back problem, which is keeping me from doing a lot until it’s addressed, I was pretty much a daily user of VR with the Meta Quest 3. I use several apps, and they’re all workout apps. I have to say that the only thing I’ve been using it for the last few years is working out. I don’t exclusively work out with the headset, but that’s always how I start. And I always start with the same app. It’s called Supernatural. Supernatural was an app in the Meta app store, but it was a separate company, and Zuckerberg wanted it. He wanted these companies to be part of Meta so that he could showcase them as part of his effort in the metaverse. Now, Supernatural employed a bunch of people. It had what they call choreographers—these are the coders who create the workout routines so that they work right and are synced with the popular music in these workout sets, which come in categories: rap, classic rock, metal, classical, jazz, soul, R&B. And you would pick either a boxing or what they call a flow workout. And there are six coaches who would guide you through these, leading you through the warmups and the cooldowns. And there’s a Facebook group for people who use the app; it has about a hundred thousand members. The estimate is that there are about a hundred and thirty thousand people who use Supernatural on, I think, a monthly basis—active monthly users.
And most of those 1,500 who were fired from Reality Labs at Meta were the coaches and the choreographers and the people who make Supernatural go. They’ve made the point that the app is going to stay and all the workouts that have been created up to this point—and there are, I think, thousands of them—will remain available. But I don’t know how long it’s going to stay because they’re going to have to renew the music licensing. This isn’t the music you hear on other workout apps, from artists you’ve never heard of, that doesn’t require licensing through the big music licensing organizations. This is popular music. This is today’s top artists and the top artists of the classic rock era and the like. So it’s expensive to license that music. And when that rolls around, I don’t know if we’re going to continue to see this music available. And I think the whole thing is going to fall apart. There’s a tremendous effort among the app’s users to get Zuckerberg to bring it back. There’s a petition and all kinds of other efforts going on. It’s all being discussed in the Facebook group. But what’s important to keep in mind is that the 1,500 people who were cut were working for apps, either created or acquired by Meta, that are consumer-facing. There are still 15,000-plus people on the payroll there. So this is not an exit from the metaverse or the virtual reality world; this is a refinement of their approach.
It’s also important, I think, to consider the broader landscape because Meta is not the only one doing this. And by the way, you mentioned Horizon Worlds, their metaverse. It’s awful. You know, if people go in there and ask, “Is this what the metaverse is?” Forget that. I mean, I can absolutely see why they’d think so, but it’s not. It’s not the only effort out there. Apple is still refining the Vision Pro ecosystem to define this whole spatial computing space. Nvidia is doubling down on the industrial metaverse with their Omniverse platform—this is for digital twins for global manufacturing. Digital twins are going to be huge, and that’s definitely an element of the metaverse. Epic Games is building a massive persistent universe in partnership with Disney, which will probably be more appealing than Horizon Worlds. I mean, I’ve got to believe that between Epic and Disney, you’re going to get something better than Meta was able to conceive. The pivot to AI isn’t a distraction or a move away from the metaverse and VR. It’s actually the fuel for it. Generative AI is finally solving the two biggest hurdles the metaverse faced: the massive cost of 3D content creation and the “empty world” problem. By using AI to populate and build these spaces instantly, Meta and its competitors are finally making the tech scalable. They’re not retreating; they’re just waiting for their AI tools to finish building the world they promised us. So I still remain bullish on the metaverse and virtual reality. The fact that it seems to be going through a decline right now is just a dip in the chart. I think you’re going to see that trend rise again. And I think AI is going to play a big part in this. And by the way, AI is going to be jet fuel for NPCs—non-player characters in video games. So I think: watch this space. I think it’ll probably be a few years. I remember Matthew Ball, who wrote the book on the metaverse, said we were 10 years away. That was what, three years ago? 
So, I mean, we still have seven years left on his timeframe. And because of what’s happening with AI, that may accelerate things, though I think the shift in focus is actually going to extend it to probably 12 or 13 years. But as these two factors, AI and the metaverse/VR, converge, you’re going to see an explosion of this stuff down the road.
Neville Hobson: You could be right. There are other players, you’re right. For the time being, though, it appears that Meta is ditching this to concentrate on the current thing so many people are putting their focus on: AI generally, and wearables, according to these reports. I did like Kristensson’s analysis of it all, particularly his view that this doesn’t really work for business use, and certainly not for consumer use, without competitive technology that appeals to people—the kind of efforts from others that you’ve outlined. The reality, though, is that they have announced these layoffs and the shifts they’re making to the division, and they’re not supporting it anymore at the moment, according to all the media reports I’ve seen. That doesn’t mean it couldn’t change, but that’s the picture right now. And the limited research I’ve done suggests that business use of virtual reality, in particular, holds far more promise. Indeed, one of the reports couched it as “pretty unsexy stuff going on with business use of all this.” Yet there are excellent results being reported by a number of companies. I remember reading a few months ago—and posting about it on LinkedIn, I think—about what BMW is doing with its car-building metaverse: they model new vehicles in a virtual world where the production lines are increasingly staffed by robots run by humans, and the designers and the marketers and others all get together virtually to discuss planning of a new model. That’s definitely the kind of thing they’re seeing results from. And I remember, as you will, Shel. Let’s go back into the deep mists of time to a place called Second Life.
Hey, all the auto companies—literally all the big ones, particularly the American ones—were there with virtual cars. I’ve still got a virtual Pontiac somewhere up there on Second Life, probably still there in 2026. I haven’t logged in for ages, so I’m going to make a point of doing that this weekend to see what’s going on and whether I need to update my fashion and clothing, or whether it’s still valid. But that was the early stage of things we now call a metaverse. The tech has moved on significantly, and Second Life has moved on significantly with improvements to its platform. It’s one platform that doesn’t appeal to everyone, yet it’s still there with thousands of users. So there is room for all of these things. And you mentioned the book projecting 10 years out—you think it might be longer than that; I actually think it might be quicker, because things are moving so fast with all of this. It’s hard to tell, but it is worth paying attention to, both from a communication and a business perspective. And if you’re interested in how this tech is moving along generally, keep an eye on it, because I think we’re likely to see AI playing a bigger role, as you suggested, than has been the case to date. So it’s kind of: watch this space, basically.
Shel Holtz: Yeah, and in terms of consumer use, one of the things I was pointing out in a conversation on the Supernatural Facebook group is that Meta in particular has done just a god-awful job of marketing these apps. When Supernatural was shut down, there was an article written by one of the users, who’s also a Bloomberg reporter, so it got a fair amount of attention. He thought shutting it down was a fairly stupid move, given that it had a hundred thousand paying users. And I made the point in a discussion around this that they have not done a good job at all of marketing it. I mean, you look at what people were saying in the group when it was shut down, and a lot of them were saying, “I never exercised before this. And I was skeptical when I tried it. But now here’s my before and after picture, right? I weighed 250 pounds here and I’m 145 now.” There was a lot of that—people saying, “I never worked out before this, and this is what led me to it.” And it’s because it’s fun, and it’s because of the affinity we have with the coaches, and so on. And Meta never took advantage of any of this. They never went out there and talked about how this can change your life. And there are other workout apps out there—Les Mills Bodycombat and FitXR and several others. So it’s not just a Meta problem; it’s the companies that make these, because those other apps aren’t owned by Meta; they’re just in the store. And there’s no marketing that I see for any of these that would bring people in. And there are a number of people who said, “I bought my Quest headset so I could do Supernatural after a friend showed it to me.” That’s the gateway to other apps and other tools, and to people finding, “Well, maybe there is some utility here. Maybe I do like playing VR games,” or what have you. And they just haven’t done this.
And I have to say, it’s not surprising, because Meta’s marketing has never been good for anything. But it seems to me that across this whole virtual reality space, the marketing has been poor from the beginning.
Neville Hobson: Yeah. So let’s also throw into the memory-lane pool here… I was a huge fan and a daily user of Microsoft’s Kinect—K-I-N-E-C-T—which I used solely for fitness: jogging in place, all the exercises, the works. And I was really unhappy when they canned it and got rid of it all, along with the whole ecosystem building up around it. That must have been around 2009, ’10, ’11—that kind of timeframe. But my Kinect worked brilliantly on the TV I had at the time, a Sony. Absolutely super. I miss that, because I’ve never really used any of this technology for exercise since then, whereas that’s what I used my Kinect exclusively for. Running alongside my virtual trainer—I could see both of us on the screen. That was really cool. So things go on. But it is interesting, Shel… you wonder why on earth the company shut down something that was making tons of money and had a community. There are other forces at work that lead to those kinds of decisions. It may not make sense from the outside, but if you’re inside the organization it probably does, because there’s something else going on they haven’t announced publicly, or whatever it might be. So hence my own view: I’m not as bullish as you are about any of this from a consumer point of view. Not yet, anyway. I think something more has got to shake out first.
Shel Holtz: But my bullishness is along a long horizon. It’s not something imminent. Yeah.
Neville Hobson: No, I get it. I get it. And yet I wonder if we might see something happening sooner. I’m now thinking more of the other forces at work in the world generally, the changes going on in trust, stuff like that. What impact will those have? You mentioned Nvidia doing some stuff. We’ve got other players, particularly in China, working with technologies that can do this kind of thing, where they’ve got what—a billion people who could take advantage of all this. So there’s so much going on. Worth paying attention to all of it, I think.
Shel Holtz: Yeah, I’m looking forward to checking out this persistent universe that Epic Games and Disney are working on because you know that AI is going to factor into that and the ability to keep the world creating new places and wherever you turn, there’s going to be something new. It’s going to be good. I would put money on that. That’ll bring this episode of For Immediate Release to a close. Just a couple of quick notes before we go. First of all, later this week, we’re dropping our FIR Interview for January. It was a really good interview with Philippe Borremans, who we mentioned earlier. He left one of the comments that we read early in the show. Philippe specializes in crisis communication and we talked to him about crisis and AI. Really interesting interview. He’s got tremendous subject matter expertise. So if you deal with crisis communication, this is one that you don’t want to miss. In terms of today’s episode, we do hope that you will leave comments. Most of the comments we get are on our LinkedIn posts announcing the episode, and we’re grateful for you sharing your comments there. You can also email them to us at fircomments at gmail.com.
We would still love to get an audio comment one of these days. We used to get those all the time. They actually drove our discussion for much of the show. We haven’t had one in probably a couple of years, but you can actually record one right on the FIR website at firpodcastnetwork.com. There’s a link on the right-hand side—it says “send voicemail.” Just click that. You’ve got 90 seconds to get your message across. Record more than one, I’ll put them together. You can leave comments directly on the show notes on the FIR website. You can also leave comments in the Facebook FIR group or the FIR page or to either of our posts on Facebook or on BlueSky or on Threads because we share the release of each episode in all of those places. Also your ratings and reviews on Apple or wherever you get your podcasts are greatly appreciated. Our next episode will be next week. That’ll be a short midweek episode. We’ll continue to produce those, but our next long-form monthly episode will be released on Monday, February 23rd. Until then, that will be a 30 for this episode of For Immediate Release.
The post FIR #498: Can Business Be a Trust Broker in Today’s Insulated Society? appeared first on FIR Podcast Network.
The latest BCG AI Radar survey signals a definitive turning point: AI has graduated from a tech-driven experiment to a CEO-owned strategic mandate. As corporate investments double, a striking “confidence gap” is emerging between optimistic leaders in the corner office and the more skeptical teams tasked with implementation. With the rapid rise of Agentic AI — autonomous systems that execute complex workflows rather than just generating text — the focus is shifting from simple productivity gains to a total overhaul of culture and operating models. In this episode, Neville and Shel examine this evolution that places communicators at the center of a high-stakes transition as AI moves from a pilot phase into end-to-end organizational transformation.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, January 26.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz: Hi everybody and welcome to episode number 497 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: And I’m Neville Hobson. For the past couple of years, AI in organizations has mostly been talked about as a technology story—a set of tools to deploy, experiments to run, and efficiencies to unlock. It was often led by IT, digital, or data teams, with the CEO interested but not always directly involved. The latest AI Radar survey from BCG suggests that phase is now over.
For the third year running, BCG has surveyed senior executives across global markets—nearly 2,400 leaders in 16 markets, including more than 600 CEOs. The standout finding isn’t just how much money organizations are spending on AI, or even how optimistic leaders are about returns. It’s something more structural.
Nearly three-quarters of CEOs now say they are the main decision-maker on AI in their organization. That’s double the share from last year. This is not a minor shift; it’s a transfer of ownership. AI is no longer being treated as another digital initiative that can be delegated at arm’s length. CEOs recognize that AI cuts across strategy, operating models, culture, risk, governance, and talent. In other words, AI isn’t just changing what organizations do, it’s changing how they run. Half of the CEOs surveyed even believe their job stability depends on getting AI right.
We’re also seeing a striking “confidence gap.” CEOs are significantly more optimistic about AI’s ability to deliver returns than their executive colleagues. BCG describes this as “change distance.” People closest to the decisions feel more positive than those who have to live with the consequences.
The survey identifies three types of AI leadership: Followers (cautious and stuck in pilots), Pragmatists (the 70% majority moving with the market), and Trailblazers. Trailblazers treat AI as an end-to-end transformation and are already seeing gains. What’s accelerating this is the rise of Agentic AI. Unlike earlier tools, agents run multi-step workflows with limited human involvement. This raises the stakes for governance and accountability.
This is where communicators come in. If AI is now a CEO-led transformation, communication can’t just sit at the edges. It’s not just about writing rollout messages; it’s about helping leaders articulate why AI is being adopted and what it means for people’s roles and sense of agency. Is this the shift that turns ambition into transformation, or does CEO confidence risk becoming a blind spot?
Shel Holtz: Excellent analysis, Neville. I think there’s data in this report that is incredibly heartening. One of the characteristics of the “Pragmatist” CEOs—who represent 70% of the responses—is that they are spending an average of seven hours a week personally working with or learning about AI. I’ve never seen that before. When we introduced the web or social media, CEOs weren’t using it personally. This immersion is very helpful for the communicators who need to tell this story.
What’s troubling, though, is that 14-point confidence gap between CEOs and their managers. I don’t think this is just “resistance to change.” If the people implementing the systems are less confident than the person funding them, are we headed for an “AI winter” of unmet expectations?
Communicators need to become translators. Our job isn’t just selling the vision; it’s bridging a reality gap. If managers are skeptical, a CEO’s “rah-rah” AI speech will backfire. We have to translate that vision into operational safety for the staff while advising the CEO on the actual temperature of the workforce.
Neville Hobson: You’ve said that well. Communicators sit right at the center of whether AI transformation is trusted or resisted. This is a different picture than before. The senior communicator now has an unspoken challenge to assume a recognized leadership role to close that gap.
The appeal here is that you have a landscape ripe for a communicator to take the lead. You don’t have to sell the idea to the leadership—they already have the budget and the will. You can concentrate on persuasion and diplomacy to make sure the support is there throughout the organization. CEOs are going to need support on how to fulfill this aspect of their job.
It’s also interesting to note that confidence gaps are widening. The 2026 Edelman Trust Report also speaks to these issues regarding the relationship between people and organizations. The communicator is going to have to write a brand-new playbook.
Shel Holtz: Absolutely. And for the “Trailblazers,” the report suggests AI will lead to flatter, cross-functional organizational models. This puts middle managers at risk. If Agentic AI can plan, act, and learn multi-step workflows, what happens to the layer of management whose job is coordination and oversight? Is the CEO leading us toward a future where the “human middle” becomes redundant? How do you communicate with people who fear the technology will put them out of a job?
Neville Hobson: Unquestionably a challenge. Many CEOs recognize their own jobs are on the line, too. This isn’t petty cash; we are talking about massive investments. Communicators must help employees understand this shift in structure. It’s not a CIO-led digital transformation anymore; it’s a CEO-led business redesign.
Shel Holtz: To complicate things, 90% of companies say they will increase AI spending even if it doesn’t pay off in the next year. This is a “burn the boats” strategy. At what point does commitment become a sunk-cost fallacy?
Neville Hobson: To summarize, the main task for communicators is helping leaders articulate why AI is being adopted. We need to bring the human element in firmly as a foundational element. AI transformation will fail or succeed as much on meaning and legitimacy as on technology.
Shel Holtz: It’s an organizational change process. If the CEO owns it, they are the chief spokesperson and must articulate the vision while maintaining two-way communication. We also have to look at the strategic plan. If the direction of the industry is shifting, organizations may need to change their very aspirations and strategic goals, which requires considerable communication.
Neville Hobson: Fun times ahead, communicators.
Shel Holtz: And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #497: CEOs Wrest Control of AI appeared first on FIR Podcast Network.
Neville and Shel dive into the ambitious new definition of public relations proposed by the Public Relations and Communications Association (PRCA). Sparked by a two-and-a-half-page draft that reframes the discipline as a senior strategic management function, Shel and Neville debate whether this comprehensive document serves as a vital “PR for PR” or if its length and academic tone move it closer to a manifesto than a practical, portable definition. The conversation explores the proposal’s emphasis on organizational legitimacy, its explicit inclusion of AI’s role in the information ecosystem, and the ongoing challenge of establishing a unified professional standard that resonates across the global communications industry.
Raw Transcript:
Neville Hobson Welcome to For Immediate Release. This is episode 496. I’m Neville Hobson.
Shel Holtz And I’m Shel Holtz. Neville, how would you define public relations?
Neville Hobson The very short way I would define it—and this is a very old definition I seem to remember from the CIPR before it was called the CIPR—is the custodianship or the stewardship of the relationships between a brand or a company and its publics. That’s how I define it.
Shel Holtz I like it. PRSA defines it as a strategic communication process that builds mutually beneficial relationships between organizations and their publics.
Neville Hobson I could have said that, but I just wanted to give you the quick version.
Shel Holtz Yeah, well, that works. But now we have the Public Relations and Communications Association (PRCA) proposing a definition that positions public relations as a senior strategic management discipline focused on reputation, trust, legitimacy, and long-term value. In this framing, PR exists to help organizations and individuals navigate complexity, reduce uncertainty, manage risk, and build durable relationships with the people and institutions that affect their ability to operate and succeed.
It emphasizes two-way engagement, board-level counsel, data and insight, crisis preparedness, and societal impact. It explicitly extends PR’s remit into shaping the information ecosystem in an AI-driven world. Now, that’s a summary of the definition; the definition itself consumes two and a half pages of text. It’s available as a PDF and open to comment by PRCA members, according to the organization’s CEO, Sarah Waddington. In a LinkedIn post, she said the draft definition draws on academic research and a thematic analysis of recent sector commentary following her Radio 4 Today debate with Sir Martin Sorrell, which we talked about here a couple of weeks ago.
A two-and-a-half-page definition is a lot, and that’s kind of the point. The definition is designed for the environment in which many senior practitioners find themselves right now. The language of foresight, volatility, legitimacy, and uncertainty isn’t an accident; it’s meant to reflect how closely public relations work is increasingly tied to leadership decision-making. In that sense, this definition does something a lot of us have argued for over the years: it situates PR at the strategic heart of the organization rather than treating it as a delivery function.
It also aligns with a broader international view that PR is fundamentally about relationships and long-term organizational health, not about outputs like press releases or media placements. As you might expect, there have been reactions. Philippe Borremans, a former president of the International Public Relations Association and an upcoming guest on FIR Interviews, shared on LinkedIn that the definition reads less like a definition and more like a manifesto—ambitious and comprehensive, but maybe trying to do too much.
Historically, definitions that have endured tend to revolve around a single unifying idea. Think about the emphasis on mutually beneficial relationships in PRSA’s definition, which they adopted in 2012. That kind of conceptual anchor makes a definition portable—it’s easy to explain, teach, and remember. By contrast, the PRCA proposal advances a lot of important ideas all at once: trust, legitimacy, engagement, value creation, behavior change, and societal impact. These are all part of PR, but without a clear organizing principle, it’s hard to find something to hang your hat on.
There’s also the question of tone and accessibility. The language is unapologetically corporate and at times delves into the academic. That may resonate with board advisors and consultants, but definitions also serve students, people starting their careers, and those in the nonprofit or public sectors. A definition that primarily reflects the experience of the profession’s most senior tier risks narrowing its usefulness. One critique I find particularly important is the exclusive reliance on the concept of “stakeholders.”
Neville Hobson Yep.
Shel Holtz Public relations is always engaged with broader publics, too—communities, citizens, and audiences whose perceptions matter even when they don’t fit neatly into a stakeholder map. Leaning too heavily on stakeholder language nudges the discipline closer to management theory and further from its roots in public engagement.
And, of course, there’s the AI dimension. The definition explicitly calls out PR’s role in shaping the information ecosystem and ensuring organizations are represented accurately in AI-generated outputs. Some see this as an overdue recognition of how information now circulates, while others question whether embedding AI so directly risks dating the definition.
If you work in PR, you should read this proposal less as a final answer and more as an aspirational statement. As a description of what PR could be at its most strategic, it’s compelling. As a concise, durable definition, it may need sharpening and a cleaner central idea. Definitions are tools to help us explain our value and align practice across borders. This proposal doesn’t settle the challenge, but it moves the conversation forward. Neville, what do you think?
Neville Hobson I agree. I’m looking at the PDF now. I’ve not read the whole thing yet, so I will do that and likely write some comments. The first thing that grabs my attention is that it doesn’t explicitly state the author, though I assume it’s Sarah Waddington. It says a new definition is needed to reflect the modern operating environment and illustrate how integral the discipline is to success. In short, the industry needs better “PR for PR.” I agree with that 100%.
The 10-second definition I gave you earlier is woefully inadequate for today. It’s interesting looking at this document; it’s very standalone. Philippe Borremans mentioned in his blog post that it begs for more dialogue, and I agree. I don’t see it as complete at all.
Shel Holtz Sarah did invite members to comment on it. I think the consultation runs through the end of the month.
Neville Hobson She’s likely going to get comments from non-PRCA members as well since it’s on LinkedIn. Looking at the core principles she mentions—relationship-centered, not output-focused—that is very much in line with how conversations are shifting from inputs to outcomes. I remember about 15 years ago when PRSA led a charge to redefine PR in the US. It was picked up by practitioners here in the UK, there was a lot of dialogue, and then… nothing happened. Hopefully, this will be different.
I think she would be wiser to make this completely open, not just restricted to PRCA. The praise the PRCA will get is for taking the initiative. I’m wondering if they’ve engaged with other professional bodies to join them. It requires a lot of dialogue, and that’s the point of doing this. My only hang-up is the restriction to members. I’m not a PRCA member—I’m with IABC—but I support what they’re doing. As for her BBC interview with Martin Sorrell, it was clear he was talking utter rubbish, so it’s good to have these discussions.
Shel Holtz I certainly have nothing but praise for initiating the conversation. However, I agree that two and a half pages is not a definition; it is a manifesto. Imagine a two-and-a-half-page definition in a dictionary! I remember the Melbourne Mandate and the Venice Accords from the Global Alliance—those were more about purpose statements and AI positioning. I’m not sure all of that belongs in a definition, but as a spark for conversation, this is a good move.
Neville Hobson It’s too soon to see the full weight of public opinion on this, but we do need a new definition. I don’t see it as a manifesto, but it is incomplete. It would have benefited from an intro saying, “This is a first draft, we seek your feedback.”
Shel Holtz When I think of a definition, I want it to be something everyone can remember. You should be able to get the concept down and be 90% there with the wording. No one is going to memorize two and a half pages. This sounds more like the outline of a textbook.
Neville Hobson The CIPR website defines PR as “the planned and sustained effort to establish and maintain goodwill and mutual understanding between an organization and its publics.” That’s been around for decades. It adds to my feeling that we need something more effective. But a PRCA definition only works if the whole industry is singing from the same hymn sheet.
Shel Holtz I wonder if the PRCA is a member of the Global Alliance. That would be the place to adopt a definition so that all member associations embrace a consistent version. I’d also like to see the notion of “professional” public relations included, which is why I support certification—to signal that you are a professional and not just someone who says, “Well, everyone can communicate, so I can too.”
Neville Hobson That’s the rocky road no one wants to go down! We’ve been there so many times. People resist change. It needs someone to take a very strong lead to get this on the public agenda. It reinforces my view: excellent initiative by the PRCA, but it needs to be industry-wide, otherwise, we just end up with multiple conflicting definitions.
Shel Holtz Undoubtedly. Listeners, take a look at the proposed definition; we have a link to the PDF and Sarah’s post in the show notes. Let us know what you think. What would you change? We’ll share your views on an upcoming edition. And that will be a 30 for this episode of For Immediate Release.
The post FIR #496: A Proposed New Definition of Public Relations Sparks Debate appeared first on FIR Podcast Network.
In which Neville and Shel take a few minutes to acknowledge FIR’s 21st birthday.
The post FIR 21st Anniversary Celebration appeared first on FIR Podcast Network.
Reddit, the #2 social media site in the US, has surpassed TikTok to become the #4 site in the UK. It has no algorithm that forces you to see what’s most likely to keep you on the site; it just lets users upvote what they think is most interesting, valuable, or relevant. Every topic under the sun has a subreddit. Several organizations, from Starbucks to Uber, have taken advantage of it. So why is it absent from most communicators’ list of social media platforms to pay attention to? Neville and Shel look at Reddit’s growing influence in this episode.
Raw Transcript:
Shel Holtz: Hi everybody, and welcome to episode number 495 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: I’m Neville Hobson, and let’s start by wishing you a happy new 2026. We’re recording this in the first week of January, so it’s a new year. Last week the Guardian reported something that might surprise people who still think of Reddit as a noisy corner of the internet best avoided. In a deep analysis, the paper noted that Reddit has now overtaken TikTok to become the fourth most visited social media site in the UK, with three in five UK internet users encountering it regularly, according to Ofcom, the industry regulator. Among 18 to 24-year-olds—the Gen Z cohort—it’s one of the most visited online destinations of any kind. And the UK is now Reddit’s second largest market globally, behind only the US.
That growth hasn’t happened because Reddit suddenly reinvented itself; it’s happened because the wider internet has changed around it. Google’s search algorithms now prioritize what it calls “helpful content,” particularly discussion forums. Reddit threads increasingly surface high in search results, and they’re also being cited heavily in AI-generated summaries. Reddit has licensing deals with both Google and OpenAI, which means its content is being used to train AI models and then redistributed back to users as part of search and discovery.
At the same time, users, particularly Gen Z, are actively seeking out human-generated content—not polished brand messaging or single definitive answers, but lived experience, contradiction, debate, and advice that feels like it comes from real people dealing with real situations like parenting, money, housing, health, and sport. Jen Wong, Reddit’s chief operating officer, described this as an “antidote to AI slop.” Reddit, she says, isn’t clean; it’s messy. You have to sift through different points of view, and increasingly, that is the point.
For communicators, this raises several important points. For a start, Reddit is no longer a niche platform you choose to engage with or ignore. It’s become part of the discovery layer of the internet. People may encounter your organization, your industry, or your issue there before they ever see your website or your carefully crafted statement. Search visibility is no longer just about content you own; it’s about conversations, and search engines and AI systems are now amplifying those conversations at scale.
Many organizations are still quietly hoping Reddit will remain hostile, chaotic, or irrelevant enough to ignore. That stance is becoming harder to justify when government departments are hosting AMAs (“Ask Me Anything”) and major public narratives are forming in plain sight. Finally, lurking is no longer neutral. Silence can allow perceptions—accurate or not—to solidify without challenge, context, or correction. So the question for communicators isn’t whether Reddit is for them, it’s whether they’re prepared for a world where human conversation, amplified by algorithms and AI, shapes reputation just as much as official messaging does. Look at the Omnicom layoffs announced not long before Christmas and the significant role Reddit played as a communication channel parallel to official company communication. We discussed this in depth in FIR 492 just a few weeks ago.
So, Shel, this feels like another signal that the ground is shifting under communicators’ feet. Where would you start unpacking what this means?
Shel Holtz: Well, if the ground is shifting, it’s because communicators weren’t standing in the right place to begin with. Reddit has been a significant and important platform for a long time. I’ve been advocating for communicators to take advantage of it for many years. I’m glad to see it getting this kind of attention, and there are a lot of reasons to consider using it in multiple ways, including the fact that AI is now relying on Reddit for some of the content it’s trained on.
Let’s look at just a couple of things about Reddit. First, the people on Reddit are very committed to the communities they’re part of. This is not a “drop-in” community like we see on LinkedIn, nor is it made up of the tight, insular communities you see on Facebook. Reddit’s communities welcome new people, but they’re looking for people who are committed to engaging, sharing, and contributing. Second, there’s no opaque algorithm deciding what rises to the top; it’s the community that upvotes the most valuable posts. That’s why you see the most valuable information at the top of any thread, and it’s why, in the early days, BuzzFeed relied on Reddit to determine what content it was going to publish. Reddit earned the nickname “the front page of the internet,” and how you can ignore that eludes me.
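Shel’s point about vote-driven ranking can be made concrete. The sketch below models the “hot” ranking formula Reddit once published in its open-sourced code; treat it as an illustration of how community votes drive ordering, not as the platform’s current algorithm, which may differ. Net upvotes set a post’s score on a logarithmic scale, while a time term keeps newer posts competitive.

```python
from math import log10

# Reddit's historical epoch (Dec. 8, 2005) from its open-sourced ranking code.
EPOCH = 1134028003

def hot_rank(ups: int, downs: int, created_utc: int) -> float:
    """Vote-driven 'hot' score: higher net votes and newer timestamps both
    push a post up. Modeled on Reddit's published formula; an illustration,
    not necessarily the live site's behavior."""
    score = ups - downs
    sign = 1 if score > 0 else -1 if score < 0 else 0
    order = log10(max(abs(score), 1))   # votes count logarithmically
    seconds = created_utc - EPOCH       # age term: newer posts win ties
    return round(sign * order + seconds / 45000, 7)

now = 1_700_000_000
print(hot_rank(100, 2, now))            # heavily upvoted, recent
print(hot_rank(100, 2, now - 86400))    # same votes, a day older: ranks lower
```

The log term means roughly ten times the votes are needed to match the boost a post gets from being 45,000 seconds (about 12.5 hours) newer, which is why fresh, well-received posts dominate a subreddit’s front page.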
If you look at what happened with Omnicom, that’s just one thing Reddit is useful for: social listening and insight generation. It’s also useful for issues management and crisis communication. If these large communities are talking about your industry, company, or product and you’re not listening, you’re missing what is being discussed more broadly via “sneakernet”—people just talking to each other voice-to-voice or over instant messages where you can’t hear it. This is where you gather the intelligence that helps you come up with the next product iteration or address issues important to your stakeholder base.
I use Reddit basically two ways. One, whenever I have a problem with a product, like my Nikon Z6 II camera, there is a community there more than happy to answer my question. While I’m there, I’ll scroll through and see if there’s something I can contribute, because it’s important to give as well as take. The other is monitoring construction subreddits for good intelligence that I can share up in the organization. There are so many other ways to take advantage of Reddit, and now is the time to invest.
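As a minimal sketch of the social-listening workflow Shel describes: Reddit exposes a public JSON view of any subreddit (append `.json` to its URL, e.g. `https://www.reddit.com/r/Construction/new.json`, fetched with a descriptive User-Agent header). The parser below assumes that listing shape; the subreddit and sample posts are illustrative, not real data.

```python
def parse_listing(payload: dict) -> list:
    """Extract (title, score, permalink) tuples from a Reddit listing payload.
    Assumes the public listing shape: {"data": {"children": [{"data": {...}}]}}."""
    posts = payload.get("data", {}).get("children", [])
    return [
        (p["data"].get("title", ""),
         p["data"].get("score", 0),
         p["data"].get("permalink", ""))
        for p in posts
    ]

# Sample payload mimicking Reddit's listing format (illustrative data only).
sample = {
    "data": {
        "children": [
            {"data": {"title": "Anyone used this concrete sealant?",
                      "score": 42, "permalink": "/r/Construction/abc"}},
            {"data": {"title": "Bid software recommendations",
                      "score": 17, "permalink": "/r/Construction/def"}},
        ]
    }
}

for title, score, link in parse_listing(sample):
    print(f"{score:>4}  {title}  ({link})")
```

In practice you would fetch the live listing on a schedule (or use Reddit’s Data API with proper authentication), filter for keywords relevant to your organization, and route anything notable to the people who need to see it.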
Neville Hobson: Yeah, I’ve been on Reddit for about 10 years with an account. In those early days, it was very much a geeky place—not really mainstream. But reality, as the Guardian’s analysis outlines, is that you can’t just treat it like that anymore if you’re wearing a business hat. It is showing up in places like Google AI overviews and is heavily surfaced in those search results because of the licensing deal that allows Google to train models on Reddit data.
The UK government is active on Reddit, with departments hosting “Ask Me Anythings” to engage with people. That sort of activity is probably more appropriate for Reddit than LinkedIn, where I’ve seen government activities attract nothing but extreme, politically motivated negativity in the comments. On Reddit, you’re probably going to get a more balanced view.
The Omnicom example was really intriguing. The depth of comment on Reddit told lived experience stories that contrasted sharply with the formal communication from corporate communicators. It was a subject lesson in how not to do this from a corporate point of view. Ignoring it is not an option anymore.
Shel Holtz: You mentioned “Ask Me Anythings.” That’s a great opportunity to present your CEO or subject matter experts to build reputation proactively, or reactively during a crisis. Siemens did an AMA featuring their engineers and reported strong click-through rates. Novo Nordisk leaned into sensitive topics and reported an “astoundingly positive reception.” Oatly and IBM also reported strong engagement and brand lift through this format. Of course, there can be disasters if executives are not well prepared, because authenticity is highly valued.
Community engagement is another missed opportunity. Wayfair uses discovery tools on Reddit to surface conversations about their service and pops in to answer questions and address issues. You can build relationships with customers, enthusiasts, and even critics. You can also use it for your employer brand to monitor interview processes and culture signals. The CEO of Starbucks explicitly treated a Reddit hiring thread as a signal that a culture shift was taking hold.
Neville Hobson: I think one reason for past failures is that companies brought their old methods of communicating to a place where that just doesn’t work. The Guardian findings show that human experience now outranks polish. If you come to Reddit with all your corporate baggage and structured messaging, it’s not going to work. Users are actively seeking “signals of humanity,” and messiness is becoming a trust cue. It’s an “anti-automation” movement. Lurking is no longer neutral because you are being talked about whether you are present or not.
Shel Holtz: There’s an illusion of control that you get from things like press releases, but get over it—you don’t control the conversation. To be credible in these spaces, you have to stop being polished. “Press release voice” is a trigger on Reddit; plain talk is valued. Make sure you have the right subject matter expert in the right subreddits who can talk in a plain voice. Don’t just do “drive-by” communication when you need something; be a regular contributor.
Neville Hobson: So, human experience-led communications are regaining strategic value. You can’t ignore this.
Shel Holtz: LinkedIn’s value seems to be diminishing as it turns into a Facebook-like mix of non-business content and AI-generated posts. If you’re looking for a community where you can tap into people who care about what you do, Reddit is the best place. You can even use paid amplification; Uber and Oreo have reported brand lift from boosted posts. Don’t dismiss it as hostile; develop a strategy and start doing it.
Neville Hobson: Keep an eye on the resurgence of other networks, too. A relaunched Digg is on the way; Digg was a fixture alongside Reddit in the early days. There’s also “Tangle,” a new network from one of the Twitter founders focused on genuine conversation.
Shel Holtz: I’d keep an eye on them, but Reddit already exists with millions of users and tens of thousands of subreddits. Use it. Don’t ignore it. And that’ll be a “30” for this episode of For Immediate Release.
The post FIR #495: Reddit, AI, and the New Rules of Communication appeared first on FIR Podcast Network.