In addition to news items and in-depth discussion of trends and issues, you'll hear the Internet Society's Dan York report on technologies of interest to communicators and Singapore-based professor Michael Netzley explore communications in Asia.
For somebody who posts on X or other social media platforms to become recognized by the media and other offline institutions as a significant, influential voice worth quoting, it usually takes patience and hard work to build an audience that respects and identifies with them. There is another way to achieve the same kind of reputation with far less work. According to a research report from the Network Contagion Research Institute, American political influencer Nick Fuentes opted for the second approach: a collection of tactics that made it appear that a huge number of people were amplifying his tweets within half an hour of posting them. While Fuentes wields his influence in the political realm, the tactics he employed are portable and available to people looking for the same quick solution in the business world. In this short midweek episode, we’ll break down the steps involved and the warning signs communicators should be on the alert for.
Links from this episode:
Raw Transcript:
Neville Hobson: Hi everybody and welcome to For Immediate Release. This is episode 493. I’m Neville Hobson.
Shel Holtz: And I’m Shel Holtz, and today I’m going to wade deep into America’s culture and political wars. I swear to you, I’m not doing this because of any political or social agenda on my part. What I’m going to share with you is not a social or political problem, it’s an influence problem. And in communications, influence and influencers have become top of mind.
We’re going to look at the rise of Nick Fuentes’s significance on the social and political stage. For listeners outside the US, you may not know who Fuentes is. He’s a US-based online political influencer and live stream personality who’s built a following around the “America First” ecosystem and has sought influence within right-of-center audiences, including by positioning himself in opposition to mainstream conservative organizations like Turning Point USA and encouraging supporters to disrupt their events. Tucker Carlson has had him on his show as a guest. President Donald Trump has hosted him at the White House for a dinner.
In a recent report that our friend Eric Schwartzman highlighted on LinkedIn—that’s how I found it—the Network Contagion Research Institute (NCRI) asserts that Fuentes is a fringe figure whose public profile rose to a level of significance by manipulating online systems. The NCRI, by the way, is an advocacy group focusing on hate groups, disinformation, misinformation, and speech across social media platforms. It’s been around since 2018. And they’ve taken their own fair share of criticism for bias, but this report looked pretty well researched, and there will be a link to it in the show notes.
The techniques Fuentes used to rise to significance are portable, and this is the key here: If bad actors can inflate the perceived importance of a fringe political figure, the same mechanics can inflate the perceived importance of a product, a brand, a CEO, a labor dispute, or a crisis narrative.
I’ll share the details right after this.
In modern media ecosystems, visibility is often treated as evidence of significance. Of course, when the system can be tricked into manufacturing visibility, it can be tricked into manufacturing significance. Here’s the playbook. The report focuses heavily on what happens immediately after a post is published, specifically the first 30 minutes. That window matters because platforms like X use early engagement as a signal of relevance. If a post seems to be spreading fast, the algorithm acts like a town crier, showing it to more people.
The researchers compared 20 recent posts from several online figures. Their finding was that Fuentes’s posts regularly generated unusually high retweet velocity in the first 30 minutes, enough to outpace accounts with vastly larger follower bases. It outpaced the account of Elon Musk, for example.
The key detail here isn’t just the volume of retweets, it’s the timing. Rapid, concentrated engagement right after posting creates the illusion that the content is taking off, kicking it into recommendation streams. This is the same basic mechanic behind launch day boosting. You’ve seen this for people who have a new book out and they go out to friends and ask them to boost that new book the day it’s released. If you can create the appearance of immediate traction, you can trigger algorithm distribution that you didn’t earn.
In commerce, this shows up as engagement pods, coordinated employee advocacy swarms, and community groups that behave like a click farm. If your measurement system rewards velocity, someone can and will manufacture velocity.
So who’s responsible for those early retweet bursts? Across the 20 posts studied, 61% of Fuentes’s early retweets came from accounts that repeatedly retweeted multiple posts in the same window. In other words, this wasn’t a crowd. It was a repeatable mechanism, the same actors over and over, hitting the algorithm where it’s most sensitive. In business, you don’t need millions of genuine fans to create the signal of traction. You need a reliable, repeatable set of accounts that behave predictably at the right moment. This is why a relatively small number of coordinated actors can distort what public response appears to be, especially early in a narrative when journalists and internal leaders are trying to interpret what’s happening.
The report describes the amplification network as dominated by accounts that aren’t meaningfully identity-bearing. Among the repeat early retweeters, 92% were anonymous. Furthermore, many of these accounts were essentially single-purpose. They existed solely to boost specific messaging. Now, anonymity is a feature, not a bug, in manufactured influence. In a corporate context, we see this as sock puppet commenters flooding a CEO’s LinkedIn post with applause or fake grassroots accounts inflating outrage against a policy change. If you’ve ever seen a comment section where the voices feel oddly similar and oddly committed, you’ve seen the symptom.
Perhaps the most operationally important finding involves outsourced capacity. Before a major inflection point in September, about half of the retweets on Fuentes’s most viral posts came from foreign, non-U.S. accounts. The report highlights concentrations in countries like India, Pakistan, Nigeria, Malaysia, and Indonesia. There’s no organic reason for these regions to be driving a U.S.-centric fringe political account. These geographies match known patterns associated with low-cost engagement farms.
If you’ve ever dealt with fake reviews or fake webinar attendees, you understand the market for outsourced attention. It’s snake oil. The same infrastructure used to inflate a political persona can inflate a brand narrative, especially when the goal is to trigger secondary effects like investor interest or the internal belief that everyone’s talking about this.
In the report, Fuentes isn’t presented as a passive beneficiary of an algorithm. The report states that he repeatedly issues direct instructions to followers: “Retweet this. Everybody retweet.” Turning amplification into a synchronized act. If you run employee advocacy programs or franchise networks, you’re already sitting on “raid capability.” The ethical version is mobilizing real stakeholders transparently. The unethical version is instructing coordinated networks to simulate stakeholder response specifically to game recommendation systems.
This is where communicators need to be brutally honest. The distance between campaign mobilization and manufactured consensus can be uncomfortably short.
Fuentes’s final move is the flywheel. Once you’ve manufactured signals that look like relevance, institutions treat those signals as real. The report argues that mainstream media coverage increased sharply after major news shocks, while the persistent manufactured engagement helped keep the subject elevated between those shocks. It also reports a 60% increase in high-status framing of the subject in mainstream articles after that inflection point.
This is classic social proof laundering. Once a narrative appears prominent on-platform, it becomes easier to place it off-platform: press mentions, analyst notes, investor chatter. At that point, people stop asking, “Is this real?” And start asking, “How big is this?”
For business communicators, here are three practical takeaways.
First, treat attention as an attack surface. If a narrative is unusually fast, unusually concentrated, or driven by accounts that don’t look like real stakeholders, assume you’re looking at influence operations.
Second, build signal hygiene into your intelligence process. If your team reports on social activity, incorporate basic credibility checks, like repeat actors, anonymity patterns, and geographic anomalies.
And third, audit your own incentives. If your organization celebrates reach metrics without interrogating provenance, you’re teaching everyone—agencies, vendors, and bad actors—that synthetic engagement is rewarded.
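The credibility checks behind those takeaways can be sketched as a simple screening pass over early engagement data. To be clear, this is an illustration of the idea, not the NCRI’s methodology; the field names, data shape, and thresholds are all hypothetical (the thresholds loosely echo the report’s 61% repeat-actor, 92% anonymity, and roughly 50% foreign-origin figures):

```python
from collections import Counter, defaultdict

def screen_early_engagement(events, window_s=1800):
    """Flag signs of manufactured velocity in a set of retweet events.

    `events` is a hypothetical list of dicts with keys:
      'post_id', 'account', 'is_anonymous', 'country', 'seconds_after_post'.
    Thresholds below are illustrative, not empirically validated.
    """
    # Keep only engagement inside the first 30 minutes after posting.
    early = [e for e in events if e["seconds_after_post"] <= window_s]
    if not early:
        return {"suspect": False}

    # Repeat actors: accounts appearing in the early window of 2+ posts.
    posts_by_account = defaultdict(set)
    for e in early:
        posts_by_account[e["account"]].add(e["post_id"])
    repeaters = {a for a, posts in posts_by_account.items() if len(posts) >= 2}
    repeat_share = sum(e["account"] in repeaters for e in early) / len(early)

    # Anonymity pattern: share of identity-free amplifiers.
    anon_share = sum(e["is_anonymous"] for e in early) / len(early)

    # Geographic anomaly: share of early engagement from outside the US.
    geo = Counter(e["country"] for e in early)
    foreign_share = 1 - geo.get("US", 0) / len(early)

    return {
        "repeat_actor_share": round(repeat_share, 2),
        "anonymous_share": round(anon_share, 2),
        "foreign_share": round(foreign_share, 2),
        "suspect": repeat_share > 0.5 or anon_share > 0.8 or foreign_share > 0.5,
    }
```

None of these signals proves manipulation on its own; the point of a pass like this is simply to tell an analyst which bursts of "traction" deserve a human look before they go into a report to leadership.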
This isn’t just a problem that’s “out there.” The PR and marketing industries have plenty of muscle memory around manufacturing perception. The difference is whether we keep that muscle under ethical control or let the algorithm decide what we’re willing to do. Just because you can manufacture influence doesn’t mean you should.
Neville Hobson: That’s quite a story, Shel. I’m wondering how many people in our profession truly understand how this actually works. Your call to action, as it were, was to pay attention to this and pay attention to that. But I think people need to understand why and the deeper picture surrounding it.
So, for instance, the report—some of which you summarized in your narrative—struck me. And indeed, the summary I asked ChatGPT to create (which saved me reading the whole damn thing) was very helpful. According to the report, the researchers said that Fuentes consistently generates extraordinary engagement in the first 30 minutes after posting on X. Early retweet velocity outperforms accounts with 10 to 100 times more followers than he’s got. You mentioned Elon Musk; he’s one of them. When normalized by follower count, his engagement is orders of magnitude higher than comparative political influencers.
Why does this matter? This to me is significant to try and get a handle on this. Platform algorithms heavily weight early velocity as a sign of relevance. So once triggered, content is promoted regardless of whether engagement is authentic. Speed, not scale, is a manipulation lever. This is a critical insight for communicators. Algorithms cannot distinguish motivation, only momentum.
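The follower-count normalization Neville mentions is simple arithmetic. A rough sketch, with all numbers purely illustrative (the report does not publish a formula; this is just one plausible way to express the idea):

```python
def normalized_velocity(retweets_first_30min, followers):
    """Early retweets per 1,000 followers.

    A crude signal: it proves nothing by itself, but it flags accounts
    whose early velocity is wildly out of line with their audience size.
    """
    if followers <= 0:
        raise ValueError("followers must be positive")
    return 1000 * retweets_first_30min / followers

# Illustrative numbers only: a 100k-follower account pulling 5,000 early
# retweets is 50x "hotter" per follower than a 10M-follower account
# pulling 10,000.
small = normalized_velocity(5_000, 100_000)       # 50.0 per 1k followers
large = normalized_velocity(10_000, 10_000_000)   # 1.0 per 1k followers
```

When a small account's normalized velocity runs orders of magnitude above comparable accounts, speed itself becomes the red flag, which is exactly the point Neville makes next.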
So when people talk about—as they do, and I remember using this 10 years ago as a sign that something was working—“Look at how this thing’s taken off!” This is seriously significant: understanding how this works.
Another part of that is, as you mentioned, the foreign origin engagement—the synthetic catalyst, if you like. Half the retweets on Fuentes’s most viral posts came from non-US accounts, and you ran off a list of countries that are the prime originators of large volume. It says there is no plausible ideological or cultural reason for these regions to be organically amplifying a US-centric white nationalist figure. Makes sense, doesn’t it? So why does that matter? Well, these geographies closely match known low-cost engagement farm infrastructures. So foreign engagement appears to act as a spark, creating the illusion of virality.
And it uses phrases that most people won’t know about—I’m only just getting familiar with it myself—like classic “signal laundering.” You’ve heard of money laundering, right? But now signal laundering. It highlights this coordinated amplification, which is not spontaneous engagement. It’s not enthusiasm spreading naturally, it’s coordination masquerading as popularity.
So I think all of us, as communicators trying to grasp something like this to understand the significance of it, are going to have to spend a little extra time understanding how it all works.
There’s one element that came out that I thought, “Wow, yes, you see this.” I can think of two people I follow on LinkedIn who do this. It illustrates that Fuentes in this example is not a passive beneficiary; he actively runs it. The evidence includes hundreds of documented instances where he issues real-time commands on live streams like “Retweet this,” “Everyone retweet,” “Quote tweet it now.” I see people doing that even on LinkedIn. There’s one individual I’m not going to mention—because it wouldn’t be right to do that in this way—who has got thousands and thousands of followers. I was looking back through some of his recent posts and they are full of stuff like that. His email newsletter is nothing but that, actually. These directives align precisely with the early velocity spikes observed in the data, according to the report.
Interestingly, X’s own policies say that this behavior qualifies as coordinated inauthentic activity, platform manipulation, and spam amplification, yet the activity persists on X. So to me, the question for everyone listening is: surely you cannot trust a platform like X with your brand messaging, right? So why are you still there in that case?
It means loads more we could dissect in that context, but I think it’s necessary for people to truly understand how this works before you can understand what to do about it.
Shel Holtz: Yeah, and you asked how many people in our business might actually understand this. I think if you look at a department like mine where there’s two of us and we’re mostly focused on internal communications, this doesn’t hit our radar. But if you’re a marketing agency and you are tasked with elevating a brand, you got to figure that if a 25-year-old white nationalist fringe character on the social-political scene can figure this out, the people running digital media for a mid-sized agency can easily figure this out.
I suspect there are probably YouTube videos telling you how to do this. You sign up with one of those farms in one of those countries that has the instruction to amplify every time you tweet, and you’re off to the races. And as you mentioned from the report, the algorithm can’t really tell the difference.
Now, this is something that I think is in large part on the platforms—whether it’s X or any of the others—to improve their processes so they can identify and block this sort of thing. The idea that you can start to get media coverage, that people will start including you in their reporting because you appear significant as a result of this blatant manipulation—when you really wield no influence, when the people retweeting you have accounts that have been set up just to retweet you—that’s on them, I think. But they’re clearly not doing anything about it. Musk wouldn’t do anything about it. I wouldn’t expect him to. Zuckerberg’s not going to do anything about it. I wouldn’t expect him to. I wish he would, but knowing what I know about these people, I wouldn’t expect them to spend time and money becoming more ethical. It’s just not in their DNA.
So it’s on us. And where I can see this being used in the business context most blatantly is by advocacy groups when an organization is having a crisis. Because who speaks first is the one who gets the traction. Everything else is reacting and responding to that. And if you could get that kind of momentum, that kind of velocity, that kind of visibility for your point of view in opposition to the perspective of the organization experiencing the crisis, then you’re going to win in that crisis. It’s going to be very difficult for the organization, even employing the best digital crisis communication practices, to overcome that kind of a process.
So this is why I think we need to be aware of this. From my perspective, I have my own personal views about Fuentes and the fact that he’s doing this, but that’s not what this is about. This is about the fact that if Fuentes can do it, your opposition can. It might be, let’s say, a union if you’re a non-union company and they’re trying to get a foot in the door. It could be a competitor trying to make you look bad and elevate their own organization as an investment or as a provider of goods or services. All of them can take advantage of this process because it’s possible.
And frankly, once you dig into it, while it seems complicated, it really isn’t. It’s just subscribing to these services, getting everything set up, and then you just start tweeting or posting on LinkedIn or wherever it is, and everything just follows.
Neville Hobson: Yeah, I think I mentioned LinkedIn the way I did, but X is the serious negative platform, right? But I would imagine most other platforms that are used for business purposes are subject to this manipulation. And it makes you think you need to know more about the places that you spend time and populate and share information about your business.
The report goes into—or rather, my interpretation of it certainly does—the implications for communicators and organizations, or the key takeaways, I suppose, to summarize it all. I mean, you’re right, the report is long, and it would benefit from a simplified executive summary. Maybe what we’ve prepared might help people get a better handle on what to look at.
But some of the interesting things that summarize it: “Algorithms amplify speed, not authenticity.” And that’s where I think most people—and I’ve been guilty of this too—treat speed as the really important thing. The velocity of your message getting out there and going viral, as people still use that term, is what it’s all about. Absolutely, that’s not what it’s all about. And in this particular age we’re in now with artificial intelligence, I’m arguing very strongly that it is not about speed at all. It’s about being in the right place at the right time with the right message, not necessarily being the first or the fastest with that message.
Another point: “Anonymous and foreign networks can manufacture legitimacy.” How do you figure that out? Interestingly, and I agree with this very much so, “Mainstream media mistakes visibility for importance.” Absolutely true in my view. So all these tactics are portable.
And the final point, I suppose—there’s like 20 more I’ve got for now anyway—the real issue is not who used the playbook to do this. It’s how easy the playbook is to use. I think it’s absolutely right. And I think many people would succumb to kind of increased pressure to play the game because that’s what everyone else seems to be doing. But also it throws up, I think, a bigger concern. It’s become harder to measure engagement if what you’re measuring is suspect.
So that raises some big questions about how you’re going to proceed from this point on. So:
What signals do you treat as evidence of relevance?
How easily could those signals be fabricated?
Are we rewarding momentum over substance? You need to know the difference.
And where does responsibility sit? Platform, media, or practitioner? Or all of the above?
Those are four questions—there’s probably lots more—but that might not be a bad starting point.
Shel Holtz: I don’t think it would. And I think the more practitioners who become aware of this, those that abide by an ethical code, need to raise their voices because I think the more pressure there is on the platforms, the more they will look to change the infrastructure to address this. If nobody complains or if it’s just people on the fringe like us, then nothing’s going to change.
And you’re right, Fuentes started all of this before the AI revolution. And AI is just going to make this worse with the ability to create those posts that get amplified because you have manipulated the system the way Fuentes has. So I’d like to see people kind of raise their voices. Maybe professional associations need to start advocating on behalf of fixing this. You know, AI has led a lot of people to talk about authenticity more than we already were, and we already were a lot. And if authenticity matters, then I really do think we need to raise our voices and demand change from the platforms so that people can’t do this.
Neville Hobson: I agree.
Shel Holtz: And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #493: How to (Unethically) Manufacture Significance and Influence appeared first on FIR Podcast Network.
In this short midweek episode, Shel and Neville dissect the communication fallout from the $13.5 billion Omnicom-IPG merger and the controversial pre-holiday layoff of 4,000 employees. Among the themes they discuss: the stark contrast between the polished corporate narrative aimed at investors and the raw, real-time reality shared by staff on LinkedIn and Reddit, illustrating how organizations have lost control of the narrative. Against the backdrop of a corporate surge in hiring “storytellers,” Neville and Shel discuss the irony of failing to empower the workforce — the brand’s most authentic narrators — and analyze the long-term reputational damage caused by tone-deaf leadership during a crisis.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, December 29.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz: Hi everybody and welcome to episode number 492 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: And I’m Neville Hobson. In this episode, we’re going to talk about something that’s been playing out very publicly over the past few weeks in our own industry, i.e. communication. It’s about Omnicom, its merger with IPG, and the layoffs that followed. Following confirmation of the $13.5 billion merger, the company announced that around 4,000 roles would be cut, with many of those job losses happening before Christmas.
On the face of it, this is not unusual. Mergers of this scale inevitably create overlap, and redundancies are part of that reality. What makes this different was not simply the decision, but how the story unfolded and where.
On one level, there was the official corporate narrative. Omnicom’s public messaging focused on growth, integration, and future capability. It was language clearly written with investors, analysts, and the financial press in mind—not to mention clients. Polished, strategic, and familiar to anyone who has worked around holding companies. At the same time, a very different narrative was emerging elsewhere, particularly on LinkedIn and Reddit, driven by people inside the organization—people who had lost their jobs and people watching colleagues lose theirs.
That contrast became the focus of an Ad Age opinion piece by Elizabeth Rosenberg, a communications advisor who had handled large-scale change and layoffs herself. In the piece—which, by the way, Ad Age unlocked so it’s openly available—and later in her own LinkedIn posts, Rosenberg described watching two stories unfold in real time. One told to shareholders and external stakeholders, the other taking shape in comment threads written by the people most directly affected. Her point was not that Omnicom failed to communicate, but that it chose who to communicate to.
That observation resonated widely inside the industry. Rosenberg’s LinkedIn post made clear that she was less interested in being provocative than in naming something that many people were already seeing and feeling. She also noted the response she received privately—messages describing her comments as brave—and questioned what it says about our profession if plain speaking about human impact is now treated as courage.
As that conversation gathered momentum, another LinkedIn post took the discussion in a slightly different direction. Stephanie Brown, a marketing career coach, wrote about the timing of the layoffs. Her post was grounded in personal experience; she describes being laid off herself in December 2013 and what it meant to lose a job during a period associated with family, financial pressure, and emotional strain.
She acknowledged that layoffs are part of corporate life but argued that timing is a choice and that announcing thousands of job losses immediately after Thanksgiving, with cuts landing for Christmas, intensified the impact. That post triggered a large and emotionally charged response—thousands of reactions, hundreds of comments. Some people echoed Brown’s argument that holiday season layoffs carry an additional human cost. Others pushed back, arguing that earlier notice can be preferable to delayed disclosure even if the timing is painful.
What stood out was not consensus, but the depth of feeling and the willingness of people to share lived experience publicly. Across both posts and in the comment threads beneath them, a broader picture began to emerge. Former Omnicom and IPG employees described how they received the news. Industry veterans expressed sadness rather than surprise. Practitioners questioned what this says about internal credibility, culture, and leadership. Others pointed out that holding company economics have long prioritized shareholders and that this moment simply made that reality visible.
What’s notable here is that LinkedIn wasn’t just a reaction channel. It became the place where the story itself evolved. The press release was no longer the primary narrative. The commentary, the responses, and the shared experiences became part of how the situation was understood. So that’s the landscape we’re stepping into today: A major communication holding company announcing significant layoffs via a formal, investor-focused message, and a parallel, highly visible conversation driven by employees, former employees, and industry peers about audience, timing, and impact.
Rather than rushing to judgment, I think this is worth exploring carefully, especially for people whose job is communication, reputation, and trust. So, Shel, what would you say to all of this?
Shel Holtz: I would say, first of all, that for an organization that purports to be a communication organization, their failure to recognize that they employ thousands of communicators who know how to use publicly accessible channels is a massive failure in communication planning. It should have been anticipated. But the story is dripping with irony, Neville. In light of an article the Wall Street Journal published last week, the article pointed to an entirely different approach that companies are taking than the one Omnicom defaulted to.
While Omnicom is watching its narrative get dismantled by its own employees on Reddit, the Wall Street Journal just reported that the hottest job in corporate America is—are you ready for this?—”storyteller.” Listings for jobs with storyteller in the title have doubled on LinkedIn in the past year. Executives used the word “storytelling” 469 times on earnings calls through mid-December.
Companies like Microsoft, Vanta, and USAA aren’t just hiring communicators anymore; they’re hunting for directors of storytelling and heads of narrative. Now, on one level, you can see why they’re doing this. The Journal points out that print newspaper circulation has dropped 70% since 2005. The army of journalists we used to rely on to tell our stories has evaporated. If companies want their news covered, they realize they have to become the media themselves. That’s what Tom Foremski said so many years ago: Every company is a media company.
But what this really means is that their traditional gatekeepers are gone. Listening to what’s happening with Omnicom, you have to wonder if these companies actually understand what storytelling means in 2025. We’re seeing a collision of two worlds here. In one world, you have the C-suite still believing they can control the narrative by hiring better writers. They think if they can just recruit a customer storytelling manager—that’s what Google is doing—or a former journalist to run corporate editorial—that’s what Chime is doing—they can fill the void. They think they can craft a sanitized, strategic message for investors and that will be the story of record.
Then you have the real world, Neville; it’s the one you just described. While Omnicom was probably busy polishing its official investor-focused story, the actual story was being written in real time on Reddit and LinkedIn by the people living through the chaos. These employees didn’t need a head of storytelling. They didn’t need a corporate newsroom. They had the truth. They had a platform.
This is exactly the loss of control we’ve been warning about for how many years. The Journal quotes a communication CEO who says leaders are finally realizing that brands that are winning right now are the ones that are most authentic and human. Yeah, he’s absolutely right. But here’s the problem: You can’t hire authenticity. If your new director of storytelling is busy writing a glossy piece about innovation while your employees are on social forums describing a culture of fear and disposal, you’ve lost the plot. The story isn’t what you publish on your corporate blog. The story is what your people say it is.
The Journal notes that a USAA storyteller might work some real experiences into an executive speech. Yeah, that’s fine. It’s also table stakes. If Omnicom or any of these companies rushing to hire storytellers want to tell a better story, they don’t just need to hire better writers. They need to give their employees a better story to tell. That’s the idea behind employee advocacy, after all, isn’t it? Because if the story you pay someone to write conflicts with the story your employees are living, the employees are going to win every single time. And as we’re seeing with Omnicom, they’re going to do it on their own channels and they’re going to do it without anybody’s approval.
Neville Hobson: Yeah, one of the ironies that came across in the story, according to both of the women I quoted from the LinkedIn posts, is that Omnicom and IPG have spent decades advising clients on authentic communication, yet failed to apply that themselves. Rosenberg highlights comments from laid-off staff describing abrupt, impersonal Zoom calls, minimal explanation of rationale or future direction, and leadership absence at critical moments. These voices carried more weight than any press release because employees are the brand’s most credible storytellers.
Switch over to the Town Hall in early December, which Omnicom hosted—the first global company-wide Town Hall since the merger, which was actually completed at the end of November. The behavior of the CEO led me to think, just reading this: is he tone-deaf or does he just not care?
One quote in Storyboard 18 says: “Opening the session, Florian Adamski, the CEO of Omnicom Media, reportedly addressed intense industry speculation surrounding the merger and restructuring. He criticized the tone of press and social media commentary, describing detractors as ‘haters’ and stressed that decisions have been taken after considerable deliberation, urging staff to stay patient as transitions rolled out.”
It goes on elsewhere to repeat that call from the leadership of Omnicom to be patient, everyone, it’s all going to be fine. But without any communication explaining how—or worse, even addressing the detail of what people have been saying about this. Is that tone-deaf or what?
Shel Holtz It is seriously tone-deaf. I remember years ago—this was at a Ragan conference in Chicago—a CEO was speaking. I think he was the CEO of Avon. He made the point that he thinks the minute a CEO is installed in that role and sits in the chair, there is a “stupid ray” aimed at them that affects their brains and makes them forget who employees are.
He made a point at least once a month of visiting frontline employees. It could be at a manufacturing facility where they were filling bottles, but he talked to them to remind himself that these are real people, that they have real lives, and that they are smarter than you tend to give them credit for when you don’t interact with them. You’re the CEO, you’re part of the executive team, and you think those are the “little people” down there doing all the work, not smart enough to absorb bad news.
In speaking to them, he found that they were scout masters, they helped their spouses run businesses, they were the president of the local Kiwanis club. They are smart, they can handle bad news, and they can understand things like business plans and corporate strategy. I think in this case, the Omnicom CEO obviously has not moved himself out of the path of that “stupid ray,” because his assessment of employees and the role they could play in this was seriously misguided.
Neville Hobson Yeah, your mention of that phrase “the little people” reminded me of that hotel owner in New York who went to jail for not paying taxes because she said “only the little people pay taxes.”
Shel Holtz That was Leona Helmsley.
Neville Hobson That’s it. So, one thing I also thought when I was thinking about this story: The optics are bad, but this isn’t about the optics. It’s about trust.
To me, this is what happened: 4,000 people are losing their jobs right before Christmas. It’s going to be extremely painful for many of them. They feel angry. The deeper risk is the long-term erosion of trust in Omnicom. Employees disengage, or leave faster than they otherwise would. Leadership messages lose credibility. Organizational resilience weakens, and clients notice the inconsistency between the advice given and the behavior shown. This gap is damaging.
The other thing to mention—and it really confirms the point you made earlier—is that in a world where every employee has a public platform like this, organizations do not control the narrative. That will be obvious to you and me, but this case demonstrates it quite clearly. The story that endures is how people remember being treated when change was unavoidable.
You can’t actually predict what effects that is going to have on Omnicom. It may well be that in this age of polarization and utter cynicism, no one will care about this when they get hired and go work for Omnicom. But this is a firm that I wouldn’t like to work for based on this.
I started my working career in advertising at J. Walter Thompson back in the late 70s. Omnicom has a storied history in its current form, with the legacy brands they keep talking about in the press releases that are all being retired. Doyle Dane Bernbach, BBDO—some of these firms were around when I was at JWT all those years ago. It reminds me that nothing is permanent. The gloss in advertising is often just a veneer. I think they will not gain any credit for this, and the CEO’s reaction, just according to that town hall write-up, was pretty appalling.
Shel Holtz It’s just terrible. As we know, because we report on it every year, employees are still the most trusted source from a company according to the Edelman Trust Barometer. When you have this many employees out talking about what happened to them, telling their stories authentically, that’s what people are going to remember. They’re not going to remember the financial forecast that Omnicom has put forward.
Somebody needs to counsel this guy. I read somewhere that even for the layoff notification he was supposed to participate in, they said he couldn’t because he was having “technical difficulties.” I mean, come on, really? You’re not even going to get that personal message of regret from the leader of the organization?
We’re in a period right now where people are struggling to find jobs in communication. If Omnicom opens some jobs, people will take those jobs because it’s hard to find one right now. But if that pendulum swings and it becomes a seller’s market rather than a buyer’s market again, I can’t imagine a lot of communicators who are going to want to work there. They may find themselves hiring a more mediocre workforce because the best of the best are going to say, “No, I’m really good, the world knows I’m good, I can work anywhere, and I’m not going to go work for those jerks.”
Neville Hobson I think it’s a good point. Another thing to mention is that I was surprised to see the comments on Reddit. There are hundreds, if not thousands, and in a way I wasn’t expecting. I expected a lot of ranting, a lot of ugliness, and maybe trolling. I didn’t see much of that. I saw what I would describe as sheer sadness by many people, and calm acceptance of the awfulness of it all by those who’ve been fired. The two LinkedIn posts I discussed are very much worth looking at, along with the comments.
Layoffs are inevitable, and in the case of this acquisition, they certainly were. But the communication failure was not inevitable had they handled it differently. Employees now shape the public narrative in real time. Trust, once lost, quickly becomes an external issue, which is what we’re seeing playing out still. Communication principles matter most when they are hardest to apply, as in this situation, and I think they failed the test totally.
Shel Holtz Yeah, I’ll tell you what, we just recently completed an acquisition here where I work, and in our little two-person communication team in our small billion-and-a-half-dollar company, the communication was far superior to what we see coming out of this behemoth of a communication organization. It’s pathetic.
This is what Zuckerberg always said when he got caught doing something bad: “We’ll have to do better.” He never does, and I doubt that Omnicom will either based on this behavior, but they need to do better. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #492: The Authenticity Divide in Omnicom Layoff Communication appeared first on FIR Podcast Network.
Josh Bernoff has just completed the largest survey yet of writers and AI – nearly 1,500 respondents across journalism, communication, publishing, and fiction.
We interviewed Josh for this podcast in early December 2025. What emerges from both the data and our conversation is not a single, simple story, but a deep divide.
Writers who actively use AI increasingly see it as a powerful productivity tool. They research faster, brainstorm more effectively, build outlines more quickly, and free themselves up to focus on the work only humans can do well – judgement, originality, voice, and storytelling. The most advanced users report not only higher output, but improvements in quality and, in many cases, higher income.
Non-users experience something very different.
For many non-users, AI feels unethical, environmentally harmful, creatively hollow, and a direct threat to their livelihoods. The emotional language used by some respondents in Josh’s survey reflects just how personal and existential these fears have become.
And yet, across both camps, there is striking agreement on key risks. Writers on all sides are concerned about hallucinations and factual errors, copyright and training data, and the growing volume of bland, generic “AI slop” that now floods digital channels.
In our conversation, Josh argues that the real story is not one of wholesale replacement, but of re-sorting. AI is not eliminating writers outright. It is separating those who adapt from those who resist – and in the process reshaping what it now means to be a trusted communicator, editor, and storyteller.
Josh Bernoff is an expert on business books and how they can propel thinkers to prominence. Books he has written or collaborated on have generated over $20 million for their authors.
More than 50 authors have endorsed Josh’s Build a Better Business Book: How to Plan, Write, and Promote a Book That Matters, a comprehensive guide for business authors. His other books include Writing Without Bullshit: Boost Your Career by Saying What You Mean and the Business Week bestseller Groundswell: Winning in a World Transformed by Social Technologies. He has contributed to 50 nonfiction book projects.
Josh’s mathematical and statistical background includes three years of study in the Ph.D. program in mathematics at MIT. As a Senior Vice President at Forrester Research, he created Technographics, a consumer survey methodology, which is still in use more than 20 years later. Josh has advised, consulted on, and written about more than 20 large-scale consumer surveys.
Josh writes and posts daily at Bernoff.com, a blog that has attracted more than 4 million views. He lives in Portland, Maine, with his wife, an artist.
Follow Josh on LinkedIn: https://www.linkedin.com/in/joshbernoff/
Relevant Links
Shel Holtz
Hi everybody, and welcome to a For Immediate Release interview. I’m Shel Holtz.
Neville Hobson
And I’m Neville Hobson.
Shel Holtz
And we are here today with Josh Bernoff. I’ve known Josh since the early SNCR days. Josh is a prolific author, professional writer, mostly of business material. But Josh, I’m gonna ask you to share some background on yourself.
Josh Bernoff
Okay, thanks. What people need to know about me: I spent four years in the startup business and 20 years as an analyst at Forrester Research. Since that time, which was in 2015, I have been focused almost exclusively on the needs of authors, professional business authors. So I work with them as a coach, writer, ghostwriter, and editor, and do basically anything they need to get business books published.
The other thing that’s sort of relevant in this case is that while I was at Forrester, I originated their survey methodology, which is called Technographics. And I have a statistics background, a math background, so fielding surveys and analysing them and writing reports about them is a very comfortable and familiar place for me to be. So when the opportunity arose to write about a survey of authors and AI, I said, all right, I’m in, let’s do this.
Shel Holtz
And you’ve also published your own books. I’ve read your most recent one, Build a Better Business Book.
Josh Bernoff
Mm-hmm, yes. So this is where the host has to prod you to promote your own stuff. My two most recent books: I wrote Writing Without Bullshit, which is basically a manifesto for people in corporations to write better, and I wrote Build a Better Business Book, which you talked about, a complete manual for everything you need to do to conceive, write, get published, and promote a business book. They’re both available online where your audience can find them.
Shel Holtz
Wherever books are sold. So we’re here today, Josh, to talk about that survey of writers that you conducted, asking them about their use of AI. What motivated you to undertake this survey in the first place?
Josh Bernoff
Well, I’ll just go back a tiny little bit. About two years ago, Dan Gerstein, who is the CEO of Gotham Ghostwriters and a really fantastically interesting guy, reached out to me because he knew my background in statistics and said, let’s do a survey of the ROI of business books: get business authors to talk about what they went through to create their business books and whether they made a profit from all the things that followed on from that.
So at the conclusion of that project (people can still get access to that information at authorroi.com), it was clear that we could do a really good job together. So when he came to me and said, let’s do a survey about authors and AI, a topic I’d been researching a lot and talking to many authors about, I said, all right, yeah, let’s actually get a definitive result here. And we were really pleased that the survey basically went viral.
We got almost 1,500 responses, way more than we did for the business author survey, because there are a lot more writers than authors in the world. And because we got such a large response, it was possible to slice the data so I could answer questions like: how do technical writers feel about AI? Is this different between men and women, or older and younger people? That enabled us to do a really robust survey, which people can download if they want. It’s at gothamghostwriters.com/AI-writer, available free for anyone who wants to see it.
Shel Holtz
And we’ll have that link in the show notes as well.
Josh Bernoff
Okay, great.
Neville Hobson
It’s a massive piece of work you did, Josh. I kind of went through the PDF quite closely, because it’s a topic that interests me quite a bit, and I was really quite intrigued by many of the findings it surfaced. But I have a fundamental question right at the very beginning, because I’m a writer myself, and I encountered this phrase throughout: “professional writer.” I’m not a professional writer, but I’m a writer.
And I know a lot of communicators who would say, yeah, I’m a professional writer. I don’t think it fits the definition you’re working to. So can you actually succinctly say what is a professional writer as opposed to any other kind of writer that communicators might say they are? What’s the difference?
Josh Bernoff
Yeah, there’s less there than meets the eye, and I will describe why.
So, we fielded this survey, and we basically said, if you are a writer, you can answer this survey, and we got help from all sorts of people who were willing to share it within their communities. So over 2,000 people responded. But of course, you have to disqualify people if they’re not really a writer, and the way we defined that is we asked: do you spend at least 10 hours a week on writing and editing? For somebody who didn’t, I’m like, okay, you’re not really a writer if you don’t spend at least 10 hours a week on it.
And we also looked at how people made their living. So let’s just say you’re a product manager. You’re probably doing a lot of writing, but you wouldn’t describe yourself as a professional writer. So part of what we did was to have people answer questions about what kind of writer are you?
And we had the main categories, and we captured almost everybody in them: you know, marketing writers, nonfiction authors, ghostwriters, PR writers, and so on. And although we had not intended to do so, we got almost 300 responses from fiction authors. And we were like, okay, what are we going to do here? Because these people are very different from the people who are writing in a business context, or nonfiction authors, but I don’t want to invalidate their experience.
So we basically divided up the survey, and we said, most of the responses are from people who are writing things that are intended to be true. And a small group is from people who are intentionally lying, because they’re fiction writers. So then we had an ongoing discussion about what to call the people who write things that are intended to be true. And Dan Gerstein and I eventually agreed to call them professional writers, which is not a dig at the professional fiction authors; it’s just a catchall for people who are making their living as writers of nonfiction.
Shel Holtz
Josh, you described in the survey report a deep attitudinal divide where users see productivity and non-users see what you called a sociopathic plagiarism machine.
Josh Bernoff
Thanks. Now, now, wait a minute. I didn’t call it that. One of the people who took the survey called it that. Yes, that was a direct quote. I mean, I just want to comment here that in the survey business, we call responses to open-ended questions verbatims, right? So these are the actual text responses. And because we surveyed writers, these are the best verbatims I’ve ever seen. This is extremely literate.
Shel Holtz
OK, that was a response. Got it. Well, yeah.
Josh Bernoff
A collection of people expressing their opinions, and the sociopathic plagiarism machine came from one of those folks. Yes.
Shel Holtz
I did like that a lot. But for somebody like me, a communications director managing a team, how do you bridge that gap when half the team might be ethically opposed to the tools that the other half is enthusiastically using every day?
Josh Bernoff
You just tell the other people to go to hell. No, I’m kidding! Now, this is true: one of the most notable findings of the survey was that people who do not use AI are likely to have negative attitudes about it. So it’s not just, you know, well, I don’t happen to drink alcohol, but it’s fine with me. No, these people are saying:
Josh Bernoff
This is bad for the environment. It’s an evil product. There were a lot of interesting verbatims in the survey from people like that. 61% of the professional writers said that they use AI. So this is a minority of people who are not using it, and an even smaller group who are opposed to it. But they are fervently opposed to it. The people who do use it are generally getting really useful things done. A majority say that it’s making them more productive. And the people who are most advanced are doing all sorts of things with it.
By the way, this is really important to note. The thing that everyone’s sort of morally up in arms about, which is people using AI to generate text that’s intended to be read, is actually quite rare. Only 7% did that, and only 1% did it daily. Most people are doing research, or using it as a thesaurus, or using it to analyse material that they find and are citing as their own background, or something like that. To come directly at your question, though: it is important to acknowledge this divide in any writing organisation.
And I think that the people who are using AI need to understand that there are some serious objections and they need to address that. The people who are not using it, I think, need to understand that perhaps they should be trying this out just so that they’re not operating from a position of ignorance about what the thing can do.
And I think, most importantly, the big companies that are creating AI tools need to be a lot more serious about compensating the folks who create the writing that it’s trained on. Because, putting the sociopathic plagiarism machine aside, it’s pretty bothersome when you find out that the thing has absorbed your book and is giving people advice based on it, and you got no compensation for that.
Shel Holtz
I just want to follow up on this question real quickly. Were you able to quantify the reasons among the people who don’t use it and object to it? I mean, you listed a couple, but I’m wondering if there’s any data around the percentage that are concerned about the environment, or the percentage that… I mean, the one I keep reading in LinkedIn posts is that it has no human experience or empathy, which I don’t understand why that’s a requirement for, say, earnings releases or “welcome to our new sales VP,” but nevertheless.
Josh Bernoff
Yeah, I was going to say that describes a bunch of human writers too. They don’t seem to have any empathy. So, one of the questions that we asked is, how concerned are you about the following? And then we had a list of concerns. And it’s interesting that they divide pretty neatly into things that everyone is concerned about and things that the non-users are far more concerned about. So for example, the top concern was, and I quote, “AI-generated text can include factual errors or hallucinations.” So even the people who use it are like, okay, we’ve got to be careful with this thing, because sometimes it comes up with false information.
For example, if you ask it for my bio, it will tell you that I have a bachelor’s degree in classics from Harvard University and an MBA from the Harvard Business School, and I’ve never attended Harvard. So it’s like, no, no, no, no, no, no, that’s not right!
On the other hand, there are some other things where there’s a very strong difference of opinion. So for example, the statement “AI-generated text is eroding the perception of value and expertise that experienced writers bring to a project”: 92% of the non-users of AI agreed with that, but only 53% of the heaviest users of AI agreed. So if you use AI a lot, it’s like, well, actually, this isn’t as big a problem as people think.
On the environmental question: 85% of non-users were concerned about its use of resources, but only 52% of the heavy users were. And I want to point out something which I think is probably the most interesting division here. If you ask writers, should AI-generated text be labelled as such, they mostly agree that it should. But if you ask them, should text generated with the aid of AI be labelled as such, the people who use AI often think, well, you don’t need to know that I used it to do research, because it’s not visible in the output. Whereas the non-users are like, no, you used AI, you have to label it. So that’s a good example of a place where the difference of opinion is going to have to get settled somehow over time.
Neville Hobson
That’s probably one of those things that will take a while to settle, given what you see. We talked about this recently on verification. I know some people who are very, very heavy users of AI who don’t check the output produced with the aid of their AI companion. That’s crazy, frankly, because as Shel noted in our conversation on the latest episode of the FIR podcast, your reputation is the one that’s going to suffer when you get found out that you’ve done this and haven’t disclosed it.
But it also manifests itself in, you know, the great em-dash debate that went on for most of this year. Right. I wrote a post a couple of weeks ago about this, and about ChatGPT saying you can tell it not to use em-dashes.
And my experience is, I’ve done that, and it still goes ahead and does it. It apologizes each time, and it still goes ahead and does it, you know. But you know what? That post produced an incredible reaction from people: 40,000 views in a couple of days. For me, that’s a lot, frankly. And I did an analysis, which I published just a few days ago, that showed the opinions people have about it are widely divided.
Some see it as, I’m not going to give up my whole heritage of writing just because of this stupid argument; others say you’ve got to stop it, because it doesn’t matter if AI got it from us in the first place, it signals that you’re using AI, and therefore your writing is no good. That kind of discussion was going on. So I see this continuing. It’s crazy. Looking at the data highlights, there’s some really fascinating stuff in there, Josh, that caught my eye.
The headline right at the start: writers see AI as both a tool and a threat. And yes, that’s quite clear from what you’ve been saying. But also, hallucinations concern 91% of writers. And I think that’s true, you know, no matter how experienced you are. It concerns me, which is why I’m strongly motivated to check everything, even though sometimes you think, God, just do it, don’t question, just do it.
I reviewed something recently that had 60-plus URLs mentioned in it. So I checked them all, and 15 of them just didn’t exist, or were 404s or server errors. And yet the client had already issued it without checking that kind of thing. Stuff like that. So you’ve got a job to educate them.
So I guess this is all peripheral to the question I wanted to ask you, which is about the correlation that comes across in the data highlights between AI usage and positive attitudes towards it, as opposed to the negative attitudes of non-users; the users are very highly positive.
How should we interpret this divide, I guess, is the question. You may have touched on this already, actually. Is it just a skills gap? Is it a cultural gap? Or what is it? Because the differing attitudes, like much else these days, seem to me to be quite polarised: strong opinions, pro and con. How do we interpret this?
Josh Bernoff
All right, so I want to go back to a few of the things that you said here. I have some advice in my book, Build a Better Business Book, and it’s generally good advice about checking facts that you find; false information on the internet has always been a problem for people who are citing sources.
There used to be a guy, Carl Bialik, known as “The Numbers Guy” in the Wall Street Journal, who would actually write a column every month about some made-up statistic that got into print. All that we’ve done is make it much more efficient. But people do need to check. And it’s interesting: you learn when you use these tools that it’s subtle. If you click and say, OK, that is a real source, that’s fine.
But often, it will tell you that that source says X or Y and then you go and you read it and you’re like, no, it doesn’t actually say that. So yes, you are now citing a source that when you go look at it says the opposite of what you thought it said. Real professional writers know that that is an important part of their job and it just happens to be easy to behave incompetently and irresponsibly now.
But believe me, I deal with professional publishers all the time and there are all these clauses now in their contracts which basically say you have to disclose when you’re using AI and if there’s false information in here then you’re responsible for it and we might not publish it. I will say this, so let’s just put this in a different context. So think about Photoshop.
Okay, when Photoshop started to become popular, people were like, wait a minute, we can’t believe what we see in pictures. Maybe the person doesn’t have skin that’s all that smooth. Maybe that background is fake. But in contexts where you’re supposed to be doing factual stuff, like a photo that’s in a magazine, there are safeguards against this, and the users have learned what is legit and what isn’t. And I think the readers have also learned that, okay, we have to be a little skeptical about what we see. AI has made it possible to do that with text way more easily, but it’s still the case that, as a reader, you need to be skeptical, and as a user, you need to be sophisticated about what you can and can’t do and what is and is not legit.
I do these writing workshops with corporations. I’m doing one next week with a very large media company. And I’m trying to help them to understand, start with clear writing principles and use AI to support them as opposed to use it to substitute for your judgment, generate crap, and then do a disservice to the poor people who are reading it.
Shel Holtz
I am always amused when I see people expressing such angst over AI-generated images taking money from artists. And I didn’t hear the same level of anxiety when CGI became the means of making animated movies. What happened to the people who inked the cels? They’re out of a job. No, Pixar got nothing but praise.
Josh Bernoff
Yeah, I know. Right. And it’s like, no, no, they should have actually gotten 26,000 dinosaurs into that scene? And I’m like, you were entertained, admit it, and you know that they’re not real, and that’s it…
Shel Holtz
Yeah. Josh, your data shows that thought leadership writers and PR and comms professionals are the heaviest users of AI. Thought leadership writers, 84% of them and 73% of PR and comms professionals are using AI in their writing. Journalists are somewhere around half of that at 44%.
Did you glean any insights as to why the people who are pitching the media are using this more than the people being pitched?
Josh Bernoff
I have some theories about that. What I’m about to tell you is not supported by the data, although I could go in and start digging around. There’s infinite insight in here if I do that. So I think journalists are a little paranoid about it. And the fact that, yes, 44% of the journalists said that they used it, but only 18% said that they used it every day, which is at the very bottom of all the professional writers.
So I think they are not only concerned about their livelihood, but also that they don’t wanna make a mistake. They don’t wanna get anything into print that’s false. Whereas if you look at the thought leadership writers and the PR and comms professionals, it’s a simple question of volume. These people are under pressure to produce a very large amount of information.
And I can tell you as a professional writer that there are certain tasks you really would rather not spend time on if an AI can do them. So if you’re gathering up a bunch of background information, and Perplexity does a better job on contextual searches than Google, which it absolutely does, then you’re probably going to use it.
Now, there is the risk that these people are basically generating large quantities of crap and then sharing it. But I think that that rapidly becomes unproductive. If you’re basically spamming people with AI slop, then they will immediately become sort of immune to that, and then you lose trust and at that point you’ve destroyed your own livelihood.
Neville Hobson
Yeah, absolutely. I want to ask you about one of the other findings you had in here: that ChatGPT is the clear leader amongst all writers, with 76% using it weekly. I use ChatGPT more than any other tool. I’m very happy with it; it does what I want. But in light of how fast things move in this industry and how things change, how do you see that shifting? Or does it not actually matter at the end of the day which tool you use, as long as it delivers what you want from it?
Josh Bernoff
Well, what you have here is people spending hundreds of millions of dollars to become the default choice, the sort of dominant company here. And if you look at past battles of this kind to be like, who is the top browser or what’s the top mobile operating system, this is a land grab.
If you sit out and wait and see what happens, you could very easily end up on the sidelines, which is why there’s so much money flooding into this. ChatGPT definitely has an early lead, but there was an article in the Wall Street Journal yesterday, I believe, about the fact that they’re very concerned about Google. And the reason is: on a sort of features-and-capability basis, is Google better?
It depends on what day it is; they keep making advances. But Gemini does integrate with people’s basic use of Google in other ways, for example their use of Google in email. And wait a minute, have we not heard this story before? Where a company that has a dominant position in one area attempts to leverage it in another? Gee, that’s the whole story of the tech industry for the last 30 years!
Josh Bernoff
The same is true at my daughter’s company, which uses Microsoft products, which is very common. And so everybody in that company is using Microsoft Copilot, because they got it for free. If you ask me who is going to have the top market share in 18 months, I have no clue, but I don’t think that ChatGPT is necessarily in a position to say, ours is clearly better than everybody else’s, and so everyone will use what we have.
I will point out that, and I’m trying to remember if I have the number on this, the average person who is using these tools in a sophisticated way is typically using at least three or four different tools. So just like you might use Perplexity for one web search and Google for another, you might decide to use Microsoft Copilot in some situations and Google Gemini in others.
Neville Hobson
It’s interesting you say that, because I started using Copilot recently through a change in how I’m doing something for one particular area of work I’m interested in. And it blew me away, because when I’m using Copilot, it’s using GPT-5. And I sense that the output I get from the input I give it is in a similar style to what ChatGPT would write.
So I’m impressed with that, and I haven’t attached any further significance to it. Maybe it’s coincidental, but I quite like that. So that’s actually getting me more accustomed to Microsoft’s product. These little things, maybe this is how it’s all going to work in the end.
Josh Bernoff
Yeah, yeah. I will point out that professional writers I talk to are very enamoured of Claude as far as the creation of text. And definitely, if you’re doing a web search, Perplexity has got some pretty superior features for that. I often find myself telling ChatGPT, don’t show me anything unless you can provide a link, because I’m not going to trust you until you do that. And I’m going to check that link and see what it really says.
So, you know, the development of specialised tools for specialised purposes is absolutely going to continue here.
Shel Holtz
Yeah, I’ve been using Gemini almost exclusively since 3.0 dropped. I find it’s just exponentially better, but I’m sure that when ChatGPT releases their next model, I’ll be back to that. In the meantime, I did see Chris Penn commenting, I think it was just yesterday on that Wall Street Journal article pointing out that it’s baked into Google Docs and Google Sheets and all the Google products, whereas OpenAI doesn’t have any products to bake it into.
And that’s a clear advantage to Google. But Josh, you revealed in the research that 82% of non-users worry that AI is contributing to bland and boring writing. What I found interesting was that 63% of advanced users felt the same way, that it’s creating this AI slop.
So as a counsellor to writers, how would you counsel people? Our audience is organisational communicators, so I’ll say: how would you counsel organisational communicators when cutting through the noise is vital and you need to reach your audience? I deal mostly with employee communication, and we need employees to pay attention to this message, despite the fact that there are so many competing things out there just clamouring for their attention. How do you avoid the trap of this bland and boring writing when you’re so desperate to cut through that clutter and capture that attention?
Josh Bernoff
Yes, well, large language models create bad writing far more efficiently than any tool we’ve ever had before. So, and of course, I’m talking to both corporate writers and professional authors all the time about this. And so basically, the general advice is that the more you can use this for things behind the scenes, the better off you are and the more you use it to actually generate text that people read, the worse off you are.
I’m gonna give you a very clear example. So I am currently collaborating with a co-writer on a book about startups for a brilliant, brilliant author who really knows everything about startups, has an enormous background on it. And he has insisted that I use AI for all sorts of tasks. In fact, he’s like, you know, why are you wasting your time when you could just send this thing off and tell it to do the research? And we’ve done some spectacular things like I had a list of startups and I told it to go out on the internet and get me a simple statement about who they are, what financing stage they’re in, what category they’re in.
And it goes off and it does that. That would have taken me days. But because this guy is intelligent, there’s a reason he’s hired me and not replaced me with AI: once it’s time to actually create something that’s gonna be read by people, we have to rewrite that from beginning to end. As a professional writer, that is how I make a living. And what I write is the complete opposite of bland and boring. And he doesn’t want bland and boring. He wants punchy and surprising and… insightful.
So, you know, you can both say use AI for all of this other stuff and don’t you dare publish anything that it creates. And I feel like that is generally the right advice, that everybody is going to end up where I have ended up, which is: even in a corporate environment, it can support you, but you’re not using it to generate text that people are going to actually read.
Neville Hobson
It’s a really good point you’ve made there, I think, because one of the findings in the survey report is that AI-powered writers are sure they’re more productive, and I definitely sit in that category. I’m absolutely convinced I’m in that, what is it, 92% or whatever it is of the advanced users who think so. How do I prove it?
Well, it’s not so much the output, it’s the quality. It kind of chimes with some of the reports that you read or what others are saying elsewhere: use AI tools to support you in doing the stuff that AI is better at than humans, structured or unstructured data, whatever it is, finding patterns, all that stuff that we can all read about. And you do the intellectual stuff, the stuff humans are really good at.
Josh Bernoff
Absolutely.
Neville Hobson
And then what you produce sounds great, the phrases and sentences. And I’ve said this to lots of people, but I don’t see too many people doing that. So they’re obviously not in the advanced stage, let’s say. I find it hard to believe, frankly. Really I do. In conversations I’ve had during this year with those who diss this, who say, like some of your respondents have said, you know, it’s the, what is it, psychotic plagiarism machine or whatever it was, the stuff…
Josh Bernoff
Sociopathic, but yes.
Shel Holtz
Both things can be true.
Neville Hobson
…sorry, sociopathic. But it amazes me, it truly does. And I think we’ve got this situation where clearly there is evidence that if you use this in an effective way, it will help you be productive.
It will augment your own intelligence, to use a favourite phrase of mine. So AI is augmenting intelligence, not artificial. And yet it still encounters brick walls and pushback on a scale that’s ridiculous. It’s worse in an organization when that’s at a leadership level, I would say.
So how do we kind of make this less of a threat as it’s seen by others, or is this part of the issue that those naysayers just see all this as a massive threat?
Josh Bernoff
Well, boy, that’s a deep question. So first of all, I always start with the data here, because I want to distinguish between my opinions and the data. And the data says that the more you use AI, the more likely you are to say that it is making you more productive. And as you said, 92% of the advanced users said that it made them more productive. And interestingly, 59% of the advanced users said that it actually made the quality of their writing better.
So it’s not just producing more, but producing better stuff. And one more statistic here: we actually asked them how much more productive. The average across all the writers who use it is 37% more productive. But like any tool, you need to get adept at it and learn what it’s good at and what you can use it for. And this technology has advanced way, way ahead of the learning about how to use it.
So there basically has to be a movement in every company and all writing organizations to teach people the best way to take advantage of it and what not to do. And in fact, one of the things that I recommend and that I tell some of the corporate clients I work with is: find the people who are really good at this and then have them train the other people.
Because there’s nothing better than somebody saying, okay, here, let me show you what I can do with this.
I’ll just give you an example. So this report itself, obviously people are saying, well, did you use AI to write the report? I started out trying to use AI to analyse the data and I found that it was not dependable. I’m like, okay, I’m gonna have to calculate these statistics the old-fashioned way with spreadsheets and data tools. Every single word of the report was written by a human, me, at least most people still think I’m a human.
But we had, you know, thousands of verbatims to go through. And the person to whom I delegated the task of finding the most interesting verbatims used AI to go in and find verbatims that were interesting: some positive ones, some negative ones, and some diversity in terms of who they were from, so we weren’t quoting all technical writers. And that’s a perfect use, to go into a huge corpus of text and pull some of the interesting things out of there, because that would have taken days.
I can’t help mentioning here because in preparation for doing this report, I interviewed some of the most advanced writers that I knew, including Shel. And one of my favourite examples is a very intelligent woman who, Shel, I know you know, is completing her doctoral degree right now. And she told me that the review of existing research is an enormous element of this, and that using AI to help summarise and compare the existing research would save her three years in the completion of her doctoral degree.
You cannot walk away from that level of productivity. And she’s full of enormously creative ideas. So this is not a bad writer. This is an excellent writer, but what she’s doing is saying: I had this brilliant idea. Hey, is there anything in the literature that’s similar to this? Oh, wait a minute, these people came up with the same thing, so I can’t claim the authorship. Or: it went across all the research and nobody else is saying that. Great, this is an original thing I can include. That’s a smart way to use it.
Shel Holtz
Yeah, just this past week I interviewed our new safety director, who just came on board. I used Otter AI to do the interview. I like that because I’m able to focus on the interview subject rather than scribble notes. And what I did was upload the transcript of the interview that I downloaded from Otter into Gemini, because the interview led to a lot of digressions and a lot of personal back-and-forth that interrupted the substance of what we were trying to get to.
So I just said, clean up this transcript, get rid of everything that doesn’t have to do with his coming on board at our company as the new safety director, his background and all of that, and then categorise it. But don’t change any of his words, right? I want the transcript to be exact. And it did exactly what I asked it to do.
For me to take that transcript… well, first of all, for me to take all those notes and then put it in some sort of usable form before I even start writing the article would have taken a considerable amount of time. And yet it didn’t mess at all with what he was telling me in response to my questions. And I was able to use that to produce an article that I wrote.
One of my favourite uses though, as a writer, is when there’s a turn of phrase that I want to use and I can’t quite draw it out. I know what it is. It’s right there. So I’ll share what I’m writing about. And this is what I’m trying to say. And there’s a turn of phrase I’m thinking of. What is it? And it’ll say, well, it might be one of these. And almost always from the list it gives me, that’s the one I was thinking of.
Josh Bernoff
This is a way better thesaurus than anything else I’ve ever used. And at the age that we’re at, sometimes you know there’s a word and you can’t bring it to mind. Then I’m like, yeah, that was the word I was looking for.
Shel Holtz
Yeah. Josh, you found that 40% of freelancers and agencies say that AI has eaten into their income. If you were advising, say, a boutique PR agency today on how to survive in 2026, what’s the one pivot that you would advise them that they need to make based on this data?
Josh Bernoff
I think you need to focus on talent that has two skills. One is clear and interesting writing; those skills are even more valuable than they used to be. So, you know, if you ask who are the best writers in our organisation, do everything you can to hang on to those people, because you’re going to need them to continue to stand apart from the AI slop.
And then the other side of that is to become as efficient as possible with AI for the rote tasks. So you also want people who are really skilled at using these tools to conduct research tasks. I interviewed a woman at the Gathering of the Ghosts, which is the event where this research was first presented. She matches up ghostwriters to author clients, and she uses AI to get a background briefing on every single person she goes and pitches. It’s really good at that.
So when she gets on the phone with these people, they’re like, wow, she’s really smart. She did a whole lot of homework here, and this is the kind of person I want to work with. It has nothing to do with her writing ability. It has to do with her ability to take advantage of these tools. And yeah, I think we’re going to be able to get more done with fewer people, which is a tale as old as time, really. That’s just the direction that things go with automation.
But I can’t resist pointing out the flip side. I think a bunch of people, including publishers, are now delegating work to AI and laying people off, and it’s doing a bad job. I ghostwrote a book recently where the copy editing came back and I was like, this is inadequate. This is a terrible job. This was obviously done by a machine, and done badly by a machine.
And my client and I decided that in order to avoid errors, we would hire our own professional copy editor because the publisher had skimped in exactly the wrong place. And the professional copy editor did a fantastic job. It cost a bunch of money, but we were much happier with that.
Neville Hobson
To continue this theme slightly, I had a question, part of which I think Shel answered, about the page in the report with the headline “Nearly half of writers have seen AI kill a friend’s job.” I found that interesting because there’s constant talk in some of the mainstream media, and some of the professional journals too: is AI going to replace jobs? One report comes out and before you know it the headline says yes, it is. Another report comes out saying no, it’s not.
But these are intriguing, I found, because they’re actual real-world examples you’ve got from people who answered the questions you asked them in the survey. It says only 10% of corporate workers have had AI-driven layoffs at their organization, but 43% of writing professionals know someone who has lost their job to AI. So is this a trend that’ll continue this way, do you think? How would you interpret the overall picture you’ve shown on this particular page, page 20 in the report?
Josh Bernoff
Okay, yes. So it was interesting. We expected to hear a lot more direct responses of, yes, they’ve done layoffs at my workplace as a result of this. And the fact that only 10% of the people who worked in corporations, which includes media companies, said that they had seen this was an indication to me that, at least at the time we did this survey in August and September, that was not a huge trend.
The fact that a lot of people know somebody who lost their job, you know, if one person loses their job and they have 12 friends, then we’re gonna get 12 positives on that. But that having been said, I’m not convinced that even if we did this survey now, which is what, like four months later, that we would get the same results.
It’s clear to me that there are a lot of layoffs happening and that a significant amount of it is AI-stimulated. A certain amount of that is coders, for example. They need fewer coders to do the same programming now. My daughter got a computer science degree a few years ago because it was like everyone knew that was how you got a job, and you know, it’s not so easy right now.
I think that we’re going to see two things. First of all, we’re going to see this trend of people being laid off because AI increases productivity across the entire employment spectrum. It’s a huge trend that’s likely to happen. But I also think that you’re going to find companies backtracking and saying, oh my God, we thought we could have all this productivity, but it turns out that we need more humans here than we realised and we need to go back and bring them back.
I feel that it is driven to a certain extent by investment mania to cut back expenses, and that in the end, as in so many cases when you replace people with automation, you end up with a poor-quality result.
Shel Holtz
I want to talk about fiction authors for a minute. And I find it intriguing that they are so universally anti-AI. Neville and I are both friends with JD Lasica. I don’t know if you know JD. He’s got a product out there called Authors AI. It’s a model that he and his partners have trained. It’s not using ChatGPT or Gemini or any of the large frontier models.
But what you do is feed your novel to it, presumably in a first draft, and it analyses the novel against all of the criteria it has been trained on about what makes a good novel and gives you a report: you need to do a better job of character development here, the story arc is weak here, things like that. So, I mean, there are uses for fiction writers beyond actually writing for you, but you did note that they almost universally detest it. Only 42% use it and they are…
Josh Bernoff
No, no, no, no. Let’s be clear here. It was the non-users among the fiction authors who almost universally detested it.
Shel Holtz
Okay, I misread that. Emphatically angry was the language that jumped out at me. I’m wondering for those of us in business writing, is there a lesson we should take away from fiction writers about the preservation of the soul of a narrative?
Josh Bernoff
No, no, it’s interesting to me. So I’ve been conducting surveys now for probably 20 years. And one of the main things that you learn is that it’s never black and white. There’s never a hundred percent of the people who agree with anything. There’s never 0% of the people who agree with anything. Until this survey, when I found that fiction authors who do not use AI are as close as you can get to unanimous about it being a horrible, evil thing.
So yes, I was like, 100% of the people agreed with this? I’ve never seen that in my entire career of analysing surveys. But to give you a little bit more thoughtful answer than “no”: soulless fiction is boring and nobody wants to read it. And that happens to also be true of soulless nonfiction writing.
So let’s just take this report. If I used AI to generate the text in this report, you wouldn’t be talking to me because I found the most interesting things in the most interesting language to describe it. And the same applies if you’re writing about, you know, should we adopt a new project management methodology?
That’s a story, you know? We have this problem. This solution was suggested to us. We compared this to that. It looks like this is going to save money, but here are the things that I’m really worried about. This is an emotional story. And really, all nonfiction writing needs to have a story element to it. Until AI becomes a little bit less soulless, which may never happen, you still need humans to tell those stories.
Neville Hobson
Yeah, I agree with that. So before we get to that question of what question we should have asked you, I’m looking at page 28, what these findings mean for the writing profession. And it’s really well done, this, Josh; you succinctly condensed it all. But to avoid me trying to interpret what you said, can you give us a summary of what these findings do mean for the writing profession?
Josh Bernoff
Well, thank you.
You know, it’s interesting Neville, there was always a section like that at the end of my reports at Forrester Research, because that’s what they were paid for. And in this case, I said, no, I’m just going to do the data. And my partner here, the people at Gotham Ghostwriters, Dan was like, why don’t you write something about what this means for the industry? I’m like, I can do that. Good idea! Okay.
So I wrote this and I think that in corporate environments, it is important now to understand what this is good for and to take the people who’ve become advanced at it and use them to help train other folks. And it’s especially challenging, I think, in media organisations because on the one hand, they are under enormous pressure, profit pressure.
You know, think about a newspaper or magazine or publisher. It’s very difficult for them to be profitable, highly competitive environment. If they can cut costs, they’re gonna try and find a way to do it. On the other hand, it is exactly their content that’s getting hoovered up and ripped off.
So they need to have a balance here. I think on a political basis, they need to lobby and basically do everything possible to preserve the value of their content and not have it be used for training purposes without any compensation. But I also think they have to be very prudent in what kinds of things they get AI to do and what they don’t, just like the people at that publisher who used the AI copy editing that did a terrible job. If they economise in the wrong places, it’s gonna be a very bad scene.
I can’t help but drop this in here. I learned recently about a romance bookstore, a bookstore that sells romances, a physical bookstore. And they’re using AI to analyse trends, figure out which books to stock and how to organise them and what to put into their marketing. And I just thought that was fascinating because the content is as human and emotional as you can be, and yet they figured out a way to use AI to be successful.
Shel Holtz
That’s really interesting. So let’s ask you that question now, Josh. I mean, we could spend another hour here, but what question didn’t we ask that you were hoping we would?
Josh Bernoff
I think that the most interesting finding here, and there were so many fascinating findings, so that’s saying something, was in the questions we asked about what tasks you do with AI. And what really amazed me was the huge variety of tasks. So I wasn’t surprised that research was up there, but I’m looking over to the side here just to make sure I get the information exactly accurate.
I wasn’t surprised that replacement for web search and finding words or phrases, like a thesaurus, was something that people wanted, but I was surprised by how many people use AI as a brainstorming companion. They’re actually asking questions like, can I write it this way or that way? What suggestions do you have? And getting great ideas back on that. Summarising articles is very popular, but, you know, generating outlines, finding flaws and inconsistencies, acting as a devil’s advocate, deep research reports. I mean, the people who get good at this keep coming up with new ways to use it.
So I think that if you look at what’s happening in the future, all this debate about AI-generated slop getting published is much less interesting to me than the capability that this has to make writers more powerful, smarter, more interesting, come up with more ideas, and to basically be an infinitely patient assistant that can get you to be the best writer you can possibly be.
Shel Holtz
Yeah, that devil’s advocate is one of the very first things I used it for when ChatGPT was first introduced. I would say I’m planning on communicating this this way. The goal, the objective is to get employees to think, believe, do X. What pushback am I going to get from this approach? And nine times out of 10, it would come up with a very valid list of reasons that this isn’t going to work. It would lead me to re-strategise.
Josh Bernoff
Well, Shel, as you know, you can contact me anytime if you need someone to tell you that you’re wrong! But I’m not available at three in the morning, and ChatGPT is, so from that perspective it’s probably better. Plus my rates are much higher than theirs.
Shel Holtz
Josh, how can our listeners find you?
Josh Bernoff
Well, the most interesting thing is to subscribe to my blog at bernoff.com. I actually write a blog post about books, writing, publishing, and authoring every weekday. People say, why do you do that? The only good answer I have is it’s a mental illness, but you may as well take advantage of it. And we shared the URL for this research report, and certainly anyone who’s interested in writing a business book, just do a search on “Build a Better Business Book” and you can get access to that.
And certainly if someone is so desperate that they really want a human to help them, I am available for that.
Shel Holtz Thanks so much, Josh. We really appreciate your time.
Josh Bernoff Okay, was really great to talk to you.
Neville Hobson Yeah, a pleasure, likewise, thank you.
The post AI and the Writing Profession with Josh Bernoff appeared first on FIR Podcast Network.
Big Four consulting firm Deloitte submitted two costly reports to two governments on opposite sides of the globe, each containing fake sources generated by AI. Deloitte isn’t alone. A study published on the website of the U.S. Centers for Disease Control (CDC) not only included AI-hallucinated citations but also purported to reach the exact opposite conclusion from the real scientists’ research. In this short midweek episode, Neville and Shel reiterate the importance of a competent human in the loop to verify every fact produced in any output that leverages generative AI.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, December 29.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Neville Hobson: Hi everybody and welcome to For Immediate Release. This is episode 491. I’m Neville Hobson.
Shel Holtz: And I’m Shel Holtz, and I want to return to a theme we addressed some time ago: the need for organizations, and in particular communication functions, to add professional fact verification to their workflows—even if it means hiring somebody specifically to fill that role. We’ve spent the better part of three years extolling the transformative power of generative AI. We know it can streamline workflows, spark creativity, and summarize mountains of data.
But if recent events have taught us anything, it’s that this technology has a dangerous alter ego. For all that AI can do that we value, it is also a very confident liar. When communications professionals, consultants, and government officials hand over the reins to AI without checking its work, the result is embarrassing, sure, but it’s also a direct hit to credibility and, increasingly, the bottom line.
Nowhere is this clearer than in the recent stumbles by one of the world’s most prestigious consulting firms. The Big Four accounting firms are often held up as the gold standard for diligence. Yet just a few days ago, news broke that Deloitte Canada delivered a report to the government of Newfoundland and Labrador that was riddled with errors that are characteristic of generative AI. This report, a massive 526-page document advising on the province’s healthcare system, came with a price tag of nearly $1.6 million. It was meant to guide critical decisions on virtual care and nurse retention during a staffing crisis.
But when an investigation by The Independent, a progressive news outlet in the province, dug into the footnotes, the veneer of expertise crumbled. The report contained false citations pulled from made-up academic papers. It cited real researchers on papers they hadn’t worked on. It even listed fictional papers co-authored by researchers who said they had never actually worked together. One adjunct professor, Gail Tomblin Murphy, found herself cited in a paper that doesn’t exist. Her assessment was blunt: “It sounds like if you’re coming up with things like this, they may be pretty heavily using AI to generate work.” Deloitte’s response was to claim that AI wasn’t used to write the report, but was—and this is a quote—”selectively used to support a small number of research citations.” In other words, they let AI do the fact-checking, and the AI failed.
Amazingly, Deloitte was caught doing something just like this only months before the Canadian revelation, in an audit for the Australian government. Deloitte Australia had to issue a humiliating correction to a report on welfare compliance. That report cited court cases that didn’t exist and contained quotes from a federal court judge that had never been spoken. In that instance, Deloitte admitted to using the Azure OpenAI tool to help draft the report. The firm agreed to refund the Australian government nearly AU$290,000.
This isn’t an isolated incident of a junior copywriter using ChatGPT to phone in a blog post. This is a pattern involving a major consultancy submitting government audits in two different hemispheres. The lesson is pretty stark: The logo on your letterhead isn’t going to protect you if the content is fiction. In fact, this could have long-term repercussions for the Deloitte brand.
But it doesn’t stop at consulting firms. Here in the US, we’ve seen similar failures in the public sector. There’s one from the Make America Healthy Again (MAHA) commission, which released a report with non-existent study citations. And a presentation on the CDC website (that’s the Centers for Disease Control) cited a fake autism study that contradicted the real scientists’ actual findings.
The common thread here is a fundamental misunderstanding of the tool. For years, the mantra in our industry was a parroting of the old Ronald Reagan line: “Trust but verify.” When it comes to AI though, we just need to drop that “trust” part. It’s just verify. We have to remember that large language models are designed to predict the next plausible word, not to retrieve facts. When Deloitte’s AI invented a research paper or a court case, it wasn’t malfunctioning. It was doing exactly what it was trained to do: tell a convincing story.
And that brings us to the concept of the human in the loop. This phrase gets thrown around a lot in policy documents as a safety net, but these cases prove that having a human involved isn’t enough. You need a competent human in the loop. Deloitte’s Canadian report undoubtedly went through internal reviews. The Australian report surely passed across several desks. The failure here wasn’t just technological, it was a failure of human diligence. If you’re using AI to write content that relies on facts, data, or citations, you can’t simply be an editor. You must be a fact-checker.
Deloitte didn’t just lose money on refunds or potential reputational hits; they lost the presumption of competence. For those of us in PR and corporate communications, we’re the guardians of our organization’s truth. If we allow AI-generated confabulations to slip into our press releases, earnings statements, annual reports, or white papers, we erode the very foundation of our profession. Communicators need to update their AI policies. Make it explicit that no AI-generated fact, quote, or citation can be published without primary source verification. And you need to make sure that you have the human resources to achieve that. The cost of skipping that step, trust me, is a lot higher than a subscription to ChatGPT.
Neville Hobson: It’s quite a story, isn’t it really? I think you kind of get exasperated when we talk about something like this, because we’ve talked about this quite a bit. Most recently, in our interview with Josh Bernoff—which will be coming in the next day or so—where this very topic came up in discussion: fact-checking versus not doing the verification.
I suppose you could cut through all the preamble about the technology, because the issue isn’t the technology; it’s the humans involved. Now, we don’t know more than what’s in the Fortune article, the one I’ve seen in Entrepreneur magazine, and the link that you shared. Nowhere do they disclose detail about exactly what happened beyond the citations. So we don’t know: was it prompted badly, or what? Either way, someone didn’t check something. I don’t know how much you need to hammer home the point that if you don’t verify the output the AI assistant gives you in response to your input, you’re just asking for this kind of trouble.
I did something just this morning, funnily enough, when I was doing some research. The question I asked came back with three comments linking to the sources. A bit like Josh—he mentioned this in our interview—every instruction to your AI should say: “Do not come back with anything unless you’ve got a source.” So I checked the sources, one of which just did not exist. The document concerned wasn’t there on the website of a reputable media company. Now, it could be that someone had moved it, or that it did exist but in another location. But the trouble is, when these things happen, you tend to fall on the side of, “Look, they didn’t do this properly.”
So I’m not sure what I can add to the story, Shel, frankly. Your remarks towards the end were right: your reputation is the one that’s going to get hit. You look stupid. You really do. And your credibility suffers.
I found that Entrepreneur quoted a Deloitte spokesperson saying, “Deloitte Canada firmly stands behind the recommendations put forward in our report.” Excuse me? Where’s the humility there? Because you’ve been caught out doing something here. And they’re saying, “We’re revising it to make a small number of citation corrections which do not impact the report finding.” What arrogance they are displaying. Not an apology—or fine, let’s say they don’t need to apologize—but at least a more credible explainer that shows some empathy, rather than this arrogant “Well, we stand by it.” Just a little citation correction? It’s actually a big deal when you cite something that either doesn’t exist or is a fake document. So I don’t know what I can say to add anything more. But if they keep doing this, they’re going to lose business big time, I would say.
Shel Holtz: It didn’t exist. Yeah, I understand their desire to stand by the report. I have no doubt that they had valid information and made valid recommendations, but that’s hardly the point. The inaccuracies call all of the report into question, even if at the end of the day they can demonstrate that they used appropriate protocols and methodologies to develop their recommendations based on accurate information.
You still have this lingering question: “Well, you got this wrong, what else did you get wrong? What else did you turn over to AI that you’re not telling us about because you didn’t get caught?” Even if they didn’t do any of that, those questions are there from the people who are the ones who paid for this report. If I were representing a government that needed this kind of work, first of all, I would be hesitant to reach out to Deloitte. I would be looking at one of their competitors.
If I had a long-standing relationship with Deloitte, and even if I had a high degree of trust with Deloitte, I would still add a rider to a contract that says either you will not use AI in the creation of this report, or if you do, you will verify each citation and you will refund us X dollars—the cost of this report—for each inaccurate or invalid citation that you submit. I’d want to cover my ass if I were a client, based on their having done this not once, but twice.
Neville Hobson: Right. I wonder what would have happened if the spokesman at Deloitte Canada had said something like, “You’re absolutely right. We’re sorry. We screwed up big time there. We made a mistake. Here’s what happened. We’ve identified where the fault lay, it’s ours, and we’re sorry. And we’re going to make sure this doesn’t happen again.”
Shel Holtz: “Here’s how we’re going to make sure it doesn’t happen again.” Yeah, I mean, this is like any crisis. You want to tell people what you’re going to do to make sure it doesn’t happen again.
Neville Hobson: Yeah, exactly. So they say—and you mentioned—”AI was not used to write the report, it was selectively used to support a small number of research citations.” What does that mean, for God’s sake? That’s kind of corporate bullshit talk, frankly. So they use the AI to check the research citations? Well, they didn’t, did they? “Selectively used to support a small number of research citations…” I don’t know what that even means.
So I don’t think they’ve done themselves any favors with the way they’ve responded, and the reporting has spread out into a variety of other media, all basically saying the same thing: they did this work for this client and it was bad. They didn’t do a good job at all.
Shel Holtz: Yeah. So, I’m, as you know, finishing up work on a book on internal communications. It was originally 28 blog posts and I started this back in, I think, 2015. So a lot of the case studies have gotten old. So I did some research on new case studies and I used AI to find the case studies. And then I said, “Okay, now I need you to give me the links to sources that I can cite in the end notes of each chapter that verify this information.”
In a number of cases, it took me to 404s on legitimate websites—Inc, Fortune, Forbes, and the like. But the story wasn’t there and a search for it didn’t produce it. And I would have to go back and say, “Okay, that link didn’t work. Show me some that are verified.” And sometimes it took two, three, four shots before I got to one where I look and say, “It’s a credible source, it’s a national or global business publication or the Financial Times or what have you, the article is here and the article validates what was in the case study,” and that’s the one I would use. But it takes time, and I think any organization that doesn’t have somebody doing that runs the risk of the credibility hit that Deloitte’s facing.
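The manual loop Shel describes—follow each AI-supplied link, flag the 404s, then verify the survivors by hand—can be partly automated for the first pass. Here’s a minimal sketch in Python; the function names are illustrative (not from any tool mentioned in the episode), and it only flags dead or suspect links. A human still has to confirm that each live page actually supports the claim it’s cited for.

```python
# Sketch: a first-pass triage of citation links from an AI-drafted document.
# It classifies HTTP responses; it does NOT verify that a live page
# actually says what the citation claims. That step stays human.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def classify_status(code: int) -> str:
    """Map an HTTP status code to a rough verdict for a citation link."""
    if 200 <= code < 300:
        return "ok"            # page exists; content still needs human review
    if code in (301, 302, 307, 308):
        return "moved"         # may still exist at a new location
    if code == 404:
        return "missing"       # the classic dead AI citation
    return "check manually"    # 403s, 500s, paywalls, etc.

def check_link(url: str, timeout: float = 10.0) -> str:
    """Fetch a URL and classify the result; network failures count as unreachable."""
    req = Request(url, headers={"User-Agent": "citation-checker/0.1"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return classify_status(resp.status)
    except HTTPError as err:
        return classify_status(err.code)
    except URLError:
        return "unreachable"

# Usage (requires network access):
# for url in urls_from_report:
#     print(url, "->", check_link(url))
```

Even with a script like this, a link that resolves is only half-verified; as Shel notes, the page also has to be the credible source it claims to be and actually validate the case study.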
Neville Hobson: Yeah, I mean, this story probably isn’t going to make front-page headlines everywhere, but it hasn’t died yet. Maybe there’ll be more in professional journals later on. But I wonder what they’re planning next, because the criticisms aren’t going away, it seems to me.
Shel Holtz: No, and as the report noted, it’s not just the Deloittes of the world. It’s Robert F. Kennedy’s Department of Health and Human Services justifying their advisory board’s decisions to rewrite the rules on vaccinations based on citations that not only don’t exist, but that contradict the actual research that the scientists produced.
Neville Hobson: Well, there is a difference there though. That’s run by crazy people. I mean, Deloitte’s not run by crazy people.
Shel Holtz: Not as far as I know. That’s true. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #491: Deloitte’s AI Verification Failures appeared first on FIR Podcast Network.
Studies purport to identify the sources of information that generative AI models like ChatGPT, Gemini, and Claude draw on to provide overviews in response to search prompts. The information seems compelling, but different studies produce different results. Complicating matters is the fact that the kinds of sources AI uses one month aren’t necessarily the same the next month. In this short midweek episode, Neville and Shel look at a couple of these reports and the challenges communicators face relying on them to help guide their content marketing placements.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, December 29.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz Hi everybody, and welcome to episode number 490 of For Immediate Release. I’m Shel Holtz.
Neville Hobson And I’m Neville Hobson. One of the big questions behind generative AI is also one of the simplest: What is it actually reading? What are these systems drawing on when they answer our questions, summarize a story, or tell us something about our own industry? A new report from Muck Rack in October offers one of the clearest snapshots we’ve seen so far. They analyzed more than a million links cited by leading AI tools and discovered something striking.
When you switch citations on, the model doesn’t just add footnotes, it changes the answer itself. The sources it chooses shape the narrative, the tone, and even the conclusion. We’ll dive into this next.
Those sources are overwhelmingly from earned media. Almost all the links AI cites come from non-paid content, and journalism plays a huge role, especially when the query suggests something recent. In fact, the most commonly cited day for an article is yesterday. It’s a very different ecosystem from SEO, where you can sometimes pay your way to the top. Here, visibility depends much more on what is credible, current, and genuinely covered. So that gives us one part of the picture.
AI relies heavily on what is most available and most visible in the public domain. But that leads to another question, a more unsettling one raised by a separate study published in JMIR Mental Health in November. Researchers examined how well GPT-4o performs when asked to generate proper academic citations. And the answer is: not well at all. Nearly two thirds of the citations were either wrong or entirely made up.
The less familiar the topic, the worse the accuracy became. In other words, when AI doesn’t have enough real sources to draw from, it fills the gaps confidently. When you put these two pieces of research side by side, a bigger story emerges. On the one hand, AI tools are clearly drawing on a recognizable media ecosystem: journalism, corporate blogs, and earned content. On the other hand, when those sources are thin, or when the task shifts from conversational answers to something more formal, like scientific referencing, the system becomes much less reliable. It starts inventing the citations it thinks should exist.
We end up with a very modern paradox. AI is reading more than any of us ever could, but not always reliably. It’s influenced by what is published, recent, and visible, yet still perfectly capable of fabricating material when the trail runs cold. There’s another angle to this that’s worth noting.
Nature reported last week that more than 20% of peer reviews for a major AI conference were entirely written by AI, many containing hallucinated citations and vague or irrelevant analysis. So if you think about that in the context of the Muck Rack findings in particular, it becomes part of a much bigger story. AI tools are reading the public record, but increasing parts of that public record are now being generated by AI itself.
The oversight layer we rely on to catch errors is starting to be automated as well. And that creates a feedback loop where flawed material can slip into the system and later be treated as legitimate source material. For communicators, that’s a reminder that the integrity of what AI reads is just as important as the visibility of what we publish. All this raises fundamental questions. How much does earned media now underpin what AI says about a brand?
If citations actively reshape AI outputs, what does that mean for accuracy and trust? How do we work in a world where AI can appear transparent, citing its sources, while still producing invented references in other contexts? The Muck Rack and JMIR studies show that training data coverage, not truth, determines what AI cites. So the question “what is AI reading?” has two answers, I think. It reads what is most visible and recent in the public domain, and it invents what it thinks should exist when the knowledge isn’t there. That gap between the real and the fabricated is now a core communication risk for organizations. How do you see it, Shel? Thoughts on that?
Shel Holtz It is a very, very complex issue. I was looking at a study from Profound called AI Search Volatility. And what it found was that search engines within the AI context, the search that ChatGPT and Gemini and Claude conduct, are probabilistic rather than deterministic, which means that they’re designed to give different answers and to cite different resources, even for the same query over time.
Another thing this study found was citation drift. That is, the domains cited in July were not necessarily the ones cited in June for the same prompts. Look at the share of domains cited in July that weren’t present in June: nearly 60% for Google AI Overviews, just over 54% for ChatGPT, over 53% for Copilot, and over 40% for Perplexity. So 40 to 60% of the domains cited in AI responses are going to be different a month later for the same prompt. And this volatility increases over time, going from 70 to 90 percent over a six-month period.
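To make the drift numbers concrete, here’s a minimal sketch of how a month-over-month citation-drift figure like those could be computed: the share of this month’s cited domains that weren’t cited last month. The metric definition and the example data are my illustrative assumptions, not Profound’s published methodology.

```python
# Sketch: month-over-month citation drift, assumed here to mean the
# fraction of this month's cited domains that are new since last month.

def citation_drift(last_month: set[str], this_month: set[str]) -> float:
    """Fraction of this month's cited domains absent from last month's set."""
    if not this_month:
        return 0.0
    new_domains = this_month - last_month  # cited now, but not before
    return len(new_domains) / len(this_month)

# Hypothetical snapshots of domains cited for the same prompt:
june = {"nytimes.com", "forbes.com", "cdc.gov", "ft.com"}
july = {"nytimes.com", "reuters.com", "ft.com", "wsj.com", "bbc.co.uk"}

drift = citation_drift(june, july)
print(f"{drift:.0%} of July's cited domains were absent in June")  # → 60%
```

In this toy example three of July’s five domains are new, giving 60% drift, which is in the same range the Profound study reports for Google AI Overviews.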
So you look at one of these studies, which is a snapshot in time, and it’s not necessarily telling you that you should use this information as a strategy to guide where you publish your content if the sources are going to drift. And by the way, a Profound study by their AEO specialist, a guy named Josh Bliskolp, found that AI relies heavily on social media and user-generated content, which is different from what the Muck Rack study found. They were probably getting a snapshot in time where the citations had drifted. So, while I think all these studies are interesting, what they tell us as communicators looking to show up in these answers is that we need to be everywhere.
Neville Hobson Yeah, I’ve been trying to get my head around this. I must admit, reading these reports, the Nature one threw me sideways when I found it, because I thought: how relevant is that to the topic we’re discussing in this podcast? My further research showed it is relevant, as that content is being fed back into the system and showing up in results. You’re right. In another sense, I think you can take all these survey reports and dissect them every which way.
But they have credibility in my eyes, certainly, particularly Muck Rack’s. I find the JMIR one equally good, but it touches on areas I’m not wholly familiar with. The one in Nature is equally good, and what it shows is quite troubling, I think. Listening to how you described the Profound report on citation consistency over time, I kept thinking about the Nature one as an example. Measuring citation consistency over time sounds great, but what if the citations are fake, full of hallucinations, full of invalid information? Where does that sit? That’s my question, I suppose.
Shel Holtz Well, yeah, this shouldn’t surprise anybody who’s been paying attention. AI still confabulates. It still says at the bottom of ChatGPT or Gemini that these tools are prone to misinformation. They are configured more to satisfy your query than to be accurate. So when they can’t find or don’t know an accurate citation, they’ll make one up.
We still have attorneys who are filing briefs with cases that don’t actually exist. So this is the nature of the beast right now. If you’re not verifying the information that you get before you do something with it, that’s on you. That’s not on the AI. They’re telling you that these things still hallucinate. They’re working on it. They hope to have that fixed one of these days, but they’re not quite sure how that actually works. So it’s not like just going in and turning a dial or flipping a switch, the researchers are struggling to figure this out. And if it were that easy, they would have done it by now.
Neville Hobson Sure. Although what you just said does not come across at all in any of the communication you see from any of the chatbot makers, except in four-point type at the bottom: it can hallucinate, you need to do your verification. I don’t hear that clear warning shot, if you like, from anyone when they’re talking about all this stuff, and that needs to change. The message doesn’t come across anywhere near as strongly as what you were just saying.
Although the point does rear its head quite clearly, and it’s got to be repeated again and again: you’ve got to double-check everything. Well, not everything you run through an AI, but the results you get when you do a search. So it’s all very well talking about citation consistency from one month to the next; you’ve got to check that yourself. The question will arise, I think, for many: how do you do that? Would you use a chatbot to do it? Of course you would, because it’s a tool you’ve got in your armory, but then you’ve got to check that too.
Shel Holtz Well, I’ve got Google in my armory too. If I see it make an assertion with a citation, I’m going to go to Google and look it up. I’m not going to look up the URL that the chatbot presented. I’m going to type in the information about the report or the study or the white paper or whatever it was that’s cited and see if I can find it. And then, if I can and it’s the right one, I’m going to check whether the link is the same one the AI provided.
I did a white paper. I used Google Gemini’s Deep Research for the first pass, and it was loaded with citations. Where I spent my time wasn’t in doing the initial research; it was in validating every citation it provided before I passed the paper along to people. So that’s got to be part of the workflow with these things for now. I hope they fix it one day, but for now, you can’t just crank one of these things out and submit it to a judge, or use it in your medical practice, or pass it along to your boss. You have to validate that it’s all accurate.
Neville Hobson Yeah. By the way, didn’t you say once, a long time ago now I expect, that you didn’t use Google anymore? That it was only ChatGPT or Gemini?
Shel Holtz I switch back and forth based on which one is performing better on the benchmarks. I also find that the three primary models, ChatGPT, Google Gemini, and Claude, are better at different things, so I tend to use different ones for different tasks. But Gemini 3.0 is spectacular. This most recent upgrade that just came out, I think it was last week, wasn’t it? It’s amazing. So I have shifted most of my large language model work to Gemini right now. I still use ChatGPT for a few things. Of course, they’re going to come out with their own big upgrade; there’s some speculation it will be before the end of the year. So we’ll see where they land. But right now, I find Google Gemini is best for a number of things. And by the way, Nano Banana Pro, the image generator: if I were the product manager for Photoshop or for Canva, I’d be worried, because you can just upload an image into Nano Banana, tell it in plain text what you want done, and it does it. Pretty awesome. I’ve been playing with it. I could tell you what I did with it, but it’s spectacular.
Neville Hobson Okay, so yeah.
Shel Holtz And fast. You compare that to OpenAI’s image generator, which takes minutes. You’re just sitting there watching this gray blob slowly resolve. Nano Banana’s, boom, there it is.
Neville Hobson Yeah, I see a lot of people posting examples of what it can do. It looks pretty good. Going back to this, though: let’s talk a bit about verification, because I think some people, I don’t know how many it might be, maybe a small number, need some guidance on what to do. It’s quite an additional step, you might argue, in what some people see as the speed and simplicity of using an AI tool to conduct your research, or to summarize a PDF file, or whatever it might be. So what would be your tips for a communicator on building this into the workflow so that it becomes a natural part of what they’re doing and not a pain in the ass, frankly?
Shel Holtz Yeah, well, my tip is to build it into the workflow. First of all, it’s still going to save you time. For me to go through and validate the facts presented in a piece of AI research takes less time than it would to conduct the research and draft the white paper myself. And by the way, I want to be sure everyone understands: I do heavily edit the white paper for language and style. I rewrite entire sections based on how I would say it. I think that’s the point: you have to look at these as a first draft. This is why we have interns, right? To crank out first drafts of things and save us time. And I still think the metaphor of AI as a really smart intern, one who doesn’t go home at the end of the day, doesn’t need a paycheck, works 24 hours, and never gets sick, is an apt one.
But to just ignore the need to review these things and think it’s going to give you a finished product, that’s a mistake. And you need to come up with a workflow, define your own, but it has to include validation of the information that it provided. If it doesn’t, you’re setting yourself up for some real grief. I mean, if you share the results of that with somebody who is important in your career, in your life, and they make decisions based on it that turn out to be bad decisions because it was a confabulated citation, then that’s going to roll right back on you. So you have to build it into the workflow, just like any other workflow. This is the step that comes after the first step.
Neville Hobson I wonder, tell me what you think, whether this is significantly more concerning if you’re in academia, say, or working for a scientific firm, where peer-reviewed, citation-led research for medical breakthroughs or scientific discoveries typically takes months, if not years, to go through the process. What would you do in that situation, where people are relying on this, and it’s now emerging that academic papers in particular are becoming, what, untrustworthy? That to me is a pretty big deal if many people see it that way. I’m just curious how you’d discuss that with someone.
Shel Holtz I don’t think my guidance would change. There is an obligation to ensure that what you are sharing is accurate. And if you are using gen AI to produce some or all of a report, that obligation extends to fact-checking. I mean, hire an intern to do the fact-checking so that you have time to do other things. There’s a reason to have an intern. We hear this question: if AI can do what an intern can do, what will an intern do? And the answer may be: validate what the AI cranks out.
But the risk is so severe that this just needs to become a matter of routine. And especially in science, where these things can be translated into medicines and treatment protocols and the like, you don’t want to be responsible for people getting sicker or dying because you had a confabulated resource or a citation that you didn’t check before you moved on to the next step with this. And if the peer review of the document that you have created produces those errors, if the peers that are reviewing it find the fictitious citations or the wrong citations, it’s your reputation that’s on the line. No one’s going to blame the AI. They’re going to blame you. So your credibility is on the line.
One other point I want to make in terms of what I would recommend: I would go back to Gini Dietrich’s PESO model, paid, earned, social and owned, and recognize that the model hasn’t changed in the age of AI. If you want to be cited, don’t chase the shiny object of the latest report that says it’s reading this or reading that. The fact that it shifts from month to month means you need to be in all those places. Before AI, we were paying a lot of attention to the PESO model, and I’d hate to see it fall by the wayside as lazy people think they can get away with just doing one thing because AI reads this. Well, that’s this month. Next month, you’re toast.
Neville Hobson Yeah. Of course, I recall that many people I know now say, “I don’t need an intern anymore because I have an AI.”
Shel Holtz Yeah, well, then they’re either spending a lot of time validating what the AI produced or they’re putting garbage out into the world.
Neville Hobson I sense not a lot of time, actually. So this comes back to: you’ve got to put in the time. Some of the research work I’ve been doing recently reminds me of something I did, I guess, two weeks ago: checking the links in a report that cited this, this, and this. I would say of the 65 or so links I checked, 15 were 404s, or unknown, or even the browser errors you get when it can’t connect to something. No one had checked those. But I’m okay with that, because that’s why I’m here. That’s what I will do. And you’ve got to do it. I agree, you’ve got to do it.
Shel Holtz Well, exactly. Yeah, and the net is still a gain for you, the communicator. You’re still going to save time. You’re just not going to save as much as you think you will if you don’t have to do anything other than write a prompt. There’s more to it than that.
Neville Hobson Right. So, to conclude on that: we rang the alarm bell in my narrative intro about this report in Nature in particular, which flagged up all these fake citations. If you’ve got a report with lots of links in there and all sorts of things being said, you have to manually check each one. And that then comes
Shel Holtz Yes, yes you do.
Neville Hobson back to good old Google, probably. But it’s not just the tool, it’s the framework under which you do it. For instance, a minor thing: if I were doing that now, I would do it on a clean interface, a browser I’m not logged into, probably a different browser than the one I normally use, even a different computer if I really wanted to take it to the extreme. It gives you more confidence that your own persona, if you like, isn’t influencing anything, even unbeknownst to you. I’m not saying it is, but this gives you the best way of doing it, I would say; this is best practice. So we should write a best practice guide on this, perhaps. But it’s food for thought.
Shel Holtz It certainly is. And by the way, I think I said paid, earned, social and owned when I was running down what the letters in PESO stand for. The S is actually shared, which includes social but has a few other things in it. Go look up Gini Dietrich’s PESO model, folks, and you’ll find it.
Neville Hobson I think she did an update to this for the AI age. I seem to recall a lot of talk about that, as well as a tool for ChatGPT that you could use based on it, basically.
Shel Holtz She did. Yeah, she did. I believe she did. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #490: What Does AI Read? appeared first on FIR Podcast Network.
In the long-form episode for November 2025, Shel and Neville riff on a post by Robert Rose of the Content Marketing Institute, who identifies “idea inflation” as a growing problem on multiple levels. Idea inflation occurs when leaders prompt an AI model to generate 20 ideas for thought leadership posts, then send them to the communications team to convert them into ready-to-publish content. Also in this episode:
Links from this episode:
Raw Transcript:
Shel Holtz: Hi everybody and welcome to episode number 489 of For Immediate Release. This is our long-form monthly episode for November 2025. I’m Shel Holtz in Concord, California.
Neville Hobson: And I’m Neville Hobson in Somerset in England.
Shel Holtz: We have a jam-packed show for you today. Virtually every story we’re going to cover has an artificial intelligence angle. That shouldn’t be a surprise — AI seems to dominate communication conversations everywhere these days.
We do hope that you will engage with this show by leaving a comment. There are so many ways that you can leave a comment. You can leave one right there on the show notes at firpodcastnetwork.com. You can even leave an audio comment from there. Just click the “record voicemail” button that you’ll see on the side of the page, and you can leave up to a 90-second audio.
You can also send us an audio clip — just record it, attach it to an email, send it to [email protected]. You can comment on the posts we publish on LinkedIn and Facebook and elsewhere, announcing the availability of a new episode.
There are just so many ways that you can leave a comment and we hope you will — and also rate and review the show. That’s what brings new listeners aboard.
As I mentioned, we have a jam-packed show today, but Neville, I wanted to mention before we even get into our rundown of previous episodes: did you see the study that showed that podcasting is very male-dominated as a medium?
Neville Hobson: I did see something in one of my news feeds, but I haven’t read it.
Shel Holtz: I heard about it on a podcast — I don’t remember which one — but I found it really interesting because the conversation was all about equity. And I’m certainly not in favor of male-dominated anything, but podcasting is not an industry where there is a CEO who can mandate an initiative to bring women into a more equitable position in podcasting.
This is a medium — let’s face it, even though The New York Times and The Wall Street Journal and other major media organizations have jumped into the podcasting waters — where it’s essentially a hobbyist occupation. You and I started this because we wanted to, and the tools are available to anybody who wants them.
I remember when we started this, one of the analogies we used was trying to walk into a radio station and say, “Hey, I want to have an hour-long show every day on public relations.” You’d be laughed out of the radio station because there’s not an audience big enough to support that kind of content. But here, if you can find an audience, you can have a podcast.
So I don’t know how you go about making this more equitable, but I found that to be an interesting perspective.
Neville Hobson: Yeah, I agree. There are some podcasts I’ve listened to that are hosted by women — though, frankly, they’re few beyond the realm of female-oriented content. But there are a couple in our area of interest in communication. So they’re out there, but the majority, very much, are men.
Shel Holtz: Yeah. I mean, just in internal communications, there’s Katie Macaulay, and there are a lot of women doing communication-focused podcasts. Maybe if you’re going to look for somebody to make this a more equitable media space, it has to start with the mainstream media organizations that are producing podcasts — the New York Times and Wall Street Journals of the world.
Neville Hobson: Yeah, over here you’ve got The Times and a few others who have women doing this. They are there in the mainstream media orientation, but the kind of homebrew content that we started out with, I don’t see too many.
Shel Holtz: No.
Well, Neville, why don’t we move into our rundown of previous episodes?
Neville Hobson: Okay, let’s get into it.
So we’ve got a handful of shows. We’re actually recording this monthly episode about a week and a half earlier than we normally would. I think the reason for that, Shel, is something to do with U.S. holidays, your travel, and stuff like that.
Shel Holtz: Yeah, I’m going to be in San Diego next weekend, visiting my daughter and granddaughter because they’re not able to come up here for Thanksgiving. And then the next weekend is Thanksgiving weekend. So that’s why this is early this month.
Neville Hobson: Right. Okay, that explains it.
So, not too many episodes since the last one, but they’re good ones, though, I have to say.
Before we talk about those, let’s mention episode 485, which was prior to the last monthly. We had a comment.
Shel Holtz: We had two that we didn’t have when we ran down this episode in our last monthly episode. The first is from Katie Howell, who says, “Already reward return visits over one-off reach and the clever brands are catching up. If your brief still says ‘go viral,’ you’re chasing a metric that won’t help you keep your job. Repeat engagement with the right people is the proper goal. Less glamorous, miles more useful.”
And Andy Green says, “Good clarification over strategies, but you also need to recognize viral — also known as meme-friendly — is at the heart of effective communications. Also greater recognition of the impact of zeitgeist. Check out Steven Pinker’s latest book, When Everyone Knows That Everyone Knows.”
Neville Hobson: They were on LinkedIn, I think, weren’t they? That’s where most of them come in.
So, to the ones we did: there’s the October monthly, published on the 27th of October. The lead story we focused on in the headline was “Measuring sentiment won’t help you maintain trust.” Among the five other topics was an interesting one: Lloyds Bank’s CEO and executive team learning AI to reimagine the future of banking with generative AI.
We talked about case studies in a piece that described, “Conduct, culture, and context collide: three crisis case studies,” reviewed in Provoke Media.
Shel Holtz: Yeah, they did 13 or 14 case studies. It was a very interesting article, so we highlighted a couple. And there was more content there too.
Neville Hobson: Episode 487, we published on the 5th of November. This was a really interesting discussion. You and I analyzed and discussed Martin Waxman’s LinkedIn post about slower publishing, deeper thinking, better outcomes — a pivot he’s made with his business and his newsletter.
He left a number of comments, but on the show notes post he left a long comment that was great. We don’t normally get comments on the show notes, so thank you, Martin.
Shel Holtz: Yeah, there were several comments from Martin. I’m going to run through these. He said, “Thank you for having me as a virtual guest once-removed on the episode, Neville. I just listened today and enjoyed your and Shel’s take on my post. You gave me a fresh perspective and I was honored and thrilled to be a conversation topic. And thanks to both of you for holding up the comms podcasting torch all these years and having a lot of fascinating and insightful ideas to share.”
You replied. You said, “Thanks so much, Martin. It was our pleasure. Your post struck a chord with many of us who feel the pace accelerating. It was a great springboard for our discussion, and I’m glad our take offered something new in return. Slowing down to think more deeply about how we use AI feels like the most human move we can make right now.”
But Martin also posted on his own LinkedIn account — and this isn’t short, so bear with us, everybody, as I read through this because I think it’s worth sharing:
“As the first and longest-running communications podcast — and one I’ve been listening to for a long time — this meant a lot. As I listened and heard Shel and Neville’s take on my observations, I gained a new perspective, one I didn’t see when I was writing and revising my post.
“Something I didn’t mention out loud is that it’s been getting more and more difficult to come up with fresh ideas on where AI fits in marketing and communications and the various implications around that, the kind that inspire a person to write. Like social media, it feels like we’ve tipped past the point of saturation.
“As Shel said, we’re now getting drenched by the all-too-familiar commentary and quasi-expert advice swirling around our feeds. That certainly doesn’t diminish the utility of AI or using it where it helps. And I appreciate Shel’s view on how AI helps speed up doing the good-enough tasks that are inherent in all work, to concentrate on the things you want to spend more time on.
“I could also relate to Neville’s comments about saying no to projects that don’t excite you so you can focus on the ones that do. And yes, the three of us are all fortunate to have reached that stage in our careers when we have a little more freedom to pick and choose. I also realize that many people aren’t in that situation.
“As someone who has spent my entire career writing, it’s exciting and a bit frightening to wonder what I’m going to write about next. Yet there’s energy in uncertainty. So thank you to Shel and Neville for having me back as a guest, albeit one who didn’t have to press record.”
Neville Hobson: Really, really super comments that Martin left. Thank you, Martin.
And then our final one before this episode, 488, we published on the 10th of November. I enjoyed this discussion a lot — about Coca-Cola’s generative AI Christmas video that they have done before, but this one got rid of all the people; it was full of bunny rabbits and sloths and all sorts of stuff and those red trucks.
There were plenty of opinions out there, ranging from “What a creative and technical masterpiece this is” to “Utter AI slop.” So we were quite impressed with it and stood back to look at what they were doing rather than being judgmental in any shape or form. But there were plenty of comments, and we had at least one we should mention, right?
Shel Holtz: Yes, from Barbara Nixon, who said, “Thanks for sharing this. I’ll use it as a basis of discussion in my PR writing class next week.”
Neville Hobson: That’s cool. So that’s the content leading up to this one. And of course, now we’re in the November episode that kicks off the next cycle of reporting for the next edition, when I can talk about what we did since this edition.
Shel Holtz: That’s right. And I also want to let everyone know that there is a Circle of Fellows coming up. Normally I’d be reporting on it after the fact, since we usually record toward the end of the month, but it hasn’t happened yet.
It is coming up on November 25th, Tuesday instead of Thursday, because Thursday that week is Thanksgiving. So it’s happening at 6 p.m. Eastern Standard Time on Tuesday, November 25th. This is episode 122, and the topic is “Preparing Communication Professionals for the Future.”
It’s a larger-than-usual panel — there are five Fellows instead of four. It’s going to be a good discussion. I think the future — obviously AI factors in here, I think quantum computing does too, as we’re going to talk about shortly in this episode — but also changes in business trends. The zeitgeist is changing, and politics is going to have more of an influence on business. All of these are things that I’m sure we will be discussing.
We look forward to having you join us for that. Of course, if you can’t be there to watch it in real time, it is available both as a video replay on YouTube and as an audio podcast that you can subscribe to right here on the FIR Podcast Network.
And we will now jump into our content for the month — but not until we run this ad for you.
Neville Hobson: So, one of the most interesting shifts happening inside large organizations right now is the move to combine communication and brand under a single leader. We’re seeing this across companies as varied as IBM, GM, Anthropic, and Dropbox, and the trend is accelerating.
According to research cited by Axios, CCO-plus roles — where communication leaders take on brand or marketing responsibilities — have risen nearly 90% in recent years.
What’s driving this? The short answer is volatility, says Axios. AI is changing how people discover what a company stands for, and reputational storms seem to ignite faster and with far greater consequences. A marketing decision that once would have sparked a debate in a meeting room can now become a political flashpoint within hours. That forces the question of who should really own the brand narrative.
Communication leaders are increasingly being seen as the natural fit. They understand stakeholders. They have a risk mindset. And they are often the ones who know how to navigate the cultural and political sensitivities that shape reputation today.
In other words, this is not just about messaging. It’s about trust, judgment, and the ability to connect what a company says with how it behaves. There is still a need for specialist marketing functions, but for many companies, brand stewardship is shifting toward the people who are closest to reputation.
And in a world where AI can bend or reinterpret a narrative in seconds, bringing communication and brand together under one trusted voice feels less like a structural tweak and more like a survival strategy.
So the bigger question for us is what this means for the future of the communication profession. Are we seeing the emergence of a new kind of leadership role — or simply a correction to reflect the reality that brand and reputation have always belonged together?
Shel Holtz: That’s a very interesting trend, and I don’t disagree with it in general. If you look at the big picture, it does make sense. Public relations is all about reputation; it’s all about maintaining relationships with the various stakeholder audiences.
So, as a communicator, you tend to have a big picture. You understand what the reputation is among investors, among the local communities in which your organization operates, among the media, for example, among your customers.
Marketing is all about driving leads for sales in most industries, and they don’t necessarily have that big picture. So it makes sense. And to bring marketing into the communication fold means that you get the benefits of the things that marketing is exceptional at — and branding is one of those things.
Most communicators aren’t involved in developing the trademarks for the organization and the logos and the like — that tends to be marketing, and for good reason. But to have that within the purview of communications enables that chief communication officer-plus to ensure that what’s coming out of that operation aligns with and is consistent with the things that we know drive the reputation of the organization.
You can find some gotchas maybe in the outputs that they’re developing that they wouldn’t have thought of.
That said, I know in my industry, which is commercial construction, the marketing department is not doing traditional marketing. There’s not a lot of effort to drive leads. The relationships with prospective clients are driven through other means. It’s getting to know people through industry contacts and the like. It’s building those personal relationships with developers and owners and the like.
I’ve just celebrated my eighth anniversary where I work, so I’ve seen this in play for long enough to understand that it’s right and it works very, very well.
In my company, the marketing department is also the steward of the brand, and I am fine with that because I’m mostly doing internal communications. I’m also responsible for PR, as far as it goes — media relations and the like — but I don’t have that relationship with the client base. Not at all. It’s rare that I meet a client. Usually I’ll shake hands at a groundbreaking or something like that if I’m out covering it, but by and large, this is something that the marketing department does.
So I’m inclined to say I agree with this, but it depends. And I think there are probably exceptions, and my industry is probably one of them. I’m part of a group called the Construction Communicators Roundtable — 18 or 20 commercial construction companies represented there — and I get the impression that it is the same with all of them. So this may be an industry-by-industry thing.
I don’t disagree with it, but I do think it depends.
Neville Hobson: “I think it depends” is definitely the start point to the discussion on this, I would say. My thought when I read the article — and the reason I included it in the topics for this episode — was precisely that: it does depend.
I’m not sure it is strictly industry-by-industry, meaning that this industry is entirely this way and this one isn’t. It’s probably a mixture. But there are some compelling reasons, I think, why it makes sense to do this even with the argument you’ve made for not doing it, let’s say.
For instance, one interpretation I have from Axios’s research is that the argument is: brand is no longer just a marketing asset. It’s a reputational construct shaped by every stakeholder interaction. That squarely leans toward understanding the impact on reputation — and communicators, not marketers, are the ones equipped for that.
It also speaks to the need for a trusted, politically aware leader. This combined role, according to Axios, is shaped by the reality that brand crises are increasingly political. Organizations want leaders who bring judgment, sensitivity, and crisis literacy. And that, in my view, leans much more into the communication person than the marketing/brand person.
And the one I think that is most interesting is the broader reinvention of the communication function. Sorry, marketing folks — this is about communication. The trend echoes the ongoing elevation of communicators as strategic partners rather than support functions, reinforcing the argument that communication is increasingly a governance role, not just an executional one.
Now, that argument would apply to marketing too, but not in quite the same way. Taking into account all of that — particularly the connection with reputation, the political awareness, and I like this term “crisis literacy,” fair enough, it’s a good way of describing it — this is more likely to fit in the bucket where the communicator sits than the marketing one.
And by the way, I’ve seen a number of people with job titles combining communication and brand. And I saw someone recently on LinkedIn who is a Chief Communication Officer and Director of Brand and Reputation, which plays exactly to Axios’s point.
So yes, “it depends,” but I think there’s a compelling reason why, if you’ve got to pick one person, it should be the communicator.
Shel Holtz: Yeah, and again, I don’t disagree. And still I am untroubled by the fact that marketing owns the brand where I work. And I should clarify: they’re not engaged in traditional marketing. This is not a marketing department like at, say, Procter & Gamble or Coca-Cola. They’re engaged primarily in business development.
So they’re putting together the proposals, they’re responding to the RFPs, they’re preparing the members of the team to go out and be interviewed by the owner or the developer who’s selecting the general contractor. So it is B2B. And, I mean, if they’re not concerned about the organization’s reputation, nobody is.
So this is why I say it depends.
The other point I will make is that even though we are not part of the same reporting structure, we’re pretty well joined at the hip. The VP of Marketing and I talk all the time. He’ll call me into his office to run stuff by me, I’ll run stuff by him. We meet regularly. We have a marketing director right now we are working with incredibly closely to develop a year-long recruiting campaign. We’ve won a ton of work and we need to staff up to support that work.
We’re going to take advantage of her expertise in branding and in marketing to recruits, and we’re going to take advantage of our expertise and the things that we do well. And that collaboration is probably going to produce a much better result than if it had just been one of us or the other of us.
So at the end of the day, I don’t think it matters who has the highest title, as long as everybody’s working together, they’re aligned, and they’re working toward the same goals. So again, I don’t disagree with the sentiment and the underlying foundation of the point that was made in this piece, but I think there are organizations where that is being done without having the communicator necessarily at the top of the food chain.
Neville Hobson: That’s the place where I think the communicator should be — which, of course, plays to the decades-old desire expressed by many in our profession that the communicator needs a seat at the top table.
I guess the concluding point I would say is: anyone listening to this discussion who occupies that joint function and would care to share his or her thinking about all of that — we’d love to hear a comment.
Shel Holtz: Yeah, a seat at the table, yeah.
We would always love to hear comments.
If you feel like AI is sucking all the oxygen out of the room, you’re not wrong. It seems like it was just last week we were talking about blockchain and the metaverse and a slew of other technologies. But while we’ve been fine-tuning prompts and governance, another technology has been quietly moving toward the comms agenda — and that is quantum computing.
The BBC recently framed it as potentially as big, if not bigger, than AI. It’s time to start paying attention to quantum computing and how it matters to communicators.
A quick primer: classical computers process bits, zeros and ones. Quantum computers use quantum bits, known as qubits, which can be zero and one at the same time. That’s called superposition.
If you read the book or watched the Apple TV series Dark Matter — I did, it was really good — you know about superposition, and it has been the foundation of a lot of other science fiction: this idea of being able to be in two places at the same time, quantum superposition.
Second, qubits can influence each other through something called entanglement — a phenomenon where two or more qubits become linked, sharing a single quantum state, so they cannot be described independently even when separated by vast distances.
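The superposition and entanglement ideas above can be sketched in a few lines of code. This is a minimal, illustrative model (not from the episode) using nothing but amplitudes and measurement probabilities:

```python
import math

# A qubit's state is a pair of complex amplitudes (alpha, beta).
# Measuring it yields 0 with probability |alpha|^2 and 1 with |beta|^2.
# An equal superposition captures the "zero and one at the same time" idea:
alpha = beta = 1 / math.sqrt(2)
p0 = abs(alpha) ** 2  # probability of measuring 0, ~0.5
p1 = abs(beta) ** 2   # probability of measuring 1, ~0.5

# Entanglement: a two-qubit Bell state, with amplitudes over |00>, |01>, |10>, |11>.
# Only |00> and |11> carry weight, so the two qubits' measurement outcomes
# always agree -- the state cannot be described one qubit at a time.
bell = [1 / math.sqrt(2), 0.0, 0.0, 1 / math.sqrt(2)]
p_agree = abs(bell[0]) ** 2 + abs(bell[3]) ** 2  # probability the outcomes match, ~1.0
```

Real quantum hardware adds noise, gates, and error correction on top of this, but the math of amplitudes squaring into probabilities is the core of it.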
In some problem classes — chemistry, simulation, optimization, factoring — this enables speed-ups that make the impossible suddenly possible. The machines we have today are still noisy, error-prone. But the security world is acting as if a capable quantum machine will arrive within the planning horizon, which is why standards bodies and platforms are shifting now.
You’ve already seen early signals in consumer tech: warnings from cybersecurity experts and quantum-resistant messaging from big platforms. Quantum-resistant messaging — also called post-quantum cryptography — uses new encryption algorithms to protect communication from both current and future quantum computers. It safeguards data with mathematical problems believed to be hard for classical and quantum computers alike, unlike current algorithms, which a powerful enough quantum computer could break.
In fact, I’m reading a really interesting book right now. It takes place about 150 years in the future, and everything that we today thought was encrypted and nobody would ever see — they’re seeing it all because they have access to quantum computing.
These aren’t just niche issues. They tie directly into how you tell stories, how you prepare for crises, and how you work.
So what does this mean for communicators beyond asking IT if we’re on top of it? I’m going to run through three buckets, and then we’ll tie in how quantum and AI overlap, because that’s where things get especially interesting.
First, storytelling and public understanding. Quantum is famously hard to explain, which makes it vulnerable to hype and confusion. Your job is to translate it without overselling it. “Quantum-safe” doesn’t mean “quantum-proof,” for example, and timelines remain uncertain — we don’t know when you’re going to be able to go to your local Best Buy and get a quantum computer.
You’ll want to build narratives now that help your audience support the idea that your organization is looking ahead, not getting caught flat-footed. Use everyday language. Say, “We’re updating encryption today to protect the data of tomorrow.” That works better than “We’re quantum resilient.” You’ll gain credibility when you help people understand what’s changing and why they should care.
Second, this is all about crisis preparedness and trust. If your organization holds long-lived sensitive data — health records, intellectual property, government contracts — then you need a communications plan for cryptographic agility. That means plain-language FAQs explaining why you are updating encryption, updates to stakeholders as you migrate to approved standards, and scenario planning for legacy data exposure.
Quantum computing introduces a new dimension of risk: the idea that what you publish or promise today could be decrypted or exposed years later. In a crisis, you’ll need to be ready to say, “We anticipated this risk, and here’s what we did.” That anticipatory positioning goes a long way toward preserving trust.
Third, it’s about how communicators can use quantum — and quantum plus artificial intelligence — in our work. Eventually, you’ll have new tools. For example, quantum computing may be able to provide far more advanced modeling of message flows, audience networks, and sentiment behavior, letting you identify optimal outreach paths or refine campaigns under dynamic conditions.
You could simulate scenarios in complex environments more quickly, refining your messages in a what-if matrix classical tools can’t easily handle. These scenarios might include things like stakeholder cascade effects, social media virality, and supply chain disruption.
And as quantum key distribution and quantum-resistant encryption mature, you’ll be in a position to tell audiences, “Our channels use the latest quantum-secure messaging,” which becomes a differentiator from your competitors.
Then there’s the overlap with AI. Quantum computing will amplify AI’s capabilities, helping it crunch deeper patterns faster and handle volumes of data plus complexity that classical systems struggle with. For communicators, that means the analytics layer you rely on — for sentiment, for influence mapping, for risk modeling — will evolve.
AI plus quantum means faster insights, more complex scenario modeling, and new ways to anticipate issues before they explode. So when you describe your comms strategy, you might say, “We use advanced modeling powered by AI today, and we’re tracking quantum-enabled tools so we’re positioned for the next wave.”
The fact is, quantum isn’t just a side story to AI — it’ll reshape AI. Research indicates that quantum computing and AI together massively increase computational speed and breadth of analysis. For example, quantum can remove some of the bottlenecks in data size, complexity, and simulation time that limit today’s AI systems.
For you as a communicator, that means three practical things.
First, what you pitch as “AI-enabled” today will evolve into “AI-plus-quantum-enabled,” and part of the story you tell stakeholders is, “We’re future-proofing so we don’t fall behind.”
Second, monitoring of reputational risk must extend to both AI misuse and quantum misuse — encryption break, advanced surveillance, things like that. The combination raises the bar for your “what could go wrong” list.
And third, your metrics and narrative signals will shift. When AI and quantum intersect, you’ll need to help people understand not just faster insights, but insights from a new class of computing. That means simplified metaphors and careful framing. The message no longer just flows faster — the infrastructure itself is changing. If AI rewrote the message, quantum will test the envelope it travels in.
You don’t need to wait until quantum has fully arrived. You need to start telling that story now. You need to show that your organization is looking ahead, educating stakeholders, and building trust today so that when the change arrives, you’re not scrambling.
Neville Hobson: Well, that’s heavy stuff, Shel.
It’s interesting how Zoe Kleinman, the BBC journalist who wrote this piece, started her article. She says, “You can either explain quantum accurately or in a way that people understand, but you can’t do both.” So I think this is very much in the “accurately” bucket, this discussion.
Shel Holtz: Isn’t it, though? I strive for accuracy.
Neville Hobson: Yeah, and she notes as well, it’s a fiendishly difficult concept to get your head around. I couldn’t agree more. I’ve tried to thoroughly understand this — and maybe I should get rid of the word “thoroughly” because I can’t thoroughly understand it. I need to understand the bits that matter.
So to me, on the one hand I’m thinking, “Fine, this has not arrived yet,” but your point about “get prepared” is a valid one. Although I wonder how many people are going to say, “Well, it hasn’t arrived yet, so what are we going to do? How am I going to do this?” That’s where communicators come in, by the way.
But I think she gives a great example that you really can grasp, talking about how quantum computers could one day effortlessly churn through endless combinations of molecules to come up with new drugs and medications — a process that currently takes years and years using classical computers.
She says to give you an idea of the scale, in December 2024 Google unveiled a new quantum chip called Willow, which it claimed could take five minutes to solve a problem that would currently take the world’s fastest supercomputers 10 septillion years to complete — that’s 10 with 24 zeros after it.
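As a rough back-of-envelope illustration (using the figures as stated in the episode; the claim is Google's, and this is arithmetic, not a benchmark), the implied speedup works out like this:

```python
# Google's Willow claim, per the episode: 5 minutes for a problem that would
# take today's fastest supercomputers 10 septillion years (10 with 24 zeros).
years = 10 ** 25                      # supercomputer runtime in years
minutes_per_year = 365.25 * 24 * 60   # ~525,960 minutes in a year
supercomputer_minutes = years * minutes_per_year
willow_minutes = 5

speedup = supercomputer_minutes / willow_minutes  # on the order of 10**30
```

A speedup factor around 10 to the 30th power is why these claims are so hard to intuit, and why careful framing matters when communicating them.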
I mean, just thinking about the number, you cannot imagine how long that would be. The sun would probably have died before it gets to it. This would do it in five minutes.
So the article then talks about what this paves the way for — personalized medicine, all that kind of stuff.
I don’t think we’re at the stage yet where you could equate this to, “Okay, in your average business, all the business processes they do will be materially impacted by this in a very powerful way.” We’re not there yet, because you can’t explain it like that, I don’t think — hence these very big-picture examples.
Everything I read about quantum talks about this: personalized medication, chemical processing, quantum sensors to measure things incredibly precisely. That’s all coming. It’s not here yet.
So it’s interesting. The examples they give are all wonderful, I have to say, but the mind boggles. My mind certainly does, when you look at so much information on this that you wonder: what on earth are you going to pay attention to in order to get a handle on how it’s going to affect my industry, my company, my job, how we live, my family — all these things? No one’s got that yet, and that’s probably what people want to know — but you can’t yet.
Shel Holtz: No, but it’s close enough that we need to start preparing for it and we need to start communicating about it, especially if you’re in an industry that is computing-intensive in its work. And I’m not talking about customer relationship databases and things like that; I mean in your R&D, for example. And certainly the cryptographic implications are severe on the risk side.
So being ready for that now, rather than scrambling to get ready once it’s actually here, is, I think, an imperative.
You do need to be a physicist or physics-adjacent to really understand this. But I’ll be honest: science has never been my thing. Science and math were my worst subjects in school. The humanities were where I rocked. And I struggle understanding the zeros and ones in fundamental computing — the opening of the gates and all that.
But you know what? I don’t need to know how my carburetor works in order to drive my car. The fact is that these tools are coming, and understanding how they work or not, people are going to be able to use them.
And as I say, it’s close enough. It’s probably within the next 10 years that companies are going to be able to buy quantum compute time, if not buy a quantum computer, that we really need to start thinking about it. We really need to start preparing for it, especially from a security standpoint.
Neville Hobson: Yeah, I get that. I think, though, that people — communicators, this is our area of interest and focus — would need to know: how are we going to do all this when so much of it is theory?
They’re talking about — I’m just looking at the piece here — the detail on how to break current forms of public key encryption. Hot topic: the security of information. It says here that’s awaiting a truly operational quantum computer, which is years away. But as the article notes, quoting a cybersecurity expert, “The threat is so high that it’s assumed everyone needs to introduce quantum-resistant encryption right now.” That’s not the case. So there’s probably a lot of hype.
Although it mentions earlier — and I think you might have mentioned — that there’s even more hype about AI. So this was the king of hype before AI emerged.
The prediction I’m reading is that an operational quantum computer could be around the year 2030. So that’s five years away. Okay, in that case, now is the time to get prepared for this, then.
Shel Holtz: That’s pretty fast. And there are operational quantum computers in research labs right now. They’re not commercially viable yet, but as you say, the projections run anywhere from five to 15 years. That’s fast; that’s soon.
When we were talking a lot about the metaverse, we were saying the fully operational metaverse was 10 years away — you need to start thinking about that now. Same thing here.
Neville Hobson: Did you notice the concluding paragraph? This is actually where it kind of fits in with the current status of alarm and concern from a political point of view about what certain countries are up to — China, which it calls out as an example.
It says GCHQ — the UK’s signals intelligence and cyber security agency — says it’s credible that almost all UK citizens will have had data compromised in state-sponsored cyber attacks carried out by China, with that data stockpiled for a time when it can be decrypted and studied, and you need a quantum computer for that.
For instance, the economic headline in the UK right now — the unexpected dip in GDP — is attributed specifically to the cyberattack on Jaguar Land Rover, the automaker, which cost nearly £2 billion in losses and compromised the company and its supply chain.
So this brings it home to you: what are they doing with the data? They can’t do much with it until they’ve got the computing power to be able to. So these things add to… I’m not sure it really adds to understanding — it adds to confusion, adds to worry, probably.
So it’s helping people in organizations, in this context, understand why we need to be prepared for this. And it needs to be presented, I think, in terms they can more readily grasp than is currently the case in most of what I’ve seen written about quantum computing.
This is a good article, by the way, and I think Zoe Kleinman did a really good job. I’ve read another article — I think it was from Microsoft — where you truly need to have a degree in advanced physics just to understand the article. These are not designed for your average Joe to grasp. There’s a gap.
Shel Holtz: Absolutely. But I think the role of the communicator here isn’t to help people understand how quantum computing works any more than it is with classical computing. Our job is: what are the benefits and what are the risks? What do we need to prepare for? Where do we need to start building that foundation so that when it arrives, we’re ready and not suffering consequences or falling behind our competitors?
So I think that’s the role of the communicator: to say, “Look, you don’t need to understand how it works. These are the things that it’s going to be able to do, and these are the implications for us and our business and our reputation and our competitiveness.”
Neville Hobson: So I see an opportunity here for someone like Lee LeFever to come up with one of his really cool videos that explains in simple terms what quantum computing is.
Shel Holtz: I’ve got to go find myself a good explainer video — see if there is one out there that does a really great job of it. There probably is. Maybe Lee has, for all I know.
Neville Hobson: So, let’s continue with another computing topic. It’s not really connected to quantum, but it’s a similar theme: we’re going to talk about vibe coding and what it means for communication leaders.
Every so often a piece of technology comes along that seems small on the surface but signals a much bigger shift underneath. Vibe coding is one of those moments.
On paper, it sounds like a technical trend: using AI to build software by simply describing what you want in natural language. No coding, no syntax, no engineering background needed. You just talk, and an AI generates a working prototype. Sounds wonderful.
In early November, it was named Word of the Year by Collins Dictionary. Of course it’s two words, but who’s counting? Anyway, it was chosen to reflect the evolving relationship between language and technology and how AI is making coding more accessible to a wider range of people.
This is not a coding story; it’s a future-of-work, future-of-skills, and future-of-organization story.
What makes this interesting for us is not the code; it’s what happens when anyone in an organization can create digital tools on the fly. A business analyst can build a workflow. Someone in HR can automate a process. A communicator can sketch out an app for an event or a campaign — all without waiting for IT.
Suddenly the boundary between people who solve business problems and people who write software starts to blur.
This has real implications for culture and communication. It empowers people in new ways, but it also introduces new risks. AI-generated code is fast, but it’s not always secure, compliant, or ready for production — or even necessarily working properly.
And as we know, when technology becomes more accessible, organizations need a stronger narrative on how to experiment safely, what the guardrails are, and when creativity gives way to rigor.
There is also a shift in skills. According to Cognizant, in the age of AI the most important capability is moving from problem-solving to problem-finding — being able to frame the right questions, articulate needs clearly, and work collaboratively with both humans and machines. That is a communication skill at its core.
So the story here isn’t about developers being replaced or apps being magically created. It’s about how work changes when AI becomes a conversational partner.
And it raises a bigger question: if every team can now build its own tools, what role do communicators play in shaping culture, governance, and the shared understanding of how organizations innovate? Big questions there, Shel.
Shel Holtz: It is a big question. There are big questions there.
I’ve been doing a lot of reading about vibe coding and listening to a lot of podcasts that talk about it. I have been so excited about it, I’ve been working on a proposal — completely unsolicited, no one at my company knows it’s coming — but it is for a vibe-coding training program for project engineers: the entry-level people on the building side of our industry.
Because right now, if they need something — say a dashboard, an app that creates a dashboard that pulls data in from various sources, or that allows you to plug data in and produce charts and graphs and the like — they have to open a ticket, and IT has to create it if they have the time. They’ll prioritize based on the urgency of the things that they’re working on, and you may not get what you want, and it may take a long time.
Now you can just do it yourself.
So I’m very excited about this, especially given the threat that entry-level jobs across the business world are facing from AI. They need to be redefined, because entry-level people have to be part of the mix: how do you develop those who are going to move into higher roles in the organization if they don’t start somewhere?
So it’s a rethinking of what those roles are, and enabling these people to create their own apps is one of them.
But now they would still have to submit that app for approval, because if you don’t have expertise in coding you may have done things that you’re unaware of that can create certain risks or problems, or it may stop working at some point. All types of things could go wrong.
I think vibe coding without any foundation in coding is fine for some very, very simple things. I think the more complex it gets, the more of that foundation you need.
While you were talking, I went and looked at what Christopher S. Penn had to say about it, because I’ve heard him talk about it a number of times both in his writing and in the video podcasting that he does.
He thinks that you do — if you’re going to be doing this in a serious way — need to have an understanding of the software development life cycle.
At a minimum, this is what he says: you have to be able to provide detailed instructions and guardrails to the machine. You have to know what you’re doing to prevent poor results; a vague code request is like asking an AI to write a novel from almost no information. That would just result in slop, right? Same with code. You have to give it a precise enough series of prompts to get the output that you want.
You need to know not only when the solutions are right or wrong, but also whether they’re right or wrong in the context of the work.
He says best practices for vibe coding require a structured approach that relies heavily on planning, which maps to the Trust Insights Five P Framework — which is really good, go look it up at trustinsights.ai.
This structured method is essential to vibe-code well and includes steps like: spending three times as much time in planning as in writing the code, creating a detailed product requirements document and a file-by-file work plan, and integrating security requirements and quality checks.
And then, of course, if it doesn’t work right the first time, you can keep iterating — but you should have some understanding of debugging and know somebody who does in order to get it to do exactly what you want.
So I think for very simple stuff, yeah, you can just tell it, “Please create me an app that does X, Y, or Z.” But the more complex it gets, the more of a grounding in coding you’re going to need.
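Penn’s planning-heavy approach can be sketched in code form. This is purely illustrative: the function name and section wording below are assumptions of mine, not a template from Penn or Trust Insights, but it shows the difference between a one-line request and a structured prompt that carries requirements, a file-by-file work plan, and security checks, with Shel’s ask-me-questions pattern tacked on the end.

```python
# Hypothetical sketch: assembling a structured vibe-coding prompt instead of a
# one-line "write me an app" request. Section names and wording are illustrative.

def build_coding_prompt(goal, requirements, file_plan, security_notes):
    """Compose a detailed prompt from a product-requirements outline."""
    sections = [
        f"Goal: {goal}",
        "Requirements:\n" + "\n".join(f"- {r}" for r in requirements),
        "Work plan (file by file):\n" + "\n".join(f"- {f}" for f in file_plan),
        "Security and quality checks:\n" + "\n".join(f"- {s}" for s in security_notes),
        # Shel's closing pattern: invite clarifying questions before any code.
        "Before writing any code, ask me questions one at a time "
        "that you need answered.",
    ]
    return "\n\n".join(sections)

prompt = build_coding_prompt(
    goal="A dashboard that charts project data pasted in as CSV",
    requirements=["runs in the browser", "no external data sources"],
    file_plan=["index.html: layout", "app.js: CSV parsing and charts"],
    security_notes=["no eval()", "sanitize all pasted input"],
)
print(prompt)
```

The point isn’t the code; it’s that the planning artifacts Penn describes become the raw material of the prompt, rather than an afterthought.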
Neville Hobson: So that’s where guardrails and guidance and policies and procedures come into play.
But you know what’s going to happen — we saw it with ChatGPT — is that people are going to get hold of the tools to do this and just go ahead and do stuff themselves. That’s what’s going to happen, with the risks inherent in doing that for everything you’ve outlined.
I look at what I’ve done that I suppose you could call vibe coding. What I did on my websites, which run on Ghost. It’s not like WordPress, where customizing a theme is dead easy: you build a child copy and away you go. Not with Ghost; you’re into the code.
So I used a combination of tools, including the excellent VS Code tools from Microsoft, but also a tool called Docker — astounding, running on Windows. But my “coding partner” was ChatGPT-5. I prompted it with what I wanted to do, and it wrote the code.
We tested it and almost nothing fell over. There were a few things, like some CSS for styling, where dependencies didn’t work for some reason; when something did fall over, we went back and fixed it.
I was amazed, constantly, by being able to talk to the chatbot in plain English about what I wanted to achieve, and it then proposed how we find the solution to doing that, and then it wrote the code.
I couldn’t do that because I don’t know the code. I would have had to hire a developer or, if I were doing this properly in an organization, file a ticket to get support. I did this myself over a weekend, and I’m still truly amazed. It was an offline copy — all working, everything worked. We packaged it, uploaded it to the Ghost server, enabled it, and the live site just worked. All the changes were perfect; nothing was wrong by that point.
Now, that’s not necessarily the same as building a website for an event, or an app for an event. That would be interesting to see how that would work. So there are levels to all of this.
I think it’s about finding the balance: rigid guidelines you have to follow if you want to build X for your role in the company, and looser guidelines, still guidelines, if you want to do something like a website or an app.
This is designed for a world where content creation and data analysis are becoming everyday skills, as is software creation. Yet I don’t disagree with your take on training at all, nor with what you quoted Chris Penn saying — they make complete sense to me.
But the reality, particularly in enterprise organizations and even more so in small- to medium-sized businesses, is: you’re just going to give it a go and see what happens. Risks and all.
Shel Holtz: Sure. Regardless of whether you’re trying to do something very simple, where you don’t need an understanding of the software development life cycle — you can just tell the AI, “Write me this app,” and if it’s simple enough you’re probably going to get something serviceable — or something more complex.
For the more complex stuff, you need to have a deeper understanding of the output you want, and you have to spend a lot of time planning so that you can give it the right information. It’s not that you sit back in your chair and say, “I think A, B, and C.”
You can work with the AI to develop that stuff, of course, but one thing I do at the end of virtually every prompt — not the really simple stuff like “How many Oscars did somebody win,” but the more complex prompts — is add, “Ask me questions one at a time that you need answered before you give me your answer.” Because it’s going to think of things that I haven’t thought of.
So yeah, I think my point is that whether you’re doing something simple that doesn’t require a lot of upfront work or you’re doing something more complex that does require a lot of upfront work, this is going to speed up the development of software immeasurably and have a big impact on how this gets done and by whom. That represents significant change in the current structures in organizations, I would say.
Neville Hobson: You’ve got it. So this is worth paying attention to as well. Get used to the phrase “vibe coding,” I would say.
Dan York: Greetings, Shel and Neville, and our listeners all around the world. This is Dan York, coming at you today from Los Angeles, California, where I have been attending an event by an organization called the Marconi Society that has been looking at internet resilience: basically, how do we keep the internet functioning in the light of disruptions of various forms?
Great conversations, great thinking. I look forward to talking a bit more about it in the future when there are some things that are useful to share with our listeners, but in the meantime I want to talk about chat. Specifically, two different platforms. But first, if you’ve been paying attention for a while, you know the chat systems: WhatsApp, iMessage, Telegram, Signal, whatever.
They’re all their own thing; you can only chat with people in there. Well, in the European Union, the EU passed something called the Digital Markets Act, or DMA, which requires chat systems that operate in the EU to offer third-party integration in some way, so that other chat systems can interoperate with them.
So we’re starting to see a bit of what this could be with the announcement from Meta that they will very soon be launching third-party integration between WhatsApp in Europe and two other messaging systems called BirdyChat and Haiket. Now, if you haven’t heard of them, neither have I, and neither had the writer at The Verge who was putting this together.
But the point is, they’re trying to make it so that chat systems can interoperate. We’ll have to see what this means. The important part to me was that WhatsApp is actually doing this in a way that ensures the end-to-end encryption continues to work, so that your data can’t be seen by other people when you’re using the messaging system, which for me is critical for the privacy and security of any kind of conversation I’m having.
So that is preserved in the system. We’ll have to see where that goes. But speaking of end-to-end encryption: X, formerly known as Twitter, which I don’t even use anymore. I was pleased to see that they are rolling out a new system to replace what we’ve always called DMs, or direct messages.
They’re rolling out a new system that they call, brilliantly, Chat. It will have video calling, it will have end-to-end encryption, and it will have other things. It’s rolling out first on iOS and the web, and then it will be coming to Android, et cetera. So it looks interesting; there’ll be a new messaging component there.
So for those who are still using X, stay tuned: your messaging system may be changing. Moving to something completely different: Mozilla announced an AI window for Firefox. There’s been a slate of AI browsers; I think I talked about some last month, and there have been various other announcements. This isn’t available yet.
But it appears to be an interesting thing: a window that would be separate from your main browsing experience and would allow you to engage with an AI assistant, basically, while you are browsing. Stay tuned; again, we’ll have to watch. If you’re a Firefox user, this will be something that you’ll be able to go and work with.
Switching gears again: WordPress is coming up on its final release for 2025. It’ll be WordPress 6.9, and the target delivery date is December 2nd. A lot of the release is focused on new APIs for developers, on performance improvements, and things like that, but there are two interesting parts for people listening to this podcast.
One is that it will be introducing something called notes, or block notes, which you can add in a similar way to how you might in Google Docs, where if you’re editing a doc you can leave a note, and others can respond to it, reply to it, close it out, whatever else.
This capability is coming into WordPress so that if you’re doing collaborative editing with other people on your team, you can leave notes and say, you know, “I don’t like this text,” or “We should really include an image here,” and then other people can reply to that and work with it.
This is all visible only within the editor interface. One of the big pushes for WordPress right now is to look at collaboration, and this is part of that: enabling you to work with your team and leave notes for each other. So stay tuned.
This is coming out December 2nd with WordPress 6.9. Another interesting piece in this release is the ability to very quickly, and without a plugin or anything, change the visibility of blocks, mostly so that they’re visible in the back end, in the editor, but not out on the front end. The important part about this is that if you’re testing things, if you’re working on developing a new interface or new pages and you want to try something out, you can get it all ready to go in the editor.
Then you can flip on the visibility, look at it, check it, whatever, and flip it back off if you want. Or if you’re preparing for a new announcement, you can have all of that ready to go on a page and just toggle the visibility of the blocks. Now, yes, there are other ways you could do this inside of WordPress, but this is a new way to work with it.
So, two features I personally find interesting in the upcoming release: the ability to toggle the visibility of blocks, and the ability to leave notes if you’re collaborating with other people. Switching to something completely different again: if you’ve been listening to me over the years, you know that I’ve been following what are called low Earth orbit, or LEO, satellite systems, such as Starlink, and what they can do out there.
For the last seven years or so, one of the other competitors has been Amazon’s Project Kuiper. One of the challenges, first of all, is that they’ve had issues launching their rockets, but they’re getting there: they’re getting their satellites up, and they’re getting ready to offer service.
Another challenge has been that people haven’t known how to pronounce it. Is it Kiper? Keeper? Cooper? What is it? Well, it was a project name; it wasn’t really meant to be out in the world, it was just an internal name. So they’re solving this by calling it simply Amazon Leo. That’s what they’re calling it now.
What I find fascinating (I mean, first, kudos to them; good to have it) is that now we’ll be hearing about SpaceX’s Starlink and we’ll be hearing about Amazon Leo. I find it rather clever, because of course people have been talking about satellite systems in LEO, which is low Earth orbit: a specific range from zero to 2,000 kilometers, or 1,200 miles, above Earth.
But now when you talk about LEO systems, you’ll sort of be like, well, are you talking about LEO systems in general, or are you talking about Amazon Leo? So, you know, kudos to them for being clever enough to take that name and run with it. Stay tuned; more tech, more things coming out.
With that, they’re gearing up to really launch this service and provide a competitor for Starlink. So especially if you’re in an area that has poor internet access, this may be an option at some point soon. Finally, ChatGPT just announced something: they are making it so that it won’t generate em dashes, for all those people who work with typography and punctuation and care about their em dashes.
One thing about ChatGPT was that it was putting in a lot of em dashes, which are the longer dashes, and that was becoming a signal that something was created by AI, or by ChatGPT. Well, now you can turn that off, so spotting AI text becomes a little bit harder, but perhaps the people who like using em dashes will be able to start using them again.
We’ll see. That’s all I’ve got for you this month. This is Dan York. You can find more of my audio at [email protected]. Back to you, Shel and Neville; I look forward to listening to this episode. Bye for now.
Neville Hobson: Thanks a lot, Dan. Great report as always.
There was some stuff in there that was really quite interesting. I think the one I would comment on was your last topic — what ChatGPT has done with the em dash.
Shel and I talked about this in a recent episode, and it is quite extraordinary how people get so exercised and excited about how it “indicates without any shadow of doubt” that an AI wrote something, no matter whether you did or not. There are lots of opinions flying about that.
But the thing you mentioned I found quite interesting, that OpenAI has done this so that you can tell the chatbot not to use an em dash and this time it’ll work.
Well, I started doing that about eight months ago in the custom personalization box. One of the things I’ve told it to do is to avoid using em dashes and instead use en dashes, with a space either side. That certainly goes against all the rules of grammar people talk about in terms of how you should use these things, but I like that. I prefer that.
I don’t like em dashes at all — particularly where they touch the preceding and following words in the sentence. It doesn’t look right to me. Yeah, I know, it’s been like that for centuries, I know all that. But they did that.
So I thought, okay, does that mean my personalization command will actually work properly now? Because sometimes it does, sometimes it doesn’t. I have to keep reminding the chatbot to do this.
What struck me as well is that when OpenAI announced this, both OpenAI and Sam Altman himself, there was no statement about, “Okay, this is what will happen now: if you put an em dash in, it’ll change it to a normal hyphen or an en dash,” or what. No one said anything. And I can’t find anyone with an answer to that question. So that’s still the question: what’s going to happen?
Shel Holtz: Yeah, I’m of mixed mind on the whole dash controversy. I’ve been using dashes as a writer for more than 50 years. I started using them extensively when I was setting type as a part-time gig in college, and a lot of the technical manuals that I was typesetting had dashes in them.
The reason — and I’ve said this before on the show — the reason AI is using dashes is because in all of the stuff on the web that it hoovered up as part of its training model, there were lots and lots of dashes that humans created.
I have no issue with an em dash. I think this whole attack on the dash is absurd, and it’s from people who don’t know punctuation.
On the other hand, I look at a lot of outputs I see from AI — and I’m not talking about stuff I plan to use in publications, just answers to questions or research that I’m going to factor into, say, a proposal — and I see the dashes misused. They’re put in places where commas belong.
So from that perspective, yeah, I’d rather do the placement of the dashes or en dashes myself. I mean, I don’t remember what the rules were, but there used to be rules around when to use an em dash and when to use an en dash, right? I think those have largely fallen by the wayside.
Neville Hobson: Yeah, there still are rules; I just largely ignore them. My usage goes completely against those rules.
But I find it’s a very good point you just made, because when I write — and this I find quite interesting — I’ll write a piece of text, say a first draft of an article, and I’ll run it through the chatbot to give me its opinion. And it will often “correct” commas and put en dashes in instead.
And I think: is this an American thing, or is it the kind of bastardization of the English language generally, that things are changing with variants of how people use the language, so it’s hard to know what correct syntax is now?
It doesn’t matter, in my view, as long as people understand what you’re trying to convey. Yet I recognize equally that to many people it is of significant importance. So this is not an argument that’s going to stop anytime soon, I don’t think.
Shel Holtz: No. And the other thing is, the training sets the models used contained not just all that human writing but also the rules of grammar and punctuation. So I suspect at some level it’s actually using them correctly, just not the way we use them in current modern English.
And that’s why I will change a lot of them to commas if I’m going to extract something from AI output and use it in a proposal or in a research document.
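For anyone doing that kind of dash cleanup on AI output by hand, it can also be scripted. A minimal sketch, assuming you want Neville’s spaced en dash style rather than commas (the function name is mine, purely illustrative):

```python
import re

# Illustrative cleanup of AI output: swap em dashes (—) for en dashes (–)
# with a space on either side, the style Neville describes. The pattern also
# collapses any whitespace already surrounding the em dash.

def em_to_spaced_en(text: str) -> str:
    return re.sub(r"\s*\u2014\s*", " \u2013 ", text)

print(em_to_spaced_en("The model—by default—loves em dashes."))
# prints: The model – by default – loves em dashes.
```

Swapping the replacement string for ", " would give Shel’s comma treatment instead, though commas need human judgment about placement, so a blanket substitution is only a starting point.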
Neville Hobson: So I suppose people like authors — and others, not just authors, but anyone who feels strongly about using dashes and who uses ChatGPT — I would say to you: put in a custom personalization line that tells the chatbot to use dashes, not take them out.
Shel Holtz: Yes, that’s absolutely right. And especially in technical documents now, because that’s where I saw most of them.
I want to give a shout-out to Robert Rose over at the Content Marketing Institute, among other ventures. This Old Marketing is a great podcast — Robert Rose and Joe Pulizzi. If you’ve never listened to that, I highly recommend it.
Robert has written an article called “Why AI Idea Inflation Is Ruining Thought Leadership and Team Dynamics.” And if you lead a content team, it probably feels less like a think piece and more like a documentary.
His core point is pretty simple: gen AI has made it incredibly easy for senior leaders and subject matter experts to generate ideas for content. Not thoughtful, worked-through concepts — just lots and lots of “We should do something on this”-type ideas. It’s like we turned content strategy into Netflix. There’s always something new in the queue, but more often than not, you don’t feel great about picking any of it.
This isn’t hypothetical. The Content Marketing Institute’s latest B2B content marketing trends report found that 95% of B2B marketers now say their organizations use AI-powered marketing applications. Ninety-five percent — that’s pretty much everyone.
And going back a bit, a previous CMI study found 72% were already using gen AI, but 61% of their organizations had no guidelines for it.
So we have this perfect storm: nearly universal use, very little governance, and leaders with what Robert calls “idea superpowers” that they didn’t earn the hard way.
You’ve probably seen this movie inside your own organization: an executive spends a weekend playing with ChatGPT and asks for “20 provocative points of view we should publish this quarter.” And Monday morning, your content Slack channel lights up with screenshots. None of these ideas are attached to actual budget, resources, or strategy — but because they came from the corner office and because they looked polished, they land on the team like assignments.
Robert’s argument is that this idea inflation doesn’t just create more work; it erodes trust between leaders and content teams. The strategists and writers become order-takers, constantly reacting to an AI-fueled idea fire hose instead of shaping a coherent editorial agenda.
Over time, resentment builds. Leaders feel like, “I keep bringing you ideas and nothing gets done,” while teams feel like, “You keep throwing spaghetti at the wall and calling it thought leadership.”
The data backs up that this isn’t just a workflow annoyance — it’s starting to show up in audience behavior. One study from last year, from Bynder, found that about half of customers say they can spot AI-written content, and more than half say they disengage when they suspect something was generated by AI. We referenced this earlier, Neville.
Another study published this year looked at brands using gen AI for social content and found that overt AI adoption actually led to negative follower reactions unless it was blended carefully with human input.
So the idea treadmill doesn’t just burn out your team; it risks flooding your channels with content that audiences increasingly mistrust.
At the same time, we’re seeing a massive shift on the supply side. Axios, working with Graphite, reported that the share of articles online created by AI jumped from about 5% in 2020 to nearly half — 48% — by mid-2025.
In other words, the content universe is experiencing its own inflation problem: a lot more stuff, not a lot more meaning.
So where does that leave content marketing leaders? Robert’s prescription — and I think this is where communicators really earn their pay — is not “turn the AI off.” It’s to reassert our role as editors of the idea layer, not just the content layer.
That starts with reframing the relationship with your thought leaders. Instead of treating every AI-generated list as a backlog to be cleared, treat it as the raw ore. You sit down and say, “Great, let’s pick one of these and go deep. Which of these ideas would you still fight for if AI hadn’t made it so easy to generate 20 others?”
This is where the leadership part comes in.
The CMI 2026 Trends Report — yes, we’re at the point where we’re looking at 2026 trends — makes the point that the teams who are winning aren’t the ones shouting “AI” the loudest; they’re the ones doubling down on fundamentals like relevance, quality, and team capability, and letting AI breathe more life into those efforts.
In practical terms, what does this mean?
It means putting a simple idea filter in place. If an idea doesn’t align with your documented content mission, target audience, and a defined business goal, it doesn’t make the calendar — no matter how clever the AI prompt was.
It means creating a shared point-of-view backlog where leaders can park AI-assisted concepts, but agreeing that only a small number graduate into actual content every quarter.
And it means being transparent with your team about volume: “We’re going to say no to more ideas faster, so we can say yes to a few that matter.”
There’s also a morale decision here. Other research shows a weird tension: a majority of marketers say AI makes them more productive and even more confident, but a lot of them also fear it could replace parts of their role.
If you’re leading a content team, how you handle idea inflation becomes a signal about your priorities. Are you using AI to respect people’s time and focus on better work — or are you using it to flood them with tasks they can never realistically complete?
And while Robert’s article is squarely aimed at content marketing, I don’t think it stops there. The same dynamics are starting to show up in internal communications, executive comms, even investor relations. Anywhere a leader can spin up “10 talking points for our next town hall” with a prompt, you’re going to see this idea inflation in practice.
If communicators don’t step in to slow that down and curate it, we risk overwhelming every stakeholder group with more, faster, shallower content — and training them to tune us out.
So I read Robert’s piece less as a complaint about AI and more as a leadership challenge. In a world where ideas are cheap and infinite, can you be the person who protects your team, your audience, and your brand from inflation?
Neville Hobson: Yeah, it’s a very good piece. I agree. I’m not sure I like the phrase “idea inflation” — it sounds pretty gimmicky to me, I must admit — but it does capture it quite well.
I found it really interesting reading Robert’s article where he talks about “why AI feeds the engagement crisis.” Now that’s a phrase I can get my head around: engagement crisis.
There are some to-the-point views here which make you think, “Absolutely right.”
For instance, he says when people hear the phrase “employee engagement” they tend to picture enthusiasm — people who are motivated, satisfied, and inspired by their work. But he says employee engagement means more than how people feel about their jobs; it’s also about how much meaning they find in the relationships that shape their work, and AI is causing those relationships to fracture.
“The dynamic between marketing leader and content practitioner, once a creative dialogue, has become transactional. The leader produces ideas, the practitioner packages them.” That’s in line with what you were saying. “Each side feels overextended, underappreciated, and increasingly indifferent.”
And I like how he progresses the thinking here, because you can picture this. “Nobody challenges ideas anymore because nobody loves or hates them enough to care about getting them right.” That’s a hell of an indictment, but I think it’s quite spot-on about some of the behaviors that are happening.
“In that scenario,” he says, “there is no right. When the origin of the idea and the expression of it both come from a machine, neither side can recognize originality or craft when they see it.”
These are alarm bells, to me, that are ringing — that leaders need to really pay attention to.
It’s difficult, though, because it reminds me of two or three cartoons I’ve seen recently on LinkedIn — different, but broadly the same — which show a kind of flowchart for an idea: a press release or an announcement you need to make, starting out simple.
So it goes to the next step; then someone says, “Actually, we need to make sure we include that.” Okay, fine, you do that. Then the CEO chimes in with six things he reckons need to be in there as well. And so it goes around all these various steps until you’ve got this bloated thing that gets to the final point in the cycle, with the person at the top of the loop saying, “This is terrible. This is rubbish. This isn’t telling a story. We need to be a lot simpler than this.”
The communicator — by the way, the smart person in this story — had kept the original draft: short, concise, three bullet points. So she submitted that, and everyone said, “Oh, that’s what we need; we’ll approve that.”
To me, that’s a great analogy for this. But someone’s got to recognize the bloat — the inflation of ideas, let’s say — that arises in situations where you’ve got so many people who’ve got their own stakes in the ground and their own agendas they follow.
This isn’t a criticism; it’s a recognition of reality in organizations.
So the communicator in that little story I just told was the smart person here. You’ve got to navigate this sort of thing when it happens. The marketing folks who get dumped on from the corner office — someone at that stage has got to recognize the likely trajectory of all of this and plan accordingly, so that when it gets around to the top of the circuit, they go back to the smarter idea.
It sounds easy saying that, doesn’t it? In reality, it’s not quite like that. Nevertheless, this is a leadership issue. This is not a marketing or content issue — this is a leadership and management issue, it seems to me.
Shel Holtz: It is, and I think it’s an opportunity for communicators to demonstrate some leadership. Because, as Robert says, it’s really easy for an executive over a weekend to get a model to produce 20 ideas for “thought leadership.”
That’s not thought leadership. We’ve talked about thought leadership fairly recently on FIR. This is subject matter expertise that brings new thinking, new angles, sheds new light on the situation. You have a unique perspective to share — that’s thought leadership. It’s generating content that makes people go, “Wow, I hadn’t thought about it that way before; now I’m thinking of it.” You’re leading thinking — that’s what thought leadership is.
I’m not sure that, “Here’s a list of 20 things Gemini came up with for me,” is anywhere near thought leadership unless you see one that you actually have unique perspective and expertise on. And you say, “That’s a great one to talk about.”
If that’s what you’re using AI for, great. But if you’re just copying and pasting that list and sending it to your communications team and saying, “Write these, these are thought leadership pieces,” that’s just going to erode trust in that leader and the organization they represent.
I’ve got no issue with generating lists with AI models — I do it all the time.
On our intranet we have a “Construction Term of the Week,” and I exhausted the list that our engineers sent us, and they’re not inclined to add more. So now I’ll say, “Give me 20 terms related to MEP” — that’s mechanical, electrical, plumbing — and I’ll pick one and that’ll be the term of the week. That saves me a lot of research. It’s a great use of AI, I think.
But if I were to say, “Give me 20 ideas for thought leadership that I can propose to my CEO so we can get a thought leadership article up on LinkedIn,” man, I would never do that. That’s a terrible idea — but evidently, lots of people are.
Neville Hobson: Plenty to think about here. So let’s move on to another topic with plenty to think about.
Question: is it okay to use AI-generated images for LinkedIn profiles?
Over the past few months, we’ve seen AI-generated headshots spreading across LinkedIn. I certainly have. The ultra-polished portraits with perfect lighting, perfect posture, and, in many cases, a slightly uncanny resemblance to the person they represent — that description defines most of what I see.
Your first thought when you see it is, “They’re clearly AI-generated,” and you don’t necessarily have a critical take on it, but you note it.
I have to admit, I tried this myself recently — a few months ago, in fact. For a while, my own LinkedIn profile featured an AI-generated photo. It looked professional enough with uncanny realism, but the longer it sat there, the more uncomfortable I felt about it.
It wasn’t quite me. What if people thought it was me and later realized it wasn’t? I hadn’t said it was an AI-generated image created from a selection of actual photos of me. What would the effects be?
Eventually, I removed it and used a real photo.
That personal hesitation is exactly what Anna Lawler explores in a thoughtful LinkedIn article about the ethics of AI headshots. Lawler is Director and Head of Digital and Social Media at Greentarget, a corporate comms agency based in London.
She describes the pressure to have a sharp, executive-style image ready the moment a new role is announced — something many of us will recognize. AI offered her a quick, clean solution. But then came the real question: should she use it?
Her piece gets to the heart of what communicators are wrestling with right now — well, many communicators, I would say.
What does authenticity look like when technology can generate a version of you that is polished and accurate, but still artificial? Does using that image strengthen your professional brand, or does it introduce a small crack in trust?
What if you don’t disclose how the image was created? And does it matter if no one can tell the difference?
Anna’s LinkedIn post attracted many comments about whether to do this. One comment was blunt: “Just no. Not at all. Never.”
Another explored the idea a bit: “You’ve used an AI image of yourself which looks dead like you — so much that your dad couldn’t tell the difference, other than to say you look well. How different is this to putting a filter on a real photo of you? So no major harm done for a personal LinkedIn photo. But what happens when PRs and marketers start doing this on behalf of others?”
Another analyzed the situation: “Most images — portrait or otherwise — are subject to some form of post-production. It is similar to editing a paragraph of text. You take the original content and adapt it to fit the requirements of the medium, ensuring the tone and voice are appropriate. In the case of a photograph, a human may use Photoshop. In the case of text, they can do it in Word or use Grammarly. If the final decision of whether or not to accept the edits lies with the human, does it matter what method was applied to make them?”
If the purpose of a profile photo is to represent who you are, does an AI-enhanced or AI-created version cross a line? Does “close enough” count?
Anna makes a thoughtful distinction between personal use and corporate use — on websites or official materials, where misrepresentation risks are far greater. She also highlights the reputational and ethical factors that communicators must now weigh, because our profile photos are no longer just photos. They signal identity, credibility, and intent.
It raises a bigger question for all of us: as AI becomes more deeply woven into our professional lives, where do we draw the line between convenience and authenticity? And how do we guide our organizations through those decisions when the norms are still being formed?
Now I know you’ve got some views about this, Shel, so what do you think?
Shel Holtz: Hell yeah, I have some views on this.
I’ve stated before on the show and elsewhere that I think the line is around deceit. Are you trying to deceive somebody? And if the use of AI could lead somebody to be deceived, then I think you need to disclose. If not, I don’t think there is any compulsion to disclose.
What if I have a photo of me — and that’s what I use — but I use a service like Canva or Photoshop to remove the background and put in an AI-generated background? Is that okay?
AI is a tool. It’s just a tool.
We use tools for… I mean, we use photos — there was a time when there were no photos available. You had to hire an artist to paint your portrait if you wanted somebody to know what you looked like.
I think the utter prohibition that some people are suggesting on AI images on LinkedIn is, frankly, stupid. I disagree with it wholeheartedly.
My profile picture on LinkedIn is AI-generated. Now, why did I do that?
When I started at Webcor in 2017, there was a professional photographer who was taking everybody’s photo, so your profile photo on the intranet directory was consistent and professional. I used that everywhere for about six and a half years. Then I lost 70 pounds, and frankly, I didn’t look like that anymore.
I didn’t have access to a professional photographer through work, and I didn’t have the time to go sit for a portrait. So I did that thing where — and I didn’t use one of the paid services; I think it was Gemini — I gave it 20 headshots of me looking the way I do now, post-70-pound loss, and I said, “Aggregate these into a professional headshot.”
I had to do it eight or 10 times before I got one that actually looks like me, where you can’t tell the difference. And that’s the one I’m using.
Is it misrepresenting me? No, it’s not. It looks like me, and I am fine with that. I don’t think I’m deceiving anybody. I don’t think I’m pulling the wool over anybody’s eyes. It’s me.
I don’t have any issue with that at all, and I can’t imagine an argument that would convince me otherwise.
Neville Hobson: No, I get you 100% on that, Shel.
In my case, I mentioned I had an AI-generated image as my LinkedIn profile picture, which I removed. There’s now a normal shot; it’s not as good, in my view, as the one I took down.
But that same picture, large size, I’ve got on my About page on my website. And there’s nothing there saying it’s AI-generated. People I’ve shown it to — only about four or five — couldn’t tell the difference that it wasn’t real when I told them it was AI-generated.
So your point about deceit is a very valid one.
If I put a picture of me up there looking slightly thinner maybe, with fewer age-driven gray hairs appearing, and I made myself blond maybe, changed my eye color or something — that’s not me at all. That, to me, would cross the line.
But on the other hand, I entirely get the illogic — if I can use that word — of people who are critical about this. That is part of the platform you’re on, and people will judge you.
Now, I genuinely don’t care much what people think of me in that sense, but this can have impacts.
I don’t want to do something that stimulates that kind of discussion or opinion-forming or commenting. And people are doing that a lot.
So to me, it’s simple: this is not a huge deal, to have an AI-generated image up there, when I can just have a normal pic that I take with my webcam and touch it up in Photoshop — which I do. I had one previously where I changed the background because I didn’t like the background.
That happens all the time. That’s not deceit. Nevertheless, there are some things you might want to take a stand on. This isn’t one of them for me — “I’m not going to use it” or “I am going to use it.”
So why have I kept it on my blog, you might ask? That’s part of a simple experiment. No one’s noticed or commented, and it actually fits the way I want to portray myself in the context of what I’ve written about myself on that page.
Shel Holtz: Thematic consistency.
Neville Hobson: Yeah, that’s different from using it on LinkedIn, because LinkedIn is a wholly different context. So I’ll keep it up there until someone screams loud enough, saying, “You’re a fake, you’re deceitful,” which I don’t believe is going to happen.
Shel Holtz: The camera is a tool. A photo of you is not you; it is a representation of you that was captured by the camera. What if the white balance was off? What if the depth of field was off? There are so many things that a camera captures that are inaccurate or inconsistent.
AI is a tool. In five years, no one’s going to be having this discussion. It’s going to be so common, and the outputs are going to be so spot on that this isn’t even going to be an issue.
I just think if people are talking about this, they need to find more fruitful things to spend their time talking about.
Neville Hobson: This is always going to be here, and it depends on how you want to judge it.
But to me, there’s another thought to throw into the mix here, which we’ve touched on previously: this is not just about a photo. There’s more to it than just a photo of someone. This is about your identity. This is about your credibility. This is about how others perceive you. That does matter — to varying degrees, depending on the industry you’re in and how you portray yourself and the people you’re connected with.
So it’s a preview, I suppose you could argue, of wider ethical decisions that we must make as AI is embedded everywhere — until it gets to the point, as you say, where no one’s talking about this anymore. We’re not at that point yet.
Shel Holtz: Maybe I’ll take my LinkedIn portrait and have the AI generate it in the style of a Pixar 3D animated movie and see what people say.
Neville Hobson: Well, you used to have a cartoon up there back in the early days.
Shel Holtz: I did. That was a service that would take your photo and turn it into a cartoon, an illustration. It was a service that used freelance artists. They would parcel it out to one of them. It was pretty cheap; you got it back in multiple file formats. It was great.
Neville Hobson: There you go.
I think I can answer my own question about why I’ve kept it on my blog, because the blog serves multiple purposes. It’s no longer a business site — I’ve changed what I do. It’s much more a personal site that’s intermingled with business. That’s different to LinkedIn, which is a social network with a business focus — that’s different. So that’s why I keep it up there, I guess.
Shel Holtz: All right. So if an executive has their photo taken and they have a makeup artist work with them, is that an accurate representation of them? Do they need to disclose that they were wearing makeup for this photo?
Come on. Let’s talk about more serious things, folks.
Neville Hobson: Like I said, logic is not part of this discussion; it’s emotion-driven. This is again a reflection, I think, of accessibility to ways to voice your opinion if you have one — and everyone has one, and they are voicing it.
Shel Holtz: Clearly. Well, let ’em.
Neville Hobson: I say thank you to Anna Lawler because that prompted this. She wrote the piece at the beginning of the year, but it did prompt all of this in my mind. I think it’s worth reading, so there’ll be a link to it in the show notes.
Shel Holtz: Well, I read an article recently with a pretty brutal headline: “Your Staff Thinks Management Is Inefficient. They May Have a Point.” This was in Inc. magazine.
It’s just the latest in a long string of big changes that employees feel are being done to them rather than with them.
The article by Bruce Crumley leans on new data from Eagle Hill’s 2025 Change Management Survey. In the past year, 63% of U.S. workers say that they’ve been through significant change: tech like AI, new products, return-to-office shifts, headcount changes, cost-cutting, cultural changes, acquisitions. But only a third of them think those changes were worth the effort.
A lot of them say their efficiency actually went down, their workload and stress went up, and the supposed innovation never really materialized.
Now, when Eagle Hill digs into the “why” around this, the picture gets even more familiar. Employees say management is picking the wrong priorities, not managing the rollout well, not supporting people as they adapt, and not monitoring how the change actually lands.
Only about a third feel leaders really listen to their input on what needs to change. Forty percent say they’re basically ignored.
The line that jumps out for communicators is Eagle Hill’s conclusion that the key to successful change is not what you change, but how you change — and that change is experienced at the team level, not somewhere on the org chart.
Now, layer AI on top of that. From the employee perspective, there’s a pretty consistent story emerging: they’re interested in AI, but they don’t feel included or supported.
Eagle Hill’s tech and AI research found that 67% of employees aren’t using AI at work yet, but more than half of those non-users actually want to learn about it. At the same time, 41% say their organization isn’t prepared for the rise of AI.
Workday’s global survey paints a similar picture. Only about half of employees say they welcome AI in the workplace, and nearly a quarter aren’t confident their organization will put employee interests ahead of its own when implementing it.
Leaders are more positive about AI than employees are, but they share that same lack of confidence about whether the rollout will be done in a people-first way.
And there’s a trust gap on top of that. Gallup finds only 31% of Americans say they trust businesses to use AI responsibly. Over two-thirds say “not much” or “not at all.”
Let’s make it even spicier. A recent global study from Dayforce found that 87% of executives are using AI at work compared with just 27% of employees. Execs are out ahead, using AI heavily, while a big chunk of the workforce is still on the sidelines — worried, undertrained, or just not invited in.
So if you’re an employee sitting in the middle of all this, what does it look like?
You see leadership trumpeting AI as the future. You get more tools, more dashboards, more “transformations,” as they call them. Your workload goes up during rollout. Your voice doesn’t seem to shape the priorities. And you’re told it’s all about efficiency and innovation while your own day-to-day experience feels more chaotic.
“Management is inefficient” starts to sound like a very reasonable conclusion.
That’s where communicators can earn their keep, especially around AI.
First, we can make the “why” legible. A lot of AI change stories stop at “This is cutting-edge” or “This will make us more efficient.” The Eagle Hill findings are basically a giant flashing sign that says that’s not enough.
We need to tell a story that starts with the team: What pain point is AI solving for you? What are you going to stop doing because this is now available? What does success look like in your specific function, not just on an earnings slide? Helping leaders anchor AI messaging in outcomes people actually care about is step one.
Second, we can bring employees into the design of the change rather than just leaving them on the receiving end. That means building in genuine listening — pulse surveys that ask, “What’s getting harder as we roll this out?” Small-group sessions where teams can talk about how the AI actually fits into their workflow.
Storytelling that highlights not just the shiny pilot, but the tweak that came from frontline feedback. And then — and this is the part we skip so often — closing the loop and saying, “Here’s what you told us, and here’s what changed.”
Same as surveys, right? We issue surveys, we get the feedback, and maybe changes are made — but we don’t tell anyone. If 40% of people feel unheard during change, that loop is our job.
Third, we can equip managers to be translators instead of amplifiers of confusion. Most people don’t experience “the organization”; they experience their manager. So when Eagle Hill says the team should be the core unit of change, that’s a giant invitation to communicators to build manager toolkits around AI.
Simple talk tracks: “Here’s how to explain this change in two minutes.” “Here’s what to say if people are worried about their jobs.” “Here’s how to be honest about the short-term workload bump.”
FAQs, slides, even suggested phrases that sound human instead of legalistic — that’s all in the comms wheelhouse.
Fourth, we can push for pacing that matches reality and help leaders talk about trade-offs. A lot of the resentment in these surveys comes from people feeling like change is something piled on in addition to their regular day jobs.
Eagle Hill’s advice to slow down, phase changes, and temporarily ease workloads isn’t just an HR tactic; it’s a narrative opportunity.
Imagine the difference between: “Here’s another AI tool, please adopt it,” and: “For the next eight weeks, we’re pausing X reports and Y meetings so you have time to learn this new workflow. Here’s the schedule. Here’s where to get help.”
We communicators can frame that pacing as a deliberate, respectful choice.
And finally, we can insist that AI change stories include trust as a first-class citizen, not a footnote. That means naming the concerns, not dancing around them.
Employees are reading headlines about bias, surveillance, job loss. They’re seeing that most people don’t fully trust businesses on AI. We can help leaders say out loud, “Here are the guardrails; here’s what we will use AI for, and here’s what we will not. Here’s how we’ll measure the impact on workload. Here’s how you can challenge a decision if you think AI got it wrong.”
That transparency is the only way to close the trust gap.
If we don’t do any of this, AI just becomes the latest exhibit in the “management is inefficient” file — another transformation employees experience as stress without payoff.
If we do our jobs well, AI can actually become a proof point that this time, the organization learned from the last wave of change — that it listened, it paced itself, it treated teams as the unit of change, and it used communication as a way to share power, not just spin the story.
Neville Hobson: I have to admit, I’m quite shocked to hear the picture you’ve painted there — that it’s so bad. Is it truly that bad?
Because this is actually, to me, like what you just said, particularly your concluding part — this is Leadership 101, for Christ’s sake, and yet so many people aren’t doing this.
Shel Holtz: Well, if the research is accurate, then it really is that bad.
Neville Hobson: What the hell is going on?
This actually touches on everything we’ve said so far in this episode — what leaders need to do in certain situations. Don’t allow it to be like this.
The whole idea of “management” being all up to speed with AI while employees are completely in the dark and don’t have a clue how to use the tools — I find that truly hard to believe as a significant factor across the board.
That doesn’t gel with some other research I’ve seen here in the UK — and mostly in the US — where the issue is getting leaders to embrace it, while employees are out there experimenting, which is why proper guardrails and guidance aren’t in place.
So this is a pretty shocking state of affairs, it seems to me, Shel.
Some of the things here are so obvious that I just wonder why people enable this situation to be the norm, if it is as portrayed in this article.
There are a lot of tips though — I have to say everything you need to know about what to do is here. So pick this up and read it, for God’s sake, please.
Shel Holtz: I remember early in my career, I was at a Ragan Communications conference and a CEO was speaking. He said he believes that every CEO, as soon as they sit in the CEO chair, gets hit by a “stupid ray” aimed right at their head — because they stop listening.
They think, “I’m the CEO. I’m here because I know everything, and I can make these decisions in a vacuum. I am at the top of the food chain.”
I think that’s happening right now. If you look at the number of layoffs that are happening, and AI is a factor in these — they’re coming right out and saying it. They’re not hiding it; they’re saying, “AI can make us more efficient.”
They’re not talking to the teams that do the work, to find out, “If we end up with three people instead of 10 because you think AI can do the work, we happen to know that’s not the case, and this is going to make us less efficient.”
There’s not a lot of listening going on in these decisions being made. There’s not a lot of querying of the teams to find out exactly how they can use AI to be more efficient and what that means for the staffing of the team.
I think there are executives who say, “I have this tool, I’m in charge, I’m slashing the workforce.” I think that’s what’s happening. And I think that’s why so many employees think that the leaders are now inefficient.
Neville Hobson: Well, it’s missing completely the voices that — as we’ve discussed in previous episodes, and indeed thinking back to our interview with Paul Tighe from the Vatican — it’s missing the humanity.
It’s missing, “How does AI augment and improve how people do their jobs, not replace them? Not ‘become more efficient, therefore we don’t need so many people here.’” That voice is missing.
To me, that’s the essential part. You could extend that thought: that voice is not just about AI, although that’s a huge element because it is permeating organizations, and in many cases not in a good way, because the conversations are all about becoming more efficient and not needing people. It’s part of that bigger picture.
This article does talk about, in its concluding parts, “Change is experienced collectively, not individually.” That means a team, not the org chart, must be the core unit of change.
They talk prior to that paragraph about how the majority of modern workplaces are shaped by the teams that drive most activity and success. Initiatives come from the top, but success relies on the base embracing them. These are the fundamentals of leadership, surely.
I’ve noticed here — as an aside — that in some of these things you hear about that are going wrong in organizations, the people leading are just damn incompetent. Some of the speeches and things they say in public exhibit nothing but utter incompetence, and they should be fired.
That’s a bigger story, frankly, but it’s part of it. The most useless people leading these organizations are dragging them down, and the employees are the ones who are suffering. I’m straying into big-picture politics and opinions, but nevertheless, that’s what you see.
Shel Holtz: I’ll join you in that.
It seems clear to me that a lot of leaders are abdicating the principles of leadership to the exuberance they feel about the potential for AI, and they’re just running with it. I don’t think that’s going to bode well for the performance of their organizations, especially when they’ve lost the trust and confidence of the people who are expected to execute on all of this.
Neville Hobson: So on that note, we would hope that the next episode will have a lot of good news.
Shel Holtz: I sure hope so.
But that’ll be a 30 for this episode of For Immediate Release. Our next long-form monthly episode — we will return to doing this toward the end of the month. We’re planning on recording that on Saturday, December 27th, and releasing it on Monday, December 29th.
Until then, go back to the beginning of the episode and learn about all the ways that you can comment. And we will have our midweek short episodes beginning in a week or so.
And until then, that will, in fact, be a 30 for this episode of For Immediate Release.
The post FIR #489: An Explosion of Thought Leadership Slop appeared first on FIR Podcast Network.
For the second year in a row, Coca-Cola turned to artificial intelligence to produce its global holiday campaign. The new ad replaces people with snow scenes, animals, and those iconic red trucks, aiming for warmth through technology. The response? A mix of admiration for the technical feat and criticism for what some called a “soulless,” “nostalgia-free” production.
Shel and Neville break down the ad’s reception and what it tells us about audience expectations, creative integrity, and the communication challenges that come with AI-driven content. Despite Coke’s efforts to industrialize creativity — working with two AI studios, 100 contributors, and more than 70,000 generated clips — the final product sparked as much skepticism as wonder.
The discussion explores:
Why The Verge called the ad “a sloppy eyesore” — and why Coke went ahead anyway
The sheer scale and cost of AI production (and why it’s not necessarily cheaper)
Whether Coke’s campaign is marketing, corporate signaling, or both
How critics’ reactions reflect discomfort with AI aesthetics in emotional brand spaces
Lessons for communicators about context, authenticity, and being transparent about “why”
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, November 17.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville Hobson:
Hi everyone, and welcome to For Immediate Release, episode 488. I’m Neville Hobson.
Shel Holtz:
And I’m Shel Holtz. Coca-Cola is back with a holiday spot created using AI for the second year running, and the blowback is about as big as the media buy.
If last year’s criticism centered on uncanny humans, this year they tried to sidestep that by leaning into animals, snow, and those iconic red trucks. The problem is that a lot of viewers still found the whole thing visually inconsistent and emotionally hollow — more of a tech demo than Christmas magic.
The Verge didn’t mince words, calling it a “sloppy eyesore.”
This wasn’t a lone creative prompting a model in a dark room. According to The Verge, Coke worked with two AI studios — SilverSide and Secret Level — involving roughly 100 contributors. So when people say AI is taking work away from humans, this example complicates that argument. The project generated and refined over 70,000 clips to assemble the final film, with five AI specialists dedicated to wrangling and iterating those shots.
If you think of AI work as cheap and easy, that scale tells a different story. This was massive, industrialized production. Despite all that, audience reaction has been harsh. Delish collected consumer responses labeling the ad “soulless,” “nostalgia-free,” and — my favorite phrase — “intentional rage bait.” In other words, people felt provoked, not moved.
The general sentiment is familiar: “Just bring back the classic trucks or polar bears and let real filmmakers work their craft.” The level of blowback reflects a mainstream discomfort with AI aesthetics invading a beloved ritual.
So why is Coke doing this again? Partly for speed and efficiency, sure — but the more interesting rationale is signaling. As Forbes argues, this isn’t just marketing, it’s corporate communication: a message to investors and partners that Coke is a modern operator experimenting across its value chain. In that sense, the ad is a press release in moving pictures — “We’re innovating.”
Whether consumers cheer or jeer, the signal still gets sent.
For communicators, I see three takeaways.
First, scale doesn’t guarantee soul. You can throw 100 people and 70,000 clips at a film and still end up with something that feels off. Craft and continuity remain stubbornly human problems, and current video models still struggle with temporal consistency and art direction.
Second, context beats novelty. Holiday ads are about rituals and memories. When the urge to adopt AI clashes with audience expectations for warmth and authenticity, “innovative” can come across as “indifferent.” If you’re going to bring AI into sacred brand moments, you need strong creative guardrails — and maybe keep flagship storytelling human-first until the tools catch up.
Third, be explicit about your “why.” If your real audience is Wall Street or prospective partners, say so — ideally without sacrificing the consumer experience. Coke’s narrative of blending human creativity with new tools can work, but only if the end result still feels like Coca-Cola. Otherwise, you’re asking consumers to bankroll your R&D with their attention during the most sentimental time of the year.
These trucks will keep rolling — and so will the debate — until the models solve for continuity and feel. Brands risk trading wonder for workflow, and audiences know the difference.
That said, I watched this ad last night during Monday Night Football. Looking at it through that lens, I didn’t see what the critics were talking about. I suspect most of the audience didn’t either. The vast majority probably aren’t aware it was generated with AI and didn’t see any problem with it. I think the hypercritical responses are mostly from people who are following the AI conversation closely — and maybe looking for an excuse to slam something that wasn’t made by human creators.
Neville, what do you think?
Neville Hobson:
I watched the video on YouTube — both the global version and the one Coca-Cola uploaded for European audiences. Honestly, I couldn’t tell the difference. They’re exactly the same length. Like you, I thought it was well done.
It was pretty clear to me within a few seconds that it was AI-generated — not because it looked AI-generated, but because of the scale and scope. You just know they’d use AI for something like this.
Coke has used this theme for years — the trucks, the snow, the feel-good singing. This time, there aren’t any humans front and center; it’s all animals. But as storytelling, I thought it worked.
That said, I did see some severe critiques, particularly from design industry voices. Creative Bloq, for example, called it an example of “how a company risks decades of hard-won brand equity through the use of nascent tech that’s still not up to the job.” I think that’s a bit unfair and shows a lack of understanding of what Coke was really trying to do.
There’s also a fascinating behind-the-scenes video Coke posted. It’s narrated by AI voices — the same ones from NotebookLM, actually — so it’s an AI explaining an AI. And the prompts they show are incredible: dozens of paragraphs for a single shot. This wasn’t a one-line “make a Christmas ad” job.
That explainer reinforces your Forbes point — this could be as much about corporate signaling as marketing. Personally, I see it more as a brand experiment than a corporate ad, but I can see both perspectives.
And yes, some critics are inevitably Coke detractors. One UK designer, Dino Berberich, posted screenshots showing technical errors — missing truck wheels, misaligned shots, and so on. Maybe Coke fixed those later, maybe not. But if they take that kind of feedback seriously, it’ll be invaluable.
Overall, I think it’s what you’d expect from Coke. Set aside the fact that it’s AI — it’s actually quite good. It continues the “Real Magic” theme they’ve been running for years. I remember one a couple of years ago with paintings in an art gallery coming to life when they got a Coke — also beautifully done.
So this feels like the next step in their evolution. Most viewers won’t realize it’s AI unless they’re already thinking that way. Awareness is growing, but the average person just sees a nice Christmas ad.
Of course, we’re now in a world where people start by asking, “Is this AI?” before saying, “Wow, what a great image.” That mindset can distract from the story — but it’s part of the landscape now. This kind of work will only get better, and Coke is helping to move it forward.
Shel Holtz:
Yeah, I agree. And if you look at Berberich’s LinkedIn post, you can see the issues he points out, but that’s not how most people watch ads. They’re not stopping every frame to analyze wheel placement. They’re watching during a football game or between shows. Most people just see a Coke commercial with some fuzzy bunnies.
One critique I read said the ad couldn’t decide whether it wanted cartoony or semi-realistic animals. I didn’t notice that. If you go in looking to criticize AI, sure, you’ll find something. But again, that’s not most people.
The YouTube comments are full of people saying things like “I’ve never wanted a Pepsi more in my life.” But honestly, nobody’s switching brands because of an ad like this. People drink Coke or Pepsi based on taste, not commercials.
As for Forbes’s point about corporate signaling — I don’t think this was meant as a corporate ad, but rather a way to say, “We’re embracing AI.” And the fact that they released a behind-the-scenes explainer reinforces that. They’re telling the world they’re leaning into this technology, iterating, and getting better at it.
You know, I don’t remember this kind of backlash when animation shifted from hand-drawn to CGI. That shift also displaced artists — the inkers and colorists who worked on traditional cels. This feels like a similar transition.
You still have people giving thought to story, design, and imagery — but the tools have changed. Does it have the same soul as a Pixar film? No. But then, Pixar doesn’t have the same soul as early Disney animation either. Time marches on. Deal with it.
Neville Hobson:
Exactly. And a lot of the negativity is just the nature of online discourse these days. Anything posted publicly attracts critics, trolls, and nitpickers. Among them, there are some valid points, but it’s hard to find them amid the noise.
The explainer video also includes a section showing how Coke evolved its Christmas ads — from sketches to animation to AI-rendered realism. It’s fascinating to see how deliberate that process was. Again, Coke released this publicly, which supports your point: this is about transparency and experimentation.
So yes, critics have a right to their opinions, and some make constructive points. But for most people — what we’d call “Joe Bloggs” here in the UK — it’s just a nice Christmas ad. They’re not thinking about AI strategy.
From real trucks to AI ones, Coke is pushing creative boundaries. Some say they should’ve shot it live and hired more people, but there’s no crime in trying something new. They’re pushing the envelope, and I think they’ve done a pretty good job.
Shel Holtz:
And to reiterate: they did employ people. Two studios, probably dozens of professionals. I doubt they saved much money doing it this way. They’re just moving forward with the technology — and that’s the point.
And that will be a 30 for this episode of For Immediate Release.
The post FIR #488: Did a Soda Pop Make AI Slop? appeared first on FIR Podcast Network.
What happens when the AI conversation turns from a quiet side road into a crowded superhighway? Recently, Martin Waxman — digital strategist and LinkedIn Learning instructor — pressed pause on the churn to make room for curiosity, quality, and quiet. He’s not quitting; he’s recalibrating: publishing less often, thinking more deeply, and reminding us not to let AI do the thinking we should be doing ourselves.
For communicators, that raises bigger questions: When do we slow down? How do we trade volume for value? And what does “good enough” look like when our audiences are drowning in near-identical insights?
Neville and Shel dive into this topic in today’s short, midweek episode of “For Immediate Release.”
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, November 17.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Raw Transcript
Shel Holtz:
Hi everybody, and welcome to For Immediate Release. This is episode number 487. I am Shel Holtz.
Neville Hobson:
And I’m Neville Hobson. In this episode, we have a story that’s less about technology itself and more about what happens when you pause to think about the pace of it all. A few days ago, Martin Waxman published a reflective piece on LinkedIn called “Knowing Where to Start and When to Make a Shift.” Martin, as many of you will know, is a Canadian digital strategy consultant and LinkedIn Learning instructor. He was our guest in an FIR interview in January. Martin writes the AI and Digital Marketing Trends newsletter, which has become one of LinkedIn’s most successful, now reaching well over half a million subscribers. It began humbly in 2020 when Martin set out to explore the intersection of AI and marketing at a time when few others were doing so. His goal was to reach maybe 10,000 readers—he’s now 50 times past that. But this isn’t a story of relentless growth or scaling up. It’s the opposite. We’ll explore this next.
Martin’s post isn’t a farewell—it’s an adjustment. After years of writing about AI’s impact, he’s decided to slow the pace of his newsletter and make space for deeper thinking. In his words, the AI conversation has moved from a path less followed to a crowded superhighway. Everyone seems to be writing about the same things and the constant noise can be exhausting. So he’s taking stock, reevaluating his direction, reshaping his LinkedIn Learning course, and thinking about how to bring a fresh human perspective to the next stage of this conversation. There’s a generosity and humility in that move that feels rare today. He talks about resisting the temptation to let AI do the thinking for us—stop relying on AI as a crutch, embrace the blank page, don’t give up on your brain. He’s reminding us that creativity and discernment still start with people, not prompts. And he’s choosing to slow down—to step back from the rapid churn of publishing and make room for curiosity, quality, and quiet.
That theme of slowing down connects powerfully with a discussion I led in September in an IABC webinar on redefining what work means. The answer may not be doing more or moving faster, but taking the time to notice, reflect, and realign what we do with what really matters. Martin’s story feels like a practical expression of that—an intentional deceleration that invites us to think more deeply about purpose and pace in our professional lives. As we unpack his post today, perhaps the real question isn’t just about how communicators keep up with AI. It’s about how we decide when to slow down, how to add meaning amid abundance, and what to let go of so our work and our thinking can stay human. With everyone producing AI-related content, Martin’s pivot reflects a move from volume to value. How can communicators preserve credibility and originality when audiences are saturated with near-identical insights? Martin’s post isn’t just about pausing; it’s about reclaiming agency in how we learn, create, and lead with AI. It invites communicators to redefine productivity, not by the speed of output, but by the depth of thought.
Shel Holtz:
I was very taken with Martin’s post and I’m very happy for him that he has come to terms with this and made decisions that will make him happier. It’s not for me, however. I see this notion of slowing down in two ways. One is slowing down so you can take the time to produce quality output. I confess that I use AI to write things I don’t want to take the time to do. They’re not important enough that I need to be the author; they’re fairly rote. I give notes to AI, it cranks out a serviceable first draft, and I take 20–40 minutes to edit and revise rather than spending two or three hours. Nobody has ever questioned the provenance of these pieces. In fact, I’ve gotten praise for some of them, which makes me chuckle. I do this so I can appreciate the blank page and write the things where it’s important that it be me at the keyboard, in my voice.
The other way to look at this is simply slowing down in life. I appreciate people who want and need to do that—and especially people who find a way to accommodate that need. I want to go faster myself; I want more hours in the day to get more done. If I’m not being productive, I feel at loose ends and anxious, wondering what I can be doing now. Just this weekend, after a sleepless night, I wandered into the garage, noticed a box out of place, and the next thing I knew I’d been straightening the garage for an hour. That’s my personality: I have to be doing something almost all the time. So I think you have to look at what’s right for you. Don’t feel pressured to slow down if that’s not going to make you happy.
Martin’s prescription is a good one for people who are overloaded and overwhelmed, and I understand his interest in dialing back commentary on AI when so many are producing similar content. He has a unique perspective—he’s a thought leader and tends to be ahead of the curve—so I’d continue to pay attention to him. But scroll through LinkedIn and two-thirds of the posts seem to be about some angle on AI.
Neville Hobson:
I understand you, Shel. I’ve known you a long time, so I know that anxiousness about not doing stuff. I was like that. Each person is different; this applies to individuals, not a universal formula. The starting point is wanting to change—professionally, personally, or both. I went through that after moving a year ago to a rural part of southwest England from a busy urban area. That change catalyzed others, which is why Martin’s post resonated so strongly with me.
Now I take more time deciding what I need and want to do—not everything must be done right now. I still work late sometimes, but only when I feel like it, not every day. I’ve also been able to make changes and still pay the bills. I spend more time with my grandkids and my wife, take days off in the work week, and sometimes work on Sunday—with purpose. Going back to the IABC webinar on redesigning your work and life, doing things such as what Martin described fits into that bucket. We ran a quick poll at the start: “How aligned do you feel with your current work?” A third (33%) said, “I feel aligned—my work reflects my values and energy.” Sixty percent said, “I’m actively rethinking how and why I work.” Another 7% felt stuck or disconnected from what matters. So only a third felt aligned; nearly two-thirds are rethinking how and why they work, and some feel completely disconnected. In the chat, people said they now feel the need to question things they hadn’t been questioning—and that they aren’t happy.
Not everyone has the luxury to change, but there are things you can control. This isn’t a whole-life overhaul. It can be incremental—small steps. What Martin’s doing, as I interpret his post, is a small-step pivot in how he publishes his newsletter and updates his LinkedIn courses. He says he’s slowing his publishing frequency from twice a month to a less regular schedule—perhaps monthly or whenever something genuinely strikes him. Most comments I’ve seen are supportive.
He offers useful tips. “Stop relying on AI as a crutch” is a good one—don’t use it for the easy way out; use it to push you in unexpected directions. Embrace the blank page. For me, starting and stopping, taking a walk, doing something else often unlocks progress. Another tip I liked: aim for higher quality than before. Do less, take more time, and produce better work. That makes sense—rushing often reduces quality. Of course, that assumes you can do it.
Shel Holtz:
We’ve got deadlines out here in the real world, you know?
Neville Hobson:
You do, and you’ll still have them. But not everything has to be dropped immediately. This empowers you to choose: that report due in ten days doesn’t have to be mapped out right now unless you want or need to. The key is deciding what you want to do about your life, and measuring it in a quantifiable way if that helps. Martin seems to have made an honest assessment of where he’s at, prompting further thoughts for his training and writing. Do you think this represents the visible front end of a trend among communicators choosing depth over frequency, slowing down to produce better-quality work? I’m not sure I see it broadly, but I wouldn’t be surprised if it’s behind what some people are doing.
Shel Holtz:
You could connect this treatise with what we’re seeing from employees pushing back on full-time return to the office and preferring hybrid schedules. People want to be with their kids, care for elderly parents, or simply avoid traffic—as I did this morning behind a gnarly accident, hence our late recording. People are reevaluating how they work, and organizations will have to come to terms with it. Look at Glassdoor reviews and a common complaint is being worked to death—long hours and expectations that you’ll put in the time.
There’s another way to interpret this. Martin was on a panel I moderated at an IABC World Conference in Toronto a couple of years ago on AI. Someone asked about using AI to write—is that acceptable? Martin said most of what we do in communications just has to be “good enough.” If AI writes “good enough,” why is that a problem if it frees up time for other work?
The way I choose to interpret Martin’s post is: I don’t have to slow down across the board. I can slow down one thing to make time for another. Example: there are two of us in the communications department where I work, and we were swamped. As people learned what we could do, requests multiplied. Juggling them without letting anything fall through the cracks became hard. My boss suggested two things. First, open a ticket system. We use ServiceNow, so now anyone who calls with a request is asked to open a ticket. We can better manage work, keep notes on what’s been done, questions asked, and promises made. Second, we had to learn to say no. If we want more resources, start saying no more often.
We have—“Sorry, we don’t have time for that. It’s important to you, but it doesn’t rise to a top priority for our department based on leadership expectations.” Saying no frees time. Does that time fill up? Yes—but it fills with higher-priority, more interesting, more relevant work. So slowing down doesn’t necessarily mean slowing down overall. It means slowing one thing to make time for another—maybe more time with family. In my case, it’s more time on the work that matters and less on the work that doesn’t. There are multiple ways to interpret and apply what Martin has presented.
Neville Hobson:
I agree, Shel. To repeat your point, each of us is different. On saying no, I learned that too and put it into practice this year by turning down opportunities for paid work because they weren’t the right fit or interesting enough. I felt great. Instead, I learned, read, wrote blog posts—things I wouldn’t have had time for. I saw my grandkids more; my wife and I went out to lunch and visited places. I sleep well now without the worries and stress. I do miss some things, but the balance now outweighs any of that by a big margin.
Martin looks ahead and wonders what comes next once we’ve mastered the tools. When AI becomes mundane and the question shifts to meaning and impact, what might the next conversation look like? He asks how communicators can lead the dialogue on quality, ethics, and human contribution in a world where automation is taken for granted. That’s at the heart of much of what we’ve been discussing—and what I want to keep exploring. What do we say when automation is taken for granted? What are your thoughts?
Shel Holtz:
I think it’s consistent with the old Melbourne Mandate from the Global Alliance for Public Relations and Communication Management. It positioned communication at the center of the organization as, more or less, the conscience of the company. What the conscience focuses on varies with circumstances, but maintaining humanity in the organization fits that model well. We need to present leadership with data supporting the need to do that, along with ideas for how to do it without losing momentum. It fits squarely with that notion.
Neville Hobson:
We’ve shared some good thoughts here, Shel, and I’m curious what our listeners think. If anyone has a perspective—agreement or disagreement—let us know.
Shel Holtz:
You know where to reach us. That’ll be a 30 for this episode of For Immediate Release.
The post FIR #487: Beyond the Churn — Slower Publishing, Deeper Thinking, Better Outcomes appeared first on FIR Podcast Network.
Sentiment analysis has become a default metric for communicators. If sentiment is positive, trust must be high. But if your company’s words are diverging from its actions, trust could be eroding while sentiment remains constant. You won’t know until it’s too late. The new metric to consider is “trust velocity.” Neville and Shel unpack it in this monthly long-form episode for October 2025. Also in this episode:
In his Tech Report, Dan York reports on AI browsers and on Mastodon’s approach to Bluesky-style starter packs, implemented in a consent-based manner.
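The contrast the episode summary draws, flat sentiment masking eroding trust, can be sketched numerically. The following is purely illustrative: the `velocity` helper, the definition of trust velocity as the average period-over-period change in a trust score, and all of the numbers are invented for this example, not taken from the episode or any measurement framework.

```python
# Illustrative sketch only: "trust velocity" modeled as the average
# period-over-period change in a series of scores. All data invented.

def velocity(series):
    """Average change per period across a series of scores."""
    if len(series) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(series, series[1:])]
    return sum(deltas) / len(deltas)

# Hypothetical monthly scores on a 0-100 scale.
sentiment = [72, 71, 73, 72, 72, 73]   # looks stable month to month
trust     = [80, 77, 73, 68, 62, 55]   # quietly eroding the whole time

print(velocity(sentiment))  # near zero: sentiment barely moves
print(velocity(trust))      # clearly negative: trust is falling fast
```

The point of the sketch is that a level metric (sentiment) and a rate-of-change metric (velocity) can tell opposite stories over the same period, which is the gap the episode argues communicators should watch for.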
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, October 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Raw Transcript
Neville Hobson: Hi everybody, and welcome to For Immediate Release. This is episode 486, the monthly long-form episode, for October 2025. I’m Neville Hobson in the UK.
Shel Holtz: I’m Shel Holtz in the U.S., and we have a jam-packed episode for you today, so we’re going to jump right into it. Before we get into anything substantive, though, I do want to ask that if you have any comments to share with us (and we always hope that you do; we have a number of comments to share with you today), please leave them on LinkedIn, where we share our posts, or on Facebook, where we share our posts. You could even do this on Threads or, what’s it, not WhatsApp, Bluesky. That’s the one, Bluesky. Yeah. You can send email to fircomments at gmail.com and attach an audio file; we haven’t had one of those in forever. You can record those audio files on our website at firpodcastnetwork.com, and you can always leave your comments right there on the show posts at firpodcastnetwork.com.
Neville Hobson: Bluesky.
Shel Holtz: And we do appreciate your ratings and reviews as well. So let’s get started with the rundown of our episodes from the last monthly episode in September till now. Before we do that, though, we had a couple of comments come in for some earlier episodes. You know, I was thinking, geez, these aren’t the episodes that we did between the last monthly and now. Then it occurred to me: it’s a podcast. People can listen whenever they want. And apparently that’s what Sally Getch did. She said, yes, I am months behind listening to episodes, but I can’t believe no one mentioned the public library website as a way to get around paywalls. Contra Costa County Library gives you access to a ton of newspapers and magazines. So if you’re trying to get around a paywall, check out your local library’s website; they may provide access. And I have to confess, that’s not something that had occurred to me. Then, for episode 478, we have a comment from Steve Davis in Australia. Steve Davis says, Hi, Shel and Neville. I haven’t heard the term AI doomer before, and I don’t know if it’s right for me or not. We’ve been integrating AI tools for about two years in our practice, but we’re very focused on integrating them in a nuanced way. Maintaining the primacy of the human aspect of our creative and marketing business is key, and it’s what brings a lot of value.
Neville Hobson: Me neither.
Shel Holtz: AI tools and particularly large language models helped me do some grunt work in pulling some of our ideas together, but always on a short leash. Because of the name of our business, we have a great affinity with Oscar Wilde: there’s only one thing worse than being talked about, and that’s not being talked about. So I start every workshop with one of his quotes. However, at a recent one, using AI in a thoughtful way, I actually wrote a song for Oscar to sing. You’re welcome to use any or all of this if it helps illustrate an approach to thinking about our charming little robots. I have a couple of thoughts on this before I get to his comment on episode 479. The first is: thank you for that little ditty, Steve. Rather than the usual music we use to play out the episode, we’ll play that, and we will include a link to your musical channel on YouTube in the show notes. As for the term “AI doomer”: these are the people who believe that AI is going to end humanity. And P-doom is the percentage of risk you think there is around the potential for AI to end the world. In AI circles, they tend to ask each other, “What’s your P-doom?” And everybody knows what that means. Oh, 70%, right? That’s what that means. I don’t necessarily subscribe to this concept, although I did listen to a podcast with a guy who’s been working in AI since the 1990s, and he has come around to believing very strongly that this is a likely scenario. So it’s always something worth considering.
Neville Hobson: It’s totally new to me. I’ve not heard of that term before.
Shel Holtz: It may be an American term that we bandy around here, although I hear it on podcasts all the time, and you can listen to those anywhere, right? For episode 479, Steve writes, “Hi guys, I’m not sure you get these emails.” That’s because I only check our email once a month, before this episode; we don’t do comments during our short-form midweek episodes. So Steve, that’s why you didn’t hear us reference it, but...
Neville Hobson: That’s true.
Shel Holtz: We do appreciate these. Steve says, “I’ve just been listening to episode 479 and the noble idea of creating a community site to get authentic content and attract LLMs. The problem is that increasingly people are contributing AI-generated content in these places in lieu of actually writing content themselves. LinkedIn has basically become a cesspool. I hold little hope for new communities to thrive unless there is some mindset shift or mechanism to ensure what’s being shared is being communicated directly by humans. If not, these community projects will become yet another source of plastic information. That’s probably not as harsh a reading of the situation as the Oscar Wilde song I sent you last week, but it shares some common ground.” To that, I would only say: I guess your experience on LinkedIn is based on who you follow, because most of the people I follow are definitely still writing their own content, and I’m able to speed through anything that looks disingenuous or, as you say, Steve, plastic. But I also think those communities in some regard are alternatives to being found in search engines. Building the community and having those people engage in conversation may, I think, find its way into LLMs, but it’s also an alternative for getting your content out to people, as opposed to having them go to the ten blue links on the first Google search results page. Just a thought.
Neville Hobson: Makes sense? Totally. Great comments, Steve, and the song was really super; I enjoyed listening to that, a good voice singing it. So let’s look at the few episodes we’ve done since the last monthly. That was episode 482, which we published on the 29th of September. Like all our long-form episodes, it had six topics; that’s why they take long. The lead story in this one was “work slop.” It’s kind of like AI slop, except in the workplace: the latest term for low-quality, AI-generated content in the workplace that looks professional but lacks real substance. We explored the sources of this, how big a problem it really is, and what can be done to overcome it. A couple of other topics in that episode are notable to mention. Chris Hewer is at work on a manifesto for the H Corporation, and he held an online session, which I participated in, for people interested in this, which is, in a sense, about making organizations human-centered. It needs a lengthier narrative to go along with it to help you understand what it is, but he covered that in what we commented on: a lengthy summary post about the recent online discussion leading up to the H Corporation. It’s definitely worth listening to that. And then one other...
Shel Holtz: Well, we have a comment, actually, from... I’m sorry, I’ll mark a time code. I thought you were moving on to the next episode, and we have a Chris Hewer comment.
Neville Hobson: Okay, okay. No, no, I was going to mention one of the other topics we talked about. That’s okay; you’ll have to do a bit of tricky editing here, because I’ll just go straight into mentioning it. Okay. One other topic we talked about, which we’ve covered a few times on the podcast: communicators everywhere continue to predict the demise of the humble press release, but one PR leader has had a very different experience. With that intriguing snippet, you can dive in and take a listen. So that was 482.
Shel Holtz: And we had a comment from Chris Hewer, in fact, thanking us for adding so much incredible value to the conversation on the H Corp Manifesto: “I am truly honored to have had your friendship and support now for almost 20 years, but in this big moment, the stakes couldn’t be higher. Particularly important in our efforts to define the human-centric organization is a point you raised that I hadn’t properly considered. The role of communicators in all this is to ensure that the attention of leadership and stakeholders is placed beyond the short-term efficiency gains and onto the impacts of tomorrow: not only in terms of thinking about the unintended consequences of layoffs and RIFs, such as those experienced by Duolingo and Klarna earlier this year, but also in terms of the longer-term consequences of markets losing customers because they don’t have income to buy the organization’s products or services. Maybe we can collaborate on a kit for communicators to lead or host some of these conversations internally. We’ve already created a conversation guide to ethical AI integration stemming from the work Shel and others have done as research fellows of the Team Flow Institute, which we can use to get started.” Neville, you replied to that. You said, “Thanks. Communicators are ideally positioned to frame these discussions in ways that resonate with both leadership and employees, ensuring the focus isn’t limited to today’s efficiency gains but also encompasses the longer-term human and societal impacts. It’s about helping organizations see beyond immediate cost savings and recognize the broader consequences of their choices for their people, their markets, and their reputations. That perspective feels more important than ever in shaping how AI is integrated into the workplace.”
Neville Hobson: I forgot I’d left that comment. Thanks for reading it out. Yeah, that was a good comment from Chris, I must admit. You sounded quite eloquent saying all those things. So thanks very much for that comment, Chris. Moving on to the next episode, 483, which we published on October the 9th. This was a very timely one, and it relates to something we’re going to talk about a bit later too: the consequences of political intervention in topical business issues. This concerned President Trump, with the health secretary, Robert F. Kennedy Jr., declaring that Tylenol, a product of Kenvue these days (the brand has changed hands quite a bit over the years), leads to autism in children when taken by mothers during pregnancy. As you might expect, the reaction to that was almost universally, don’t be stupid, it doesn’t do that at all. And I noticed some of the commentary from the medical profession was a little more forthright than in the past, when they tended to be very deferential to Trump so as not to offend him. That was different this time, it seemed to me. Nevertheless, we talked about that crisis, and made reference to the big one for Tylenol back in the 1980s, the poison tampering of the products in Chicago that led to a number of deaths, and how they recovered from that. Okay, that was then, this is now, but they still did well, as we noted: the stock dropped dramatically, then recovered after one day, thanks largely, we said, to Tylenol’s savvy, almost perfect response to the crisis. We don’t have a comment on that one, so let me move on to 484. This was an interesting one: we talked about the public relations industry maybe having its Tilly Norwood moment. It’s not front of mind for everyone these days, but you might remember Tilly Norwood is a manufactured AI actress who caused quite a kerfuffle.
When it was unveiled, the uncanny realism was quite extraordinary. So is this the Tilly Norwood moment for the PR industry, with the introduction of Olivia Brown, a 100% AI PR agent that will handle all the steps of producing, distributing, and following up on a press release? Is this PR’s future, or just part of it, we asked. The debate is still going on about that, and I believe we have a comment on this one, Shel. Two, okay.
Shel Holtz: We have two, the first from Sally Slater, who says, this is so fascinating. I had brief aspirations of trying to automate the whole cycle end to end, but chatted with some journalist friends first. They hell-no’d real quick. Journalists in particular are sensitive to AI disruption. Their jobs are as much at risk as, if not more than, those of comms and content writing pros. So they really don’t like the idea of dealing with a robot instead of a human. Which is interesting, because one of the items I toyed with making one of my reports today says that the research shows, and this is in the US, that journalists are fine with PR people using AI in their pitches and in their relations with them. By and large; it’s not 100%, but it’s up there. You replied to Sally’s comment, Neville, so you get to hear your words in my voice again. That’s such an insightful point, Sally. It’s easy to frame this as an efficiency debate for communicators, but your comment highlights the other side: the impact on journalists themselves. The erosion of trust isn’t only about what we send, but also about who journalists are dealing with. Once they sense that they’re talking to a robot instead of a human, the relationship changes entirely. And when that robot is reportedly relentless in its follow-ups, there’s really no escape from the machine. And then we had a comment from Andy Green, who said, I can envisage many digital agencies embracing this to cover what they perceive as the PR dimension of a campaign. These people think in numbers, not stories, and don’t understand societal issues.
Neville Hobson: Yeah, it’s a big topic. The bit that tends to excite people in this particular story, as I’ve seen other people commenting on it, is that this is an end-to-end solution in a sense: you tell the virtual PR agent what your campaign is all about, and it goes away, plans it, prepares the press release, and handles all the follow-up. And therein lies the interesting bit, because I’ve seen the words "relentless follow-up" used to describe it. It will not stop. And that’s the bit where I’ve seen a number of journalists saying, I’ve got to find a way to ditch this damn thing. That’s not really what they’re expecting here. And I can see that offending a lot of people, you know. But is this part of the future? Again, the debate is still ongoing. It’s a great topic, though, isn’t it?
Shel Holtz: It is.
Neville Hobson: So then 485 is the last one we did before this monthly episode. We published it on October the 21st. Is it time to stop trying to go viral? we asked in that one. Crafting content with the intent of going viral has been part of the communication playbook for more than a decade. There was never a guaranteed approach to catching this lightning in a bottle, but that didn’t stop marketers and PR practitioners from trying. We said this effort is increasingly futile. We talked about how several marketing influencers have suggested that it’s time to move on from producing content specifically in the hope that it will go viral. We shared some data points on that and debated whether going viral should remain a communication goal. We disagreed on a number of issues here, but I think we were aligned on the word viral being the confusion point, because if we take viral out and talk about just the marketing aspect, you’re trying to achieve an outcome from something that is more than just a viral message. We talked about that at some length in this episode. So is it time to stop trying to go viral? We said yes, basically, didn’t we?
Shel Holtz: We did, and Katie Howell agreed in a comment. She said, platforms already reward return visits over one-off reach, and the clever brands are catching up. If your brief says go viral, you’re chasing a metric that won’t help you keep your job. Repeat engagement with the right people is the proper goal. Less glamorous, miles more useful. And another comment from Andy Green: good clarification on strategies, but we also need to recognize that viral, a.k.a. meme-friendly, is at the heart of effective communication. Also, greater recognition of the impact of zeitgeist. Check out Steven Pinker’s latest book, When Everyone Knows.
Neville Hobson: Good comments. No question.
Shel Holtz: Yeah, and thanks everybody for your comments. They really contribute a lot to these monthly episodes, so keep them coming.
Neville Hobson: Yeah, we’ve had discussion that is not part of our kind of rundown list of topics. So it’s really good to have this kind of off the cuff stuff entering the discussion.
Shel Holtz: Very quickly, I just want to let everyone know that the latest episode of Circle of Fellows is up on the FIR Podcast Network. This is the monthly panel discussion among fellows of the International Association of Business Communicators. This was a really interesting discussion about the fact that the role of the communicator keeps evolving, but the goal of supporting the business and the business plan remains absolutely fixed. And how do you square that particular circle? We had Laurie Dawkins, Mike Klein, Robin McCasland, and Martha Lozichka on this episode. It is up now. We just recorded it this past Thursday, and it’s available for your listening pleasure. Episode 122 is on November 20th at 5 p.m. Eastern time. Another interesting topic: it’s about preparing the next generation of communicators. And we will have five fellows in addition to me moderating this episode: Diane Gajewski, Sue Heumann, Theomari Karamanis, Letitia Narvez, and Jennifer Wah. So put that on your calendars, November 20th at 5 p.m., for that conversation. And now we’re ready to jump into our stories for this episode, but first, we’re going to try to sell you something. Social media and the e-tension. E-tension. Everything gets an E in front of it now. Social media and the attention economy are both changing how we communicate externally and how we engage internally. There’s a really fascinating article I read that asks whether marketers should lean into rage bait, the use of content designed to provoke anger and engagement. The second article I read on this topic, also very fascinating, looks at how consuming social media content, including contentious posts and rage-bait-style content, is affecting the mood of people at work, how it’s affecting productivity, and how organizations might respond.
So there’s something here for those of you who are considering using rage bait in your marketing, and something here for those of you who deal with internal communications, all about this issue of rage bait. Let’s start with the external lens. The piece from Morning Brew, via their brand strategy column, is titled Should Marketers Lean Into Rage Bait? It makes the basic point that in an era where attention is the scarce commodity, brands are increasingly tempted, and some are already experimenting, with campaigns designed to provoke outrage. They stir up audience reaction, they polarize conversations, and all of this drives engagement. The article lists examples: brands like American Eagle, e.l.f., and The Ordinary among them have been accused of, or credited with, rage baiting in various forms. The proposition is actually compelling. It does drive clicks, shares, and visibility. But the caution coming from the article is just as strong: not all press is good press. Building exclusion or anger into your brand could carry long-term risks to trust, reputation, employee sentiment, and ultimately consumer loyalty. Now, switch to the internal lens. This is an article from HR Dive titled How Thirst Traps and Rage Bait Affect Workers on the Clock. It reports on research from the Rutgers School of Management and Labor Relations. Researchers surveyed workers over two studies, asking them to reflect on the most salient social media posts they saw that day, how those posts made them feel, and how productive they were. The results? When people saw fit pics or family posts, they felt more self-assured and engaged. But when they consumed contentious content, politics, rage-bait-style posts, they felt anxious, withdrawn from coworkers, and less likely to engage productively. The takeaway: content that triggers emotional turbulence may carry hidden organizational costs, from distracted employees to reduced collaboration, maybe even increased attrition.
So what should organizational communicators, that would be all of you, do with this convergence? Let me offer four action points. First, be intentional about how your external messaging may echo internally. If your brand is experimenting with provocative campaigns, you know, calling out norms, stirring up debate, maybe even courting a little outrage, ask yourself, what are the internal ripple effects? An external campaign that invites polarizing reactions may energize some audiences, and I would argue possibly just in the short term, but internally it could signal to employees, our company is comfortable being controversial. That could boost excitement for some but raise discomfort for others. Communicators need to work with HR and talent teams to assess sentiment inside the organization. How are employees reacting, especially those who engage on social media during the day? Are we inadvertently creating internal friction or disengagement by the tone we adopt externally? Second, create and enforce thoughtful social media consumption policies and supports. The HR Dive article suggests a practical approach: treating social media use at work like a smoke break. Designate pause time, especially during heavy project phases, because volatile content can hijack someone’s attention. As communicators, you should partner with HR and IT to provide guidance and structure. For example, quiet hours of connectivity, guidelines for personal device usage during high-focus times, and training about the emotional contagion of negative or polarizing social content. Document the rationale: productivity, wellbeing, and team cohesion. Third, align external and internal narratives around emotional tone and brand purpose. If your brand decides that provocative is an acceptable part of your tone, then the story you tell employees must reconcile with culture: yes, we provoke debate, but we do so respectfully, grounded in purpose, and we value inclusive dialogue.
That alignment prevents cognitive dissonance, you know, our brand is edgy externally but our culture is conservative internally. And it supports trust, which we’ll be talking about in greater detail later. It also gives you a stronger foundation if things go sideways. You’re always defining the why behind your tone, and you’ve prepared employees for what that means. And fourth, monitor, measure, and respond, both externally and internally. Externally, track engagement metrics, brand sentiment, media coverage, and social listening signals after campaigns that lean into controversy. Internally, partner with HR to monitor employee sentiment; surveys and pulse checks would be ways to do this, along with collaboration indicators, retention metrics, maybe productivity signals tied to social media exposure. If a campaign triggers backlash, you should have a response plan. How will you communicate internally? How will you support teams to cope with any fallout? How will you pivot? Look, we’re working in a moment of heightened sensitivity, clearly: socially, culturally, digitally. The attention economy is powerful, but the rule book is not what it used to be. What once was any attention is good attention is not universally true anymore. For external campaigns, courting outrage may bring short-term visibility, but without a thoughtful internal strategy, it may undermine employee morale, productivity, and brand trust. And inside the organization, the invisible cost of consuming polarizing social content is real and measurable. So the role of the organizational communicator becomes as much about internal ecosystem design as external narrative design. You’re not just pitching or broadcasting. You’re orchestrating tone, channel, culture, and consequence.
If you lean into controversial creative, do so with your eyes open. Prepare your employees, monitor sentiment, align culture, and build in the feedback loops that show you’re aware of the risk and capable of navigating it.
Neville Hobson: Yeah, interesting, listening to what you’re saying, Shel. My own thought is I have almost an inbuilt instinct that this is a bad thing, rage bait. And indeed, reading the Marketing Brew piece you mentioned, a comment from a TikTok creator called Dulma Altan, who posts about business strategy, struck me as probably summarizing the whole thing. If rage bait becomes a widely adopted strategy, she says, she anticipates that audiences might unfollow brands to avoid seeing it. And there, to me, is a big alarm bell. I recognize the interest many marketers have in provocative kinds of content, whatever we call it, rage bait, okay. An example that I suspect is very topical is American Eagle’s controversial Sydney Sweeney campaign for jeans. Just one recent example, Marketing Brew says, that got plenty of people talking. Company execs defended the campaign, which drew backlash over the summer for its reference to genetics. And the chief marketing officer, Craig Brommers, told Marketing Brew that the campaign wasn’t designed as intentional rage bait, but instead aimed at sparking a conversation about optimism, confidence, and self-expression. You can spin it any way you want, frankly, but he says so far the company has reported initially positive results. So therefore I could say, okay, basically what you’re saying is the end justifies the means then, right? Is that what you’re saying? You got great results for something that offended and upset a lot of people, therefore it’s okay, is what I hear in your message. I think you’re dead wrong, mate. That’s really not what you should be doing at all. So I wouldn’t support this at all. I don’t see any redeeming feature of this that has honor attached to it at all. You know, there are other examples we could talk about, but the American Eagle one I did find quite interesting, how that played out. And there are other examples too.
And it’s interesting, the angle you’ve brought to it, the impact it will have on employees. I often wonder whether marketers really take that into account when they do some of these things. So this is not good, in my opinion.
Shel Holtz: Yeah, I agree with you. I don’t think marketers consider the impact of any of their campaigns on employees. Employees are not their audience, although employees are, let’s face it, expected to reinforce and support the message of marketing. So your employer brand could take a hit unintentionally from something like this. But you also referenced from that article that the people who are offended by it could stop following you. What that means is that the people who are still following you are the people who are likely to succumb to this kind of rage, who enjoy getting worked up and angry and hurling invective at other people in the conversation. Is that the market you really want for your product or service? So, yeah, I think there are some long-term consequences of engaging in this kind of marketing that you really have to sit back and consider. It takes a very strategic, critical-thinking approach to take on this particular type of marketing activity.
Neville Hobson: Yeah. And again, going back to Marketing Brew’s article, they had some interesting comments from people they talked to about this. One in particular struck me, from Megan Morris, the co-founder and CEO of a creative agency called Full Fat. Her comment was quite interesting. If a brand is intent on courting controversy, she pointed to a Doritos campaign from early this year, in which the brand implied it might change its signature triangle chip shape into a square, as an exemplary example. The campaign drew some backlash and plenty of online chatter, but all in good fun. That is really nicely done rage bait, if there is such a thing, said Megan. Trust me, there is no such thing as nicely done rage bait, okay? It’s nothing serious, said Morris; it’s not going to create any emotional or behavioral triggers.
Shel Holtz: It’s faux rage bait. That’s what it is.
Neville Hobson: You could do that. Why would you not do that? Why would you go ahead and do something that seriously offends people and risks damage to your brand among your audiences across social channels and elsewhere, who could drop you and not talk about you except in negative terms? Why would you want to do that? This is good, but let’s not call it rage bait, please, no matter how nicely done it was. So that was a good one, Shel. So let’s talk about AI, actually. We haven’t talked about AI yet. Yeah, I found this a very interesting one, and it’s in the financial services industry. I doubt it’s unique to this particular institution, Lloyds Banking Group, but they’re doing something that sends a very public signal about AI, starting at the top.
Shel Holtz: AI, yeah, we should talk a little bit about AI.
Neville Hobson: The CEO Charlie Nunn and the entire executive team are enrolled in a six-month, 80-hour generative AI program designed with educational technology company Cambridge Spark and University of Cambridge experts with hundreds of senior leaders also in scope and more than 110 already through the course. Lloyd says that the program blends hands-on sessions, virtual master classes, and real-world projects with potential future Gen.AI use cases put forward to progress to pilot phase. These include using Gen.AI to support market insights, customer relationship management integration for commercial customers, freeing up time for strategic high-value client engagement, and improving overall customer experience and retention. This sits alongside a group-wide scale-up of Microsoft 365 Copilot. So it’s not training in a vacuum, but part of a platform commitment and an enterprise change program. Reactions unsurprisingly are mixed. Supporters say you cannot lead what you do not understand. Executive fluency creates permission, governance and budget for real change. Critics call it AI theater, a costly slow course in a fast moving field and argue a sharp briefing cadence would beat a six month syllabus. Both can be true. Literacy without delivery is performative. Delivery without literacy is reckless. If this is more than theater, we’ll see it in KPIs, resolution speed, onboarding time, and front-line productivity, not in slide decks. So glancing at some of the reactions on LinkedIn illustrates all of this. One supportive post on the business network said, I think this is a great move by the bank. The only way to understand and gain the full benefit from AI is a full understanding of the benefits and problems from the top downwards. One commenter said, being a little deliberately controversial, I do wonder if some of the staff and directors will return from these courses with a very different view of AI’s potential, not just for good, but for harm too. 
Other comments included empowering leadership with AI literacy as a foundation of sustainable innovation, and a bold move that sets a strong message internally and to the industry. But another post is more critical, saying that Lloyds’ AI training courses are a waste of time and money, and a weak attempt at demonstrating that they’re staying up to speed on tech developments. What they learn in month one will be mostly obsolete by month six. One commenter on this post said, most AI education is not as good as what you’d get if you just asked the AI to teach you about it. Well, according to Lloyds, their entire executive committee and senior leadership team are expected to complete the program by the end of 2026. The communication question is whether this becomes a credible narrative of capability and control, or raises expectations that the bank cannot possibly meet on customer experience, risk, or productivity. Is this smart leadership signaling that builds real execution muscle, or a well-packaged promise that will be hard to cash? I do wonder.
Shel Holtz: I think this is a great idea. I am 100% behind this. We had this conversation on Circle of Fellows the day before yesterday. Remember, the theme was that the role of the communicator remains fixed: it is to support organizational objectives and goals. However, the way we do this continues to evolve. And one of the questions that came up was, how do you learn new technologies? When you are a chief communication officer, for example, you’re pretty busy. How do you learn new technologies? And one of the answers was, well, reverse mentoring. I don’t think reverse mentoring is adequate for AI. The implications for business, for your business, are too huge to rely on a young employee saying, here’s how I use it. Isn’t this cool? Look what you can do. I think you need to understand the strategic implications here. I think you need to understand how this is going to affect the way your business is managed, the way the work gets done, the way your organization interacts with customers and other stakeholders. How are things going to change? You know, we’ve talked about a lot of these implications in previous episodes. The fact that there are going to be job displacements. The fact that entry-level jobs are going to change dramatically. If 75% of the work that an entry-level person does, because it tends to be more grunt work, can be done by AI, what does that do to entry-level jobs? And you have to have entry-level jobs: where are your future higher-level employees going to come from, if not from those entry-level jobs? So there’s a lot of thinking that has to be done around this. And what I see in most organizations is training being pushed down to the people who do the work, not the people at the senior level of the organization. They’re the ones endorsing training for everyone else. They’re not learning it themselves. And how can they lead the organization through this kind of change? And let’s be clear, we’ve never seen the kind of change that’s coming before.
And I think this is the way to do that. You get your leaders completely immersed in it and trained in all the dimensions that are going to have an impact on your organization. I think it should be a requirement. And I think it’s actually kind of sad that there’s an article about an organization that’s doing it, because that means that most of them are not.
Neville Hobson: Yeah, I’m with you on that, and I agree fully that this is an extremely good idea, a very good initiative. When I was researching and looking into it, it became clear to me that this was truly immersive, up and down every hierarchical level in the organization. So at the end of this 80-hour program over a six-month period, the leadership team and the whole executive committee are going to know what it is, how it works, and what the impacts are in any number of areas throughout the entire business, and understand the magnitude of what this means. And it puts them in a very good position, I think, to ask questions, to support it, to debate it. If they don’t want to support it, if they don’t think it’s very good, that’ll stimulate further discussion. So it’s kind of like they are on the same level as the folks lower down the organization who are doing all that grunt work you mentioned, and they will understand more about the value of that and what support those people need elsewhere in the organization. So it is interesting, and it’s totally unsurprising to see some of the criticisms out there; that’s just illustrative of people’s different opinions. And I think the fact they’ve done this, and then the scale-up of Microsoft 365 Copilot in parallel with this in a wider part of the organization, shows they’re taking this extremely, extremely seriously. I’m sure other institutions in the financial industry are doing similar things, I bet, except Lloyds wrote about it themselves. I saw media reports on it too, and Lloyds’ press releases about some of the stuff they’re doing explained extremely well what this is all about. So I think it is worth paying attention to how they get on with this, and it’s good that they have shared this information.
Shel Holtz: I’m baffled at the notion that there’s criticism of this. You don’t want the leaders of the organization to be up to speed on the technology that’s going to change the entire world? I mean, come on. I just don’t get that. These are the people who are going to be guiding the organization through this change, and if they don’t understand it, they’re not going to guide the organization very well.
Neville Hobson: Go figure.
Shel Holtz: Yeah. Well, there’s a new report out from McKinsey. They are a report machine. They are a report factory. So there’s always going to be a new report out from McKinsey. This is a good one, though, and as we are so focused on AI and some of the other issues organizations face on a daily basis, it’s one that’s sort of bigger, and we may not be seeing the forest for the trees. The role of corporate affairs is being redefined in a world of rising geopolitical and geo-economic complexity. If your eyes glaze over, that is probably part of the problem, because if you work in communications, external affairs, or stakeholder relations, this really is something you need to pay attention to. They begin with something that we are all sensing right now: geopolitics is back, and in a big way. After decades of a relatively stable global order, we’re now seeing rising trade policy shifts, export and import controls, sanctions, regulatory fragmentation, and governments using economic tools in strategic ways. These changes are putting new pressure on companies, not just on operations or supply chains, but on how they engage externally, how they structure themselves, and how they tell their story. McKinsey’s research finds something really intriguing: while executives view trade policy change and geopolitical instability as major risks, only 28% say trade policy change is a top leadership priority, and just 15% say geopolitical instability is. So there’s a gap between risk and attention. That is where we step in. The article lays out a five-point playbook for upgrading the corporate affairs function. I want to walk you through these, and then we’ll talk about what you can do in practical terms. Number one is map the world: which geopolitical trends matter for your business, where you’re exposed, what the value at stake is, who the key stakeholders are. Many companies don’t quantify their exposure; McKinsey says only a small share do rigorous modeling.
Find your corner solutions, you know, your best case and worst case, so you know what you need to be prepared for. Next, hone your narrative and strategic offering. Once you know the terrain, you need to refine how you talk about it. McKinsey notes that language matters, for example, the difference between the word advocacy and the word lobbying. And leaders are increasingly expected to act as commercial diplomats. For communications people, this means your story can’t be generic. It needs to reflect the geopolitical context behind the business and tailor the message accordingly. Third, optimize your engagement. It’s not enough to hope that someone will listen. You need to engage proactively with regulators, governments, associations, and third parties, and choose the right level and channel. For example, you might find local, state, or provincial officials more relevant than national ones, or you might use a trade association forum rather than a press event. It’s about effectiveness, not volume. Fourth, adapt the function’s organizational structure. Corporate affairs can no longer be an above-the-line support activity. It needs to be embedded in the business units, aligned with key metrics, and operationally relevant. For communicators, that means moving from being message makers to being strategy partners with influence on business decisions and operational design. How much do we talk about wanting a seat at the table? Wanna know how to get one? Finally, the article calls for upgrading skills: analytics, geopolitical insight, and even AI. McKinsey notes that while AI and analytics can help, communicators must be paired with these AI tools in order to provide that human judgment. This means getting comfortable with new tools, but also building the strategic thinking around them. So what does this mean for you, the organizational communicator, right now? First, advocate for and help lead the map-the-world exercise.
Partner with your risk, operations, and strategy folks to identify the top geopolitical and trade policy issues that affect your organization. Which markets are exposed? Which supply chains cross sensitive jurisdictions? What’s at risk if things change? Number two, revisit your messaging to reflect these stories and risks. If your company is operating in geopolitically sensitive markets, your external and internal narrative should acknowledge that, and not simply assume a global story will work the same way everywhere. Choose language that resonates with regulators, governments, and local stakeholders. Point three, review your stakeholder engagement plan. Are we talking externally only when something goes wrong? Are we connecting with the right offices and arenas? Should we be attending different forums, using different channels, working with different third-party partners? This is your chance to make the engagement smarter. The fourth action point: elevate your role in the organization. Use this geopolitical context to show how communication really matters to the business’s license to operate. Work with business leaders. What do they need? What are the key risks for them? How can you help structure narrative, engagement, and intelligence so it’s not just a communication afterthought? And action point five, embrace new tools and insight. You don’t need to become a data scientist overnight, but you should start experimenting. Just one thought: use scenario planning for external affairs, build a what-if narrative for potential trade disruptions, or create a dashboard tracking emerging regulatory actions. We’re not just talking about a world where communications can add polish. We’re talking about a world where communications and external affairs, and by extension organizational communication, have to be strategic, agile, and grounded in real business context. As McKinsey puts it, your function can either shape the geo-economic environment or be shaped by it.
If you’re in communications, raise your hand and say, let’s upgrade how we do this. Let’s map our exposures, refine our narrative, engage more smartly, embed ourselves in the business, and build new capabilities. Because this change isn’t going to slow down anytime soon.
Neville Hobson: That’s a very interesting report from McKinsey, I think. You did a pretty good job in summarizing the whole thing and those very steps you mentioned. It’s hard to pick a topic that you didn’t cover. In fact, you covered them all. But the one that I would definitely reference is towards the end of the report, where it talked about leveraging AI. It made me think instantly of the Lloyds Bank executive training. So this talks about leading enterprises creating AI lighthouses. Now, that’s a new phrase to me, but I can visualize a lighthouse; I kind of get where they’re going with that. That incorporates analytical, generative, and agentic AI to reimagine their corporate affairs workflows. So, for example, it says an AI agent can automate a communication campaign cycle from setting the requirements to generating content to testing messages to tracking impacts. Teams can also use GenAI to create briefings, combining external sources of insight on stakeholders with internal insights on policy positions and lines to take. The shift is certainly taking hold, says McKinsey. So that’s just one example. But that illustrates to me that you need to have the thrust of the overall argument in this article made known within the organization, at the senior levels in particular, across the whole spectrum, so that it helps them understand why they need to do this. And I wouldn’t be surprised if this, or some form of this, or an element of this, is part of what Lloyds Bank is doing in the executive education program they’re running. You know, McKinsey quotes Sam Altman, the CEO of OpenAI, the makers of ChatGPT, among other things, who has gone so far as to predict, wait for it, that 95% of what marketers use agencies, strategists, and creative professionals for today will easily, nearly instantly, and at almost no cost be handled by AI. 
Now that brings in another element of these kinds of things that are popping up. I was in a, what do you call it? It’s not a webinar exactly. It was more a Zoom presentation via LinkedIn yesterday on changing the billable hours function in consulting firms, from hourly billing through to value-based charging. Now, I didn’t hear much new in that session, to be honest, but I keep hearing people talking about this topic now. And it’s something I wrote about two years ago when I was getting interested in it. How would you do this? I said to myself in a blog post. So I sort of said it to everyone, basically. How would you change your model suddenly? And yet recognizing that your clients, who are some of these leading enterprises, are realizing quite straightforwardly, very clearly, that, wait a minute, I’ve hired this person or this firm to work with us, and they’re charging us X by the hour, and yet 80% of that time they’ve got an AI doing it. So what are we paying for? So those questions are being asked. And indeed, some consultants are asking those questions: how do I change this before my client asks me to change it? So that’s part of the picture. So leveraging AI, to me, is very much part of that.
Shel Holtz: Absolutely. And at the speed with which these geopolitical and geo-economic events are happening, and the fallout is happening, we absolutely need to be using the tools that can help us stay on top of it and analyze the data that comes in. So I think the workflow recommendation matters, not just, gee, how can it help me write a headline or edit a press release? We really need to be looking at the data analytics capabilities and how agents can collect information, analyze it, and then do something with it on our behalf, with human guardrails, of course. But I agree with you. If this McKinsey article had come out a couple of months ago, I would say I’d bet those folks at Lloyds read this. But they’d already taken their action by the time this came out, which means that they’re even more on top of things than the people reading this and going, hey, I ought to be doing this.
Neville Hobson: Yeah. And I suppose the good thing to mention to conclude, particularly this 95% thing that Sam Altman talked about, is what McKinsey says next: however, as AI-gathered intelligence becomes increasingly commoditized, teams need to supplement it with direct knowledge of events only humans can provide. So that means you need to reinvent yourself slightly, I would say maybe more than slightly, to be positioned and perceived, and indeed deliver if you can, precisely on that. So you’re part of the, I’m going to say, the 5%. I mean, it’s not quite like that, but you’re someone with direct knowledge of events only the human can provide. So you will leverage AI to support you in delivering whatever it is you’re delivering to your client or your employer as part of this big picture of what you need to do in this new geopolitical era. So it’s no small feat, this. These are the kinds of warnings people talk about all the time, but it must be clear, I think, to most people. You get a sense that momentum is really building in this. We keep seeing articles and opinion pieces and reports, you name it, almost daily on this about AI, that about AI, along with the critics as well. So it’s a rather muddy picture to understand: who should I pay attention to with all this? But it’s relentless. It’s coming at us left, right, and center, and we need to be on top of that. Communicators are well placed to help their company navigate this.
Shel Holtz: Yeah, there’ve been a couple of posts lately. There was one from Chris Penn yesterday talking about how you can’t opt out of AI anymore because of the AI browsers, now that all of the browser providers are baking AI in. But I also see a lot of posts on LinkedIn, several every day, from marketers and communicators saying AI is never going to take our jobs because of this human requirement and that human requirement. They’re trying to make themselves feel better. And they should be doing exactly what you just said: reinventing themselves.
Neville Hobson: Yeah, it’s an essential thing to do now, and you need to get on top of that.
Shel Holtz: Thanks, Dan. I’m sure that was a great report. Neither of us have had an opportunity to listen to it yet. We’ll do that later. But the reports are always great. So I can’t imagine this one’s any different.
Neville Hobson: are. No, I’m sure it isn’t. Thanks, Dan. It’s good to have them. So this next topic is an intriguing one, I think. We’re going to take a look at a report by Provoke Media that editors of the magazine, plus invited experts they brought in, weighed in on corporate communication crises involving 14 organizations. So these included Tylenol, Nestle, JP Morgan, Cracker Barrel, Suntory, Intel, Meta, Adidas, Astronomer, Jet Two and more. We talked about Tylenol ourselves in episode 483 in early October. So 14 corporate comms crisis, that’s quite a bit and many will be familiar to you. The report reveals a world where reputation risk is increasingly shaped by behavior rather than policy. and where leadership conduct, cultural context, and digital amplification combine to test corporate credibility in real time. These crises share a clear pattern. Behavior lights the match, culture supplies the oxygen, and context decides how fast the fire spreads. The organizations that recovered fastest moves with speed and accountability, not legalistic hair splitting. They put people first, sought credible third party validation, and treated employees as a primary audience. Where leaders delayed, defended on technicalities, or ignored the cultural moment, issues lingered and reputations bled. We can’t discuss each of the 14 in detail as there just isn’t enough time. What we can do, I’m sorry, we would have been here till 10 o’clock tonight probably, Shale, I mean, it would have been too long. What we can do though is consider three of them to see what happened and what we can learn. And there’s a link in the show notes to Provoke’s detailed report. So what we’re to do is I’m going to outline the three one by one and in between, Shail and I will have a chat about each particular case as we go along. 
So we’ll start with Nestlé, where the CEO’s secret affair tested the corporate compass, a mild way of putting it, I think. An undisclosed relationship between CEO Laurent Freixe and a senior employee surfaced, a clear breach of the world’s largest food company’s code of conduct. It raised questions about power imbalance, conflicts of interest, and disclosure standards. The issue quickly moved from private conduct to public governance, forcing the board to weigh privacy against policy and to show whether values apply when they are inconvenient. Freixe resigned roughly a year into the role. Already suffering from a sales slump and bracing for new US tariffs, the Swiss company saw its share price dip after Freixe’s departure, compounding its underperformance. Investors, having seen two CEOs exit within a year and a one-third drop in the share price over five years, were not impressed. I think that’s putting it mildly, Shel, to be honest. So what do you think of these undisclosed relationships between CEOs and senior, or maybe not so senior in some cases, employees? It seems to be, dare I say, almost a common occurrence from what you read these days. But the consequences of this crisis, certainly in the case of Nestlé, were quite serious.
Shel Holtz: Very serious, especially since this company has been going through CEOs as if it were a revolving door. And for the world’s largest food company, a little stability at that senior level is probably a good thing. But you’re right, this is becoming a routine thing. Probably the most notorious case recently is the CEO of Astronomer, a tech company, getting caught at the Coldplay concert with his head of HR on the kiss cam. Both of them married, but not to each other. You would think that senior executives in these roles would see these things happen, and the consequences when these secret affairs are revealed, and say, I can’t do this. But we are biological beings, you know, and people succumb. So I don’t know that we can stop these stories from emerging when they happen. I can’t imagine that they’re going to stop happening. The question is, how do we deal with these? So what did the Provoke article say about how Nestlé dealt with this? Was their analysis that they dealt with it effectively?
Neville Hobson: Well, there are various comments about that. I was looking at the article because it’s quite lengthy. They moved quickly on appointing a new CEO for continuity, much more than any comment passing judgment on the behavior of the CEO over his romance, and more about how his lack of transparency and poor judgment reflect on Nestlé, given that as CEO, he’s the moral compass of the organization. So that was the backdrop behind all of this. And it doesn’t go into any detail about what led Freixe to the decision to leave, but he resigned swiftly and left very fast, and there was no exit package. And he seems to have vanished now. So I’m just looking to see, yeah, the main comment basically is the share price effect, the share price that turned negative on the news of the ousting of the CEO. But public perceptions in Switzerland have not followed suit, they say. According to Caliber, a reputation management firm, the company’s reputation has improved steadily over the past three years, with its trust and like score, a measure of how much people trust and like the brand on a 0 to 100 scale, rising from 50 in the third quarter of 2022 to 69 in the same period of 2025. This suggests that in Nestlé’s home market, anyway, the company is viewed as handling the incident appropriately. It remains to be seen how the change of guard and continued coverage of the story will affect its reputation in the coming weeks and months. So intriguingly, it’s a Swiss company, and the Swiss are different from the rest of Europe in how they regard such things and such behavior. So they were impressed with how it was dealt with within Switzerland, but it remains to be seen how the rest of the world feels about it.
Shel Holtz: Yeah, and of course they’re a global company. One thing that I find missing, at least I haven’t heard it or seen it in these types of stories, I haven’t seen it in the Astronomer story, I haven’t seen it in the Nestlé story, is this: they take action relatively quickly, this is a violation of our policy, it’s unacceptable, and this executive will be leaving, and then we’re going to hire another one who could very likely do exactly the same thing. We never hear what we’re going to do about this to keep it from happening in the future in order to protect the reputation and protect the brand. And I think maybe that’s something that organizations should think about. In any crisis, you’re supposed to say, here’s what we’re going to do to make sure this doesn’t happen again. Here’s something that is clearly defined as a crisis, and there’s none of that with any of these organizations.
Neville Hobson: Yeah, and that doesn’t come across at all in this story. One additional comment talks about how Nestlé’s bigger challenge lies beyond one wayward CEO. The new CEO, Philipp Navratil, has his work cut out for him, and it will require clear communication about what strategies he’ll pursue to turn the business around. So it’s not about the wayward CEO, it’s the actual business story, and that’s not where the problems stop, according to this person. Two abrupt CEO departures can erode employee trust and damage culture. Nestlé needs to manage its external positioning and rebuild morale internally. So yeah, that new CEO has got his work cut out for him, I’d say. Okay. So then our next story is on Suntory, the Japanese global drinks company. This case is in the hard realm of product integrity. The leadership crisis began when CEO Takeshi Niinami resigned amid a police investigation into suspected illegal supplements, although none were found. Executives moved fast with a press conference, where President Nobuhiro Torii said the conduct fell short of what is expected of a chief executive. Now, this is Japan, and the culture of how people apologize and address issues is very different from how it would be in the West. According to Provoke, crisis experts point to disciplined governance in the response: external counsel engaged early, tight board-CEO coordination, rapid public disclosure, and deliberate avoidance of commentary on live legal matters, all of which limited speculation and protected shareholder value. The playbook was cross-functional, with legal, HR, and comms moving in sync, and internal stakeholders treated as first-order audiences to maintain trust. Context mattered too. In Japan, pre-verdict resignation signals moral responsibility and respect for stakeholders. Notably, Niinami asserted his personal innocence in his capacity with the business lobby, separating that from his corporate role to reduce the company’s reputational risk. 
The playbook that works is speed, independent scrutiny, and making customers whole, with visible updates that show the remedy is real. Choose a legalistic defense and you convert a product issue into a long-tail credibility problem. And by the way, Shel, I didn’t see any reference to how they would fix this to avoid this kind of thing in the future. It’s not the same as a relationship imbalance with an employee. If anything, this is slightly more serious: illegal supplements being imported into Japan. So how do they prevent this? I haven’t seen anything mentioning that yet, but this is again an interesting one. And maybe there are elements of this that would be interesting to learn from a Western perspective, because in Japan the culture is very different.
Shel Holtz: Yeah, I don’t think this one requires that what-are-we-going-to-do-to-prevent-this-in-the-future element quite as much as one that just keeps happening, right? But this is a case where a guy, if you believe him on the face of it, bought something he thought was legal that wasn’t, and it ran afoul of the company’s drug policy. That’s pretty clear-cut. He violated our drug policy, he’s out. And they moved swiftly. And I think, you know, your internal communication is probably more important than your external communication on this one. This tells you whether there are two classes in the organization, right? If the CEO can be canned for this, it means obviously you can too. But in a lot of organizations, the leaders would get away with it because they run things, and the employees would be treated differently. So I think there’s a positive message you could actually send to your employees here: we treat everybody the same. And that might be a message that would be worth taking externally as well. We don’t distinguish between the leaders and the frontline employees. Everybody is subject to the policies of this organization, and we executed on this policy. Done deal, moving on. So yeah.
Neville Hobson: Yeah, Suntory did receive quite a bit of praise from critics. One person, Caroline Shoya, the founding director of the PR group in Australia, said that Suntory’s handling of the incident demonstrated strong governance discipline and execution of its crisis playbook. The immediate use of external legal counsel, board-CEO coordination, and quick public disclosure helped limit speculation and protect shareholder value. Its speed and clarity of communications reinforced accountability and transparency. The company avoided commenting on ongoing legal proceedings, avoiding further risk. And in the case of the employees, this was quite interesting too. She said Niinami’s resignation highlights how leadership integrity issues can quickly spiral into company-wide reputation risks. A leader’s personal conduct impacts corporate trust, making it a governance and communication priority. Boards and corporate affairs leaders must actively anticipate this risk and be prepared to address it. So that’s part of that overall picture: the critics clearly praised how they dealt with it, the speed at which they did it, and the transparency with which they did it too.
Shel Holtz: Yeah, and both of these cases are cases of personal behavior affecting the organization. In this case, I think there was an extra dimension to it because of this particular individual’s status. He was a former advisor to prime ministers before he took this job. He was Japan’s corporate face at the Davos World Economic Forum. He is a very prominent and very well-known figure in the global business world, and this probably shook a lot of people. But at the end of the day, he took drugs that he wasn’t supposed to, so he’s out. And I think there’s a positive message that you can share with that.
Neville Hobson: Yeah, he did consistently say he believed he was doing nothing wrong at all, that he had the right to do this. That’s what he’s been saying. I’ve not seen anyone saying yes, you’re right; no one’s agreeing with that. He’s not getting criticism either. But the consequences are still there.
Shel Holtz: I think his message should have been, I truly honestly believe these were legal. I should have checked more thoroughly before I purchased these. I am going to be doing that in the future and I regret my behavior. So easy enough.
Neville Hobson: There you go. That’s good advice there, I think. So the final look is at a case in the US. You’re certainly very familiar with this, I know you are, Shel: the ABC Jimmy Kimmel crisis. It’s a case study in how political intimidation collides with stakeholder power. After talk show host Jimmy Kimmel criticized the Trump administration live on air for exploiting the killing of activist Charlie Kirk, Brendan Carr, the Trump-appointed chair of the Federal Communications Commission, the broadcasting regulator in the US, warned ABC "the easy way or the hard way," reminding broadcasters their licenses depend on his agency. Within hours, Nexstar, which controls over 30 ABC affiliates, dropped the show. ABC followed. Sinclair went further, demanding Kimmel make a payment to Kirk’s family and the right-wing group. The reaction was immediate. Hollywood guilds condemned the move. Protests erupted in New York and Los Angeles, and consumers threatened boycotts of Disney parks and cancellations across Disney+, Hulu, and ESPN. Former Disney CEO Michael Eisner publicly rebuked current Disney CEO Bob Iger: "Where’s all the leadership gone?" he said, sharpening the governance glare. Over six days, more than 1.7 million paid subscriptions were canceled. Advertisers were targeted online. And under sustained pressure the program was restored; even Sinclair reversed. The lesson is not subtle. A knee-jerk tilt to appease a hostile regulator ignores the whole stakeholder universe. For communicators, the job is to map power beyond the politician, customers, talent, affiliates, advertisers, and defend principle with a plan, not a panic. ABC and Jimmy Kimmel is a crisis by choice, where commercial caution appears to muzzle creative expression. The options are stark. 
Putting principles up for sale invites questions about editorial independence, sponsorship guardrails, and whether a broadcaster can defend free expression while protecting its commercial relationships. Transparency about standards and boundaries beats backstage contortions every time. I don’t know, does that sort of summarize it all pretty well, in your view, in terms of what happened?
Shel Holtz: It does. This was a case of an organization jerking its knee and acting very rashly and very quickly without thinking in terms of the long-term consequences. I think at the executive level, you get very insulated and forget what the common folk might do. But when the common folk band together, 1.7 million of them might cancel their Disney Plus subscriptions, which is exactly what happened. That is a ton of money for a company that is increasingly reliant on its streaming service for its revenue. So I think that may have been part of the calculus in bringing the show back as quickly as they did. And his opening monologue, which I watched on YouTube, I thought he handled very deftly without kowtowing. He did apologize to anybody who was offended, but he didn’t make the apology that some people wanted him to make. I heard, and I don’t know if this is accurate, but I heard that Bob Iger, the CEO of Disney, was contacted relentlessly by A-listers in Hollywood telling him this was bad, you shouldn’t do this, and that that factored considerably into the decision to bring Kimmel back, which, you know, frankly pissed off Trump and his base. But he’s still there. And frankly, his viewership has gone up considerably as a result of this. So for Brendan Carr, the head of the FCC, if he thought this was going to get rid of Jimmy Kimmel, all it did was give him more viewers. So here’s another person who should have thought more long-term and more critically before jerking his knee and making such a ridiculous threat.
Neville Hobson: Yeah. So the final comment to add to this is from Provoke’s article, where they quote a crisis communications pro who asked not to be identified, saying of the cancellations: we saw that real people still have the power to influence corporate decisions. It was a reminder to communication professionals that they need to remind CEOs that they have to take the whole stakeholder universe into account when they make these decisions. You said something similar, I think. Yeah. So, three examples.
Shel Holtz: Yes. What do I pay? I think I pay $20 a month for a Disney Plus subscription. I mean, they’ve got Marvel, they’ve got Star Wars, they’ve got National Geographic, and several other properties. Now they’re folding Hulu into Disney Plus, and ESPN. So for 20 bucks, that’s a bargain. And 1.7 million people said, I’m not going to pay my 20 bucks a month, my 240 bucks a year. Multiply 1.7 million by 240, and that’s a chunk of change that they can’t afford to lose. So that is power. And when consumers decide they’re going to wield that power, they definitely can have that kind of influence.
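[Editor's note: the back-of-the-envelope math Shel describes can be sketched out as follows. The $20/month price and the 1.7 million cancellations are the figures cited in the conversation, not independently verified numbers.]

```python
# Rough revenue-at-risk estimate from the figures cited in the conversation.
# The $20/month price is the speaker's estimate, not an official Disney figure.
cancelled_subscriptions = 1_700_000
monthly_price_usd = 20
annual_price_usd = monthly_price_usd * 12   # the "240 bucks a year"

revenue_at_risk = cancelled_subscriptions * annual_price_usd
print(f"${revenue_at_risk:,} per year")  # prints "$408,000,000 per year"
```

On those assumptions, the cancellations represent roughly $408 million in annual subscription revenue, which is the "chunk of change" Shel refers to.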
Neville Hobson: Yeah, you’re right. I also subscribe to Disney Plus. I don’t pay the 20 bucks equivalent; I pay nine pounds, which is about $11, I suppose. It doesn’t give me ESPN; it gives me Hulu and a lot of stuff there. It is good. I would say after Netflix, it’s the one I watch the most, more than any other. So, okay, that’s three crises we’ve looked at of the 14 Provoke discussed. I would say across all of these crises, the lesson is consistent. Behavior creates the crisis. Values-aligned action resolves it. Is it that simple? It seems so to me. Read the room, act at the speed of trust, and rebuild with proof. Note that phrase, by the way: act at the speed of trust. Rebuild with proof, independent reviews, measurable fixes, and ongoing updates. Anything finally you want to add to this topic?
Shel Holtz: No, we’re going to move right into the next topic, because it talks about trust velocity, so it is a perfect segue. Unless you want to talk about what other streaming services we like. We have entered a new era in how trust works, not just for tech companies, but for every kind of organization. In the past, trust was something you built slowly and managed over time. If you lost some during a crisis like those we just talked about, you worked to rebuild it. Today, trust moves at the speed of attention. It can surge or collapse in a day, especially when people see a gap between what your organization says and what it does. Gee, we ought to maybe call that the say-do gap. That’s the essence of a new concept in reputation analytics called trust velocity: the rate at which confidence in your organization rises or falls based on how closely your promises match your proof. Traditional reputation monitoring focuses on sentiment, whether audiences sound positive or negative, but sentiment can stay flat while risk is growing quietly in the background. For example, imagine your company saying, we’re committed to sustainability, while social posts start showing waste problems in your supply chain or community leaders call out a lack of local follow-through. Public tone might still sound neutral, until suddenly it doesn’t. Trust velocity picks up that tension early. It doesn’t just monitor tone, it measures friction between words and evidence. And this idea aligns with what Edelman calls a trust inflection point in its 2025 AI Trust Imperative, a report that, while focused on AI, really applies to every business. Edelman found that people’s patience for vague or unverifiable claims of any kind has run out. The takeaway for communicators couldn’t be clearer. You can’t just say we’re responsible, we care about people, or we act ethically. You have to show how. 
That means documenting your practices, explaining how decisions are made, and being transparent about oversight. If there are limits or shortcomings, own them. If you’re learning, show what you’re doing to improve. And if a program or policy is falling short, pause it, fix it, and communicate the fix. Because today, responsibility without receipts is a reputation risk. Edelman also found that the wider public has a growing sense of grievance. 60% of respondents worldwide say institutions, whether business, government, or media, make their lives harder and serve narrow interests. When that’s the baseline, and 60% is a staggering number, trust isn’t a given. It has to be earned through consistent, visible follow-through. Now, the Pew Research Center adds another layer to this conversation. Their new global study on trust in AI regulation shows very different confidence levels by region. It says a majority of people in Europe trust the EU to handle oversight, but in the US, trust is split down the middle, and it’s far lower in many other parts of the world. And I have to throw in a caveat here. I’m not sure if the survey conducted by Pew Research covered just the countries of the EU or the broader European community of nations. So when they talk about trusting the EU to handle oversight, I don’t know if it’s only people in EU countries. That wasn’t listed in the report. But this nevertheless has implications far beyond AI companies. It’s a signal that regulatory trust itself is fragmented. In some places, people believe in strong oversight. In others, they assume institutions can’t be trusted to keep companies accountable. So if you communicate globally, your trust narrative has to flex. In Europe and similar markets, emphasize alignment with recognized standards, transparency, independent audits, compliance with oversight frameworks. In the US, audiences tend to want proof, not promises: clear evidence, measurable safeguards, and third-party validation. 
Whatever the market, the principle is the same. You earn trust not by saying “we’re compliant,” but by showing the data, showing the decisions, and showing the people behind them. So how can communicators put this idea of trust velocity to work right now? Start by defining the five to seven promises your organization truly makes: safety, inclusion, sustainability, innovation, customer care. Then set up two lanes for each promise: what we say and what we do. The first lane is the messaging: executive statements, campaigns, internal posts. The second is behavior: metrics, policies, audits, employee feedback, supplier data, customer experience. When those two lanes start to diverge, your trust velocity drops. That’s your early warning. A simple way to track this is with what some organizations call an integrity board: one slide per promise. What we said, what happened, is trust rising, steady, or falling, and what’s our next move? And when you close the loop, tell people: you said, we did, here’s proof. It’s a simple but powerful cycle: listen, measure, act, communicate, repeat. The bigger point is that reputation management today isn’t just about messaging; it’s about evidence management. Transparency, data, and openness are now communication tools. So if you’re thinking about how to strengthen your organization’s credibility this quarter, here’s a concise way to put it: we measure ourselves by how fast we close the gaps between promise and proof, and we communicate those gaps honestly. That’s trust velocity. And in this world where information moves instantly, it may be the single most important measurement of all.
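[Editor's note: the two-lane, integrity-board loop described above can be sketched in a few lines of Python. This is purely illustrative; the promise name, the 0-to-1 scoring, and the velocity formula are assumptions for the sketch, not anything from the episode or any real analytics product.]

```python
from dataclasses import dataclass

@dataclass
class Promise:
    name: str          # e.g. "sustainability"
    said_score: float  # 0-1: how strongly current messaging asserts the promise
    did_score: float   # 0-1: how well the evidence lane (audits, metrics) backs it

def say_do_gap(p: Promise) -> float:
    """Friction between words and evidence for one promise."""
    return max(0.0, p.said_score - p.did_score)

def trust_velocity(prev_gap: float, curr_gap: float) -> float:
    """Change in the say-do gap per period; positive means the gap is closing."""
    return prev_gap - curr_gap

# Two quarters of one "integrity board" row: what we said vs. what happened
last_q = Promise("sustainability", said_score=0.9, did_score=0.5)
this_q = Promise("sustainability", said_score=0.9, did_score=0.7)

gap_then, gap_now = say_do_gap(last_q), say_do_gap(this_q)
print(f"gap {gap_then:.1f} -> {gap_now:.1f}, velocity {trust_velocity(gap_then, gap_now):+.1f}")
```

One slide per promise, one row per quarter: when the velocity turns negative, that is the early warning the segment describes.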
Neville Hobson: Now, that’s very interesting, particularly the Pew study, I found. It’s interesting, excuse me, the overall recommendations in this topic. I think the thing that interests me is the governance bit more than anything, so it doesn’t backfire. And I’m wondering, I don’t see this mentioned in plans that touch on this topic: source-of-truth lists. I like that a lot; name which data counts as proof, so you don’t just bung a report in as backup without specifying what in the report is backing up what you’re saying. I see that a lot, by the way: here’s this, and then there’s an attachment that’s a big report, without telling you what in it you need to pay attention to. So there’s food for thought here, without any doubt. And going on to the Pew one, this to me illustrates differences between Europe and the US. I’m using the word Europe deliberately to embrace that whole area, not just the EU, though the European Union has a particular, how would you say, approach to trust. So you will hear reports and comments by people in positions of power and authority, but how much do people trust the EU to look out for the interests of the citizens of the European Union? And that, frankly, gives you an indicator of one reason why the Brits are particularly not keen to be part of an institution like the European Union. We’re more of an independent mindset than you see elsewhere in continental Europe. But a really interesting difference, and it raises a question, really. Does this illustrate the political divide post-Trump in the US versus Europe, generally speaking? So, for instance, you’ve got the metric from Pew: Americans are split, 44% trust the government to regulate AI versus 47% who don’t. It’s almost the opposite of Europe, or of the EU in the case of what Pew is reporting: a median 53% trust the EU to regulate AI effectively. That’s quite significantly more than in the United States. So is that the nature of Americans?
I think it might well be, Shel, drawing from history: not trusting the central government so much as they would, you know, their community or whatever it might be.
Shel Holtz: Yeah, I think you have both in play in this particular instance. I don’t think people trust the US government to regulate much of anything. It’s very, very slow to respond. It is very, very prone to influence by lobbyists. And it just doesn’t catch up with what’s going on in the country anywhere near fast enough. But in terms of the Trump administration and AI, their view is all gas, no brakes: we need to be on top, we need to be first, don’t stop innovating, don’t stop developing, stop worrying about the potential risks, and win. So if anybody thinks this administration is going to lead an effort to regulate AI, think again. That regulation is going to happen state by state. In California, Governor Newsom has recently signed legislation. He vetoed it last year, and then they watered it down a little and he signed it, but it still puts some guardrails up. This is not the optimum approach, because then AI companies have to develop their products and services to ensure they don’t run afoul of 50 separate pieces of legislation. Much better if it were done centrally, but that’s not going to happen. And I think the people who say, yes, I trust the government to do this, are probably by and large on the right side of the, and I mean the political right, not correct, the political right side of the equation. And they like what this administration does, which is not regulating; they’re deregulating right and left. So is there a great deal of trust that the US government is going to regulate AI? No, absolutely not. They’re not going to.
Neville Hobson: there. Yeah, that’s our landscape in that case. So basically, you’re on your own, I suppose you could say in terms of corporate approach to, to, you know, the implications for organizations trying to do business in various markets, whether it’s within the US or elsewhere, you can’t rely on your government, unless you happen to be in Europe, where you more likely to rely on the centralized state. The federal system in Europe is not like the US. There many similarities, of course, but there isn’t, for instance, an EU-wide defense structure with a defense, Secretary of State for Defense, or as you have it, war, as he is now. You don’t have any of that. It’s still national governments, but they defer to the central control of the European Commission and the European Parliament, which is one reason why the UK is opted out, which is a whole separate topic. recognizing the landscape, I suppose, is a key message to take away from this, pay attention to these developments, and make your plans accordingly.
Shel Holtz: Yeah, and it’s not just AI; keep that in mind. Remember, we talked about things like sustainability. It’s the say-do gap. And I find it interesting that Engage for Success, the UK organization focused on employee engagement, lists the say-do gap, which it labels organizational integrity, as one of the four pillars of employee engagement. I think we need to spread that now and make it the basis of organizational trust. At the company I work for, trust is one of the key things our leadership is working on, so we’re not operating in a vacuum. I think a lot of organizations are looking at trust as the most important capital they have. How do you build it? Well, not by measuring sentiment. We have gotten very complacent with sentiment as the indicator of how people feel about us. We spend a lot of money on it; there are people out there selling it. Time to move on. I’m not saying ignore it, but we need to start looking at these other indicators as well.
Neville Hobson: Yeah, I agree with that 100%. Okay. So then, I guess our final story is a great one to end on, Shel. I have to say, “a case study in real-time marketing with moral whiplash” is how I’ve titled this. Many of you listening, probably most of you, will know what this story is about. In the days after a Paris gang used a furniture lift to climb into the Louvre Museum and, in less than 10 minutes, make off with an estimated 88 million euros in Napoleonic jewelry, the German manufacturer of that furniture lift, a company called Böcker, that’s B-O with an umlaut, C-K-E-R, whose brand is the Agilo furniture lift, posted a tongue-in-cheek ad on Facebook and Instagram. The headline of the ad, in German, was “When You Need to Move Fast,” with copy boasting that the lift moves up to 400 kilograms at 42 meters per minute, as quiet as a whisper. The post went viral: about 1.7 million impressions versus the brand’s usual tens of thousands. Most reactions were amused; a vocal minority called it crass and insensitive to French sentiment. Böcker says the lift had been sold years ago to a Paris rental company and was apparently stolen during a demo. The family-run firm admits it was shocked to see its product in the news, but once it was clear no one was hurt, they decided to make the most of the moment. Whether that generates real leads remains to be seen, but marketing chief Julia Schwabat said that after the news report, “countless people, our staff, business partners, clients got in touch with us and we thought, wow, we have to do something with this.” By Monday morning, the campaign was live. So is this clever, culturally aware nimbleness, or opportunism that trivializes a crime and risks alienating stakeholders beyond the meme? The communication question is where you draw the line on humor, harm, and brand voice when you are adjacent to a scandal rather than the cause of it. Is this a masterclass in agility, or a case study in playing with fire?
Shel Holtz: I think this is a masterclass in agility. David Meerman Scott would in a heartbeat call this newsjacking. The distinction between the kind of newsjacking David referenced in his great book on the topic and this is that usually newsjacking is when your company takes advantage of someone else’s news to draw attention, not when a company involved in the news does it itself. So that’s a subtle distinction, but it’s essentially what they did, and I think they did it extremely well. I could see, if the thieves had been caught and had used some other piece of equipment, the company could have done something like, “they might have gotten away with it if they had used ours.” And again, as you say, nobody was hurt, which I think opens the door to having fun with this. I mean, you know, I don’t think most people are viewing this particular crime as some horrifying crime. They’re thinking of it as something you would watch a movie about, right? I’ve even seen a photo of some dapperly dressed young man, and there’s all kinds of discussion about whether he’s the detective who’s going to solve the case or whether he’s even real. So people are having tremendous fun with this. I don’t see why the company whose equipment was used, through no action of their own, can’t join in the fun.
Neville Hobson: No, and in fact we can reference back to our conversation about rage baiting, where the comment was: if you can do something that’s a bit of fun and provokes a positive response or reaction, why would you go the rage-bait route? This sits in that kind of area. I think, you know, thinking about it, when the heist happened, it was in the news headlines on every single news channel over here. And I remember seeing the photo of the furniture lift parked outside the back of the Louvre, on a public street. And I was thinking, what is that? Is that something the police are using, or what? It wasn’t clear to me until it suddenly came out in the news reports: this is what the crooks used. And I thought, my God, it added an element of you’ve-got-to-be-kidding-me that this could happen. Where’s Inspector Clouseau? Let’s get him on the phone. And I saw loads of people posting that on Facebook, on Threads, you name it, calling for Inspecteur Clouseau. So it had that humorous element to it, which I think is exactly on target in this context of what happened. No one was hurt. There are elements of high amusement and even comedy to this. I mean, the crooks, it was actually seven minutes before they escaped. They came down the lift, and of course, at the velocity it runs, 42 meters per minute, they were down in short order. They had motorbikes parked there that they all jumped on and sped off. One of the crooks dropped one of the objects they’d stolen, which was a necklace or a crown, I can’t remember which; it was one of Napoleon’s prized possessions. They got that element into it. They weren’t perfect, these guys, but they got away, and no one knows where they are, or even who they are. Quite extraordinary.
Shel Holtz: It really is. And even the media is having fun with it. I think it was CNN, might’ve been the New York Times: they interviewed a bunch of thieves for their opinion on this, and the consensus was it had to be an inside job. So, you know, who’s not having fun? Well, the Louvre’s not having fun with this. And the French police and the French government aren’t having fun with this. Pretty much everybody else is.
Neville Hobson: There is that. They’re not. I think some serious elements have emerged, as I’ve heard and seen on the TV and in the press. For instance, the back of the building, the area where they got in, had no CCTV cameras. That’s a fail. That’s a real fail. And, you know, the crooks knew that; the crooks knew that. So anyway, there you go.
Shel Holtz: Yeah. Well, again, inside job, right? But yeah, I think this is a reason to go get David’s book and read it and figure out what you need to do to be on top of these things so that you can take advantage of an opportunity when it arises.
Neville Hobson: So what’s the title of David’s book?
Shel Holtz: Newsjacking. Yep, that one. It’s very good. Yeah, my favorite story out of that was about the London Fire Brigade. I don’t remember exactly what the story was, but they ended up taking advantage of it as a way to recruit women into the Fire Brigade. I don’t remember what precipitated it, but they jumped on that story in order to offer this opportunity, and they got good results out of it. So read the book; you’ll get that story. And that will take us to the end of this episode of For Immediate Release, episode number 486 for October 2025. Our next episode will drop on Monday, November 17th. I know that’s earlier than usual, but Thanksgiving is coming in the U.S., and I’ll be tied up that weekend and the weekend before. I’m gonna go down to San Diego and visit my daughter and my granddaughter. So we’re gonna record in the middle of the month. Look for us on November 17th, and remember to stick around for a minute and listen to that song, because it’s pretty entertaining. And that will be a 30 for this episode of For Immediate Release.
The post FIR #486: Measuring Sentiment Won’t Help You Maintain Trust appeared first on FIR Podcast Network.
Things change fast in the digital world. On the other hand, business tactics can be slow to adapt. Crafting content with the intent of “going viral” has been part of the communication playbook for more than a decade. There was never a guaranteed approach to catching this lightning in a bottle, but that didn’t stop marketers and PR practitioners from trying.
That effort is increasingly futile, as the social media companies that host the content have altered their algorithms, and people are paying attention to different things these days. This has led several marketing influencers to suggest that it’s time to move on from the attempt to produce content specifically in the hopes that it will go viral. Neville and Shel share some data points and debate whether going viral should remain a communication goal in this short midweek episode.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, October 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on [Neville’s blog](https://www.nevillehobson.io/) and [Shel’s blog](https://holtz.com/blog/).
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville Hobson: Hi everyone, and welcome to For Immediate Release. This is episode 485. I’m Neville Hobson.
Shel Holtz: And I’m Shel Holtz, and it is time to stop making going viral the point of our work. I’m not arguing that reach is irrelevant. I’m arguing that virality as an objective is a strategic dead end. High variance, low repeatability, and increasingly disconnected from outcomes that matter. I’ll explain right after this.
For years, viral success stories seduced communicators, and I’m among them. There’s a thrill in watching that graph spike, but we’ve learned a few hard truths. First, virality is unpredictable by design. Platforms tune feeds to maximize their goals, not yours. Second, even when you catch lightning in a bottle, the spike rarely results in any kind of durable advantage. A new peer-reviewed analysis of more than a thousand European news publishers on Facebook and YouTube, published in the journal Nature, found that most viral events do not significantly increase engagement and rarely lead to sustained growth. In other words, the sugar high fades, and it fades fast. Meanwhile, veterans of content-led link earning have publicly stepped away from virality as a North Star. Fractl, an agency that once made viral part of its brand, now says flatly, and I’m quoting, “We don’t care about viral marketing anymore, and neither should you.” Their pivot is toward durable metrics like authority, affinity, and relevance. You might think that’s a vibe shift, but it’s not. It’s a strategic correction. Even the classic research on viral ads, the eye-tracking work that taught us how emotional arcs and brand cameos drive sharing, was never proof that you can plan a viral outcome, only that certain creative choices improve your odds at the margin. Helpful craft guidance? Yeah, sure. A basis for a corporate OKR, that’s objectives and key results? Nope. Layer on platform dynamics and the case gets stronger. Meta’s shift away from news, culminating in the shutdown of CrowdTangle, the very tool journalists used to see what was going viral, has reduced transparency and made spikes harder both to trigger and to verify. When the scoreboard moves behind a curtain, playing for highlight-reel moments becomes folly. In some markets, we can literally watch viral news get deprioritized. In Australia, publishers report Facebook engagement at all-time lows as memes and creator posts fill the feed.
If the feed favors entertainment over information, it also favors retention over reach. Your viral playbook ages out fast in that environment. The New York Times captured the cultural angle. The internet that rewarded sudden mass attention is giving way to one that rewards depth: revisit rates, creator loyalty, community momentum. A share-count trophy doesn’t impress the algorithm anymore. Sustained, meaningful engagement does. So what should replace viral as the goal? Let’s cover a few things. First, design for compounding attention, not explosive attention. Plan content as a series, not a stunt. Build episodic formats: office hours, ask-me-anythings, recurring data notes, anything that trains the audience to keep coming back. The scientific finding I cited earlier is the tell: durable growth comes from consistency, not from lucky breaks. Second, shift your KPI set. Trade shares and views as headline metrics for return rate, session depth, qualified traffic, assisted pipeline, issue literacy, whatever truly maps to your business or reputation outcomes.
Shel Holtz: Fractl’s rationale for de-emphasizing virality in favor of authority and affinity is a good model. Third, optimize for platform fit, not platform luck. Where audiences actually engage, optimize for the native behaviors that correlate with retention. A quick example outside our usual stomping grounds: science communities now see richer discussion on Bluesky than on X, because the culture and mechanics favor constructive back-and-forth over dunking. Smaller network, higher-quality signal. Fourth, build earned elasticity into distribution. Yeah, keep a line item for opportunistic amplification: creator partnerships, timely collaborations, paid boosts that extend life for posts that deserve it. But treat amplification as gasoline for a fire you’re already tending, not a match you light and hope sets the world on fire. Fifth, prepare for attention risk, not attention gain. The wider your message travels, the less control you have over how it’s interpreted. Your plan needs counter-messaging, clarification assets, and issues response baked in. Meta’s data opacity only raises the bar for preparedness. So when does a viral goal still make sense? Well, there are edge cases: awareness blitzes for entertainment launches, urgent public-interest alerts, or short-run stunts designed to trigger specific behaviors,
Neville Hobson: Hmm
Shel Holtz: like getting signups during a defined window of time. Even then, the viral moment has to be tethered to a post-moment system, a next-step path, a nurture stream, community onboarding, so the spike has somewhere to go. If you’re still writing “go viral” on a brief, cross it out. Replace it with “create repeat engagement among the right people,” “increase qualified discovery,” or “raise message salience with priority stakeholders.” Those are hard, unsexy verbs, but they’re the ones that move the work forward. And if you’re thinking, “we’ve always chased reach, why change now?” consider the evidence. The platforms don’t reward virality the way they used to, the data windows are narrowing, and the best research we have shows spikes don’t stick. Plan for compounding attention. Let virality, if it arrives, be a bonus, not the business model.
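[Editor's note: the KPI swap Shel recommends, trading raw shares for return rate, can be made concrete with a toy sketch. The visit log, the names, and the one-period threshold are hypothetical, chosen only to illustrate the kind of retention metric that replaces a share count.]

```python
# Toy visit log: (user_id, week_number) pairs -- illustrative data only
visits = [
    ("amy", 1), ("amy", 2), ("amy", 3),
    ("ben", 1), ("ben", 3),
    ("cal", 1),
]

def return_rate(log, first_period=1):
    """Share of first-period visitors who come back in any later period."""
    first = {u for u, w in log if w == first_period}
    later = {u for u, w in log if w > first_period}
    return len(first & later) / len(first) if first else 0.0

# amy and ben come back in later weeks; cal does not
print(f"return rate: {return_rate(visits):.0%}")
```

A share count treats all three visitors alike; a return rate distinguishes the audience you are actually compounding.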
Neville Hobson: Good assessment, Shel. I have to admit, I was thinking about the two words “viral marketing,” and what came to my mind, as it’s not a topic I’m thinking about every day, was what we saw a decade ago, which was spontaneous stuff that largely wasn’t planned, although much that was planned didn’t really work. Things like, for instance, I reminded myself today and looked at the video, Chewbacca Mom back in 2016, I think it was. That was a surprise hit. I mean, really, it was natural. It was spontaneous. It wasn’t planned. It was brilliant. It was wonderful, actually. But it perhaps illustrated the point you’ve just made, because that wasn’t part of any kind of plan at all; it was spontaneous. So it didn’t have any kind of road to go down. It just erupted and grabbed lots of attention, and that was it. Or the guy, whose name I can’t recall, nor his company, who was interviewed on the BBC, sitting in his home office in a business suit and tie and all that, when his two little children burst in. One was a little toddler, and one was even younger, who came in in like a baby walker. And then someone came in behind to drag them out, and all the time the interviewer was asking questions and he was saying, “I do apologize. I’m very sorry about this.” It was super, and that went viral. I think it was. Yeah, he was in lockdown. So he was doing that. So that would have been about 2020. So I think, you know, that time was one where these ideas were emerging, and the kind of places where you could stimulate this weren’t as widely
Shel Holtz: That was during COVID, right? When he was doing the broadcast from home.
Neville Hobson: covered as they are today, and there are more of them today. But apart from that, the whole landscape has shifted, and mindsets have shifted, even. Knowing that this was going to be our topic today, I actually asked Google, something I do quite often, just a simple provocative question: is viral marketing still a thing? So a Google search, not GPT or Perplexity, none of that. And the AI Overview result was actually quite interesting. Yes, viral marketing is still a thing, says Google, but it’s evolved to be more sophisticated and integrated into social media strategies. While going viral is never guaranteed, it says, the core concept of creating engaging content that people want to share spontaneously remains a powerful tool for brand awareness and can be boosted by social media platforms like TikTok, Instagram, and X. And that, I think, makes total sense to me, that qualifier, which we didn’t hear very much a decade ago when Chewbacca Mom and the BBC interview guy were the stars of the online world. But that to me always made sense: you don’t do something from a business point of view just to get it out there. Yet lots of people did precisely that, without any seeming strategic approach to the outcome they were expecting, other than to get thousands, if not hundreds of thousands, of people sharing it or commenting or liking it or whatever. What were you going to get from all of that? Where’s your strategic aspiration in that regard? That’s different now. And I did take a look at Fractl’s piece, which was actually published back in 2021. That’s quite old, but nevertheless, I’ve seen others saying that’s it, viral marketing’s done, and I think it’s more nuanced than that. You need to qualify it. One thing I did like from the Nature report, that analytical, almost academic report, was identifying two primary types of virality, which I’ve not really heard many people talking about.
They call the first “loaded-type” virality, which manifests after a sustained growth phase, representing its final burst, followed by a decline in attention. The second is “sudden-type” virality, where news emerges unexpectedly and reactivates the collective response process. And they note that quick viral effects fade fast, while slower processes lead to more persistent growth. So this is now a far more evolved and attractive proposition to consider. I would say, just based on my quick read of Google’s AI Overview and skimming parts of this Nature report, that I don’t think it is time to drop going viral from the marketing toolkit. I’d say you need to prep yourself for the new, or at least a more effective, way of doing this as part of your marketing strategy. Don’t you think?
Shel Holtz: No, I don’t. We’ll disagree on this because if you look at both of the examples you shared, the Chewbacca mom and the guy on the news.
Neville Hobson: But don’t forget that was 10 to 15 years ago.
Shel Holtz: Yes, it was, but neither of them were trying to go viral and neither of them were representing a brand that wanted to expand reach. They both just happened to catch the attention of people who said, “this is really fun or amazing. I’m going to share it.” And then other people shared it. I remember even at the height of the viral video craze, the agencies that said, “we can help you go viral,” basically were saying, “we think we have cracked the code on what makes things go viral. We are going to develop our videos so that they conform with those things. And maybe one out of 10 that we produce will go viral because there’s also some secret sauce that we haven’t been able to decode yet.” And now on top of the difficulty you had planning for something to go viral back then, now you have the algorithms that aren’t rewarding the kind of content that it did reward 10 years ago when those kinds of videos did go viral. I mean, think about Oreo 10 years ago, 11 years ago when I think it was their 100th anniversary and they had the hundred days of Oreo with different sort of memes about Oreos. And they would replace them with something unplanned when there was a breaking news event and they could come up with a way to design one of these images so that it was consistent with the news that was breaking. And a couple of those went viral. I don’t think they would today. I don’t think the algorithm would necessarily reward them the way they did back then. So I think if you create something that’s clever and entertaining and it happens to go viral, or even if you bake in some of the things that you hope will make it go viral, that’s fine. But I don’t think that should be your plan. I think your plan should be those other things that I referenced, getting people to come back, having repeat visitors and slowly building the audience, building the community. 
They’ll share, but it doesn’t need to go viral at that point in order to deliver the kind of ROI that marketers are looking for.
Neville Hobson: So I think the AI Overview bits and pieces that Google popped up from a number of sources say all of that, actually. I mean, all those examples from a decade or more ago, I don’t think are relevant today, because things have changed along the lines that you’ve said. But I like the way this summary talks about why viral marketing is still relevant; it actually says all the things you’ve just said: emotional and entertaining content, share-worthy ideas, timely and relevant, audience engagement. That’s what you’ve got to aim for. Those, by the way, are what the search results call the key elements of modern viral marketing. That’s what you said. So whatever you call it, maybe “viral” needs to be pushed away into the shadows a bit more, I think, because…
Shel Holtz: especially post-COVID,
Neville Hobson: Yeah. So, you know, I still hear people, not much, I have to admit, but I saw someone writing about it, I think on LinkedIn, about creating a viral video. And my thinking is that you’ve never been able to create a viral video; you’ve been able to create a video that may go viral, but people tend to not quite see that point of view. So what’s changed, I’d say, using video as an example: you create a piece of content that gets amplified through social media, and set aside algorithms for a second, that’s emotional and very entertaining, lots of people like it, and it’s part of your plan of other things you’re doing around product X or whatever it might be. And platforms today make it easier than they were even 10 years ago for content to spread quickly, acting, as Google’s AI Overview says, as a force multiplier for marketing efforts. So I think there’s still mileage in looking at this method, let’s call it viral marketing. It still has legs, in my view, if you approach it the right way and create it or look at it in terms of what you describe, which fits exactly what Google’s AI Overview says. This is not about Chewbacca Mom or the BBC interview guy, where it was spontaneous, people thought it was terrific, and they shared it; that was it, basically. It still shows up, by the way, when you search for it, and the BBC interview guy says it’s indelibly in his resume now, “the BBC interview guy,” as he’s called. So I think it is worth considering. Maybe don’t call it viral marketing; this is really part of your overall marketing strategy, where you employ all the things you mentioned, all the stuff that’s mentioned here, and indeed all the stuff the Nature report mentioned about the two types of virality. So I think it’s got legs still.
Shel Holtz: In terms of brands, I’m trying to think of something that went viral in the last few years and something that they wanted to go viral. There’s been plenty that has been spread around that I’m sure they wished hadn’t gone viral. But I can’t think of a marketing effort that has taken that path in the last few years. I just can’t think of one. So I’d rather put my energy elsewhere.
Neville Hobson: Maybe they’re not doing it. Yeah, they’re not doing it the way they did it before then. So they’re doing it, they’re calling it something else.
Shel Holtz: I remember reading an interview — I think we talked about this on the show at the time, even though it's probably a decade ago — with someone who spent money to have a video go viral. That was what the agency planned, and it didn't. He was upset until he started getting a lot of lead-generation response. It turned out the video had been seen by only about 1,500 people, but because of the tags — it was a very technical video — they were the right 1,500 people, and it actually led to return on investment. I remember somebody saying, "Having a hundred people see your video is a win if your target audience is the U.S. Senate, which has a hundred members." So how much reach do you want? You want to reach the right people. You don't want to reach everybody who's not going to buy your product or spend money on your service. I'd rather target. I'd rather build an audience. I'd rather focus on episodes and recurring content that brings people back over and over again. Maybe one of those will go viral, and if it does, it's a win.
Neville Hobson: Well, you've actually differentiated that quite well, because in the example you gave of the agency that tried to plan for a video to go viral — and I'm not quite sure how you would do that, to be honest — the actual outcome you described sounds like it was accidental; they certainly didn't plan that. But again, that was ten years ago, I think you mentioned. I'm pretty sure it would not be done that way at all now. If they were doing this today, I imagine it would be well structured and well thought through as part of their marketing activity — and maybe the word "viral" wouldn't appear anywhere.
Shel Holtz: Could be, but if anybody out there has planned for something to go viral and had success with it, drop us a line. We'd love to know about it. And that'll be a 30 for this episode of For Immediate Release.
The post FIR #485: Is It Time to Stop Trying to “Go Viral”? appeared first on FIR Podcast Network.
Hollywood erupted in debate and discourse when a company unveiled a completely AI-generated actress, Tilly Norwood. The public relations industry may be having its own Tilly Norwood moment with the introduction of Olivia Brown, a 100% AI PR agent that will handle all the steps of producing, distributing, and following up on a press release. Is this PR's future, or just part of it? Neville and Shel engage in their own debate in this short midweek FIR episode.
Links from this episode:
The next monthly, long-form episode of FIR will drop on Monday, October 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. Shel has started a metaverse-focused Flipboard magazine. You can catch up with both co-hosts on [Neville’s blog](https://www.nevillehobson.io/) and [Shel’s blog](https://holtz.com/blog/).
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel Holtz: Hi everybody, and welcome to episode number 484 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: And I’m Neville Hobson. A new name is stirring debate in UK public relations. Not a person, but an AI agent called Olivia Brown. Launched by Search Intelligence, an SEO and digital PR agency, Olivia Brown promises to automate the entire PR process from brainstorming ideas to writing press releases, identifying journalists, and even following up with them automatically. For £250 a month, it’s marketed as a digital PR assistant that can cut campaign time from 16 hours to one, according to its founder, Ferri Cassoni. Is this the future of PR? We’ll explore that question right after this.
Journalists and PR professionals are sounding alarms about Olivia Brown. Press Gazette reports that Olivia Brown has already been flooding inboxes with AI-generated press releases, complete with invented expert quotes and relentless follow-ups, all camouflaged to evade AI content detectors. Alastair McCapra, the CEO of the CIPR, calls this a threat to the very foundations of the profession, arguing that instant automation erodes judgment, relevance, and trust—the cornerstones of ethical communication.
Dominic Pollard at City Road Communications goes further, saying this kind of technology flips PR upside down. Instead of starting with a genuine story, it fabricates one designed to match a publication’s existing output—what he calls coverage for coverage’s sake. Supporters, meanwhile, frame Olivia Brown as an amplifier of authenticity, not a diluter of it. On LinkedIn, Cassoni describes it as a tool that frees up time for creative thinking while improving productivity.
Beneath the surface, Olivia Brown isn’t just about automation. It forces us to confront a deeper issue: when AI can generate stories, quotes, and even relationships on an industrial scale, where does that leave trust, the single most valuable currency in our profession? Let’s unpack what this means for communicators, for journalists, and for the fragile relationship between authenticity and efficiency in the age of AI-driven PR. I’ll start with this question: Is this the future of public relations or the beginning of its undoing?
Shel Holtz: Are you asking me? Yeah, it’s somewhere in between, I think. This does not trouble me very much. This is a situation where you have a tool that primarily cranks out press releases. It does some work preceding that, but ultimately it’s about cranking out press releases. We have discussions happening—you and I have discussed these conversations in previous episodes—about whether the press release is dead or not. I happen to believe it is not. But there is far more to public relations than press releases.
This is touted as a tool for the agency, not for the client. It’s my understanding based on the little I have read of this that you pay £250 a month to the agency for them to use this on your behalf. It’s not an interface that’s available to the client. Is that right?
Neville Hobson: No—the whole website area, the whole dashboard thing—it’s way beyond just press releases, really it is, according to what they say themselves.
Shel Holtz: OK, because all I’ve read is this LinkedIn post, and it makes it sound like it’s an augmentation of what the agency does. Either way, I’m largely untroubled by this. Some people will use it. Some people will recognize the value that you get out of having a human PR agency doing your public relations work for you.
I don’t know how adept generative AI is at this point at building relationships over the long term. It seems to me that outside the memory that some of the large language models have that recall previous conversations, you don’t get the benefit of having worked with someone over time and getting to know them and getting to know the issues and the challenges and the triggers and these types of things. But some people who maybe have less of a budget will use it. It could be a gateway, in fact, from using an AI public relations agency, if you will, to working with a broader mixed group of humans who are using AI in their work.
But the fact is that we have some data that can support this. First of all, research has found that most journalists are untroubled by PR people using AI. There was one from Cision this year that showed that a majority of journalists are not strongly opposed to AI-generated pitches—about 27% are strongly opposed. Concerns that arise are around factual errors, but that’s on the humans in the agency to address.
There is other research that says a quarter of press releases that are being distributed right now are already written by AI. So we’re going to see press releases written by AI. We’re going to see PR professionals using AI to help them in their work. And then you have Sam Altman, the CEO of OpenAI, joining a chorus of voices who anticipate that there will be billion-dollar companies run by AI with no humans in them at all. I don’t know how soon that’s coming, but I believe that it’s likely at some point.
I don’t see this as an authenticity challenge as long as there’s a human reviewing what’s going out before it goes out—whether it’s a PR person or the client. This is going to happen. But is it going to replace traditional public relations? I don’t see that. This is that old Mitch Joel line of “along with,” not “instead of.”
Neville Hobson: Okay, so you don’t feel uncomfortable about the fact it makes up people, makes up quotes, fabricates stories and all that.
Shel Holtz: This is a problem. No, I don’t feel comfortable with that. That’s something that has to be addressed. I can’t believe that they actually released it.
Neville Hobson: That’s at the heart of their service. That’s what they do. I’m reading Ferri Cassoni’s post on LinkedIn where he gives the steps of a typical “what happens.” Client joins the agency—meaning him. The PR exec starts a new campaign with Olivia. It’s all about how much time all this takes, but Olivia comes up with 100 ideas in five minutes. A PR executive selects five of those ideas.
The PR exec goes to the client and asks for expert tips for the five campaigns. The PR exec copy-pastes the expert tips into Olivia. It writes up five press releases with the client’s tips in five minutes. PR exec double-checks the content, tweaks it, and adds their own sprinkle to it if they want to. Once the release is written, Olivia scans 30,000 news articles on news outlets likely to pick up the story and identifies hundreds of journalists for each story separately with high accuracy, creating a personalized, on-the-fly media list—hours saved up to 20, it says.
Olivia sends the expert tips via email to all those hundreds of contacts for all five stories. Olivia follows up in two days to see if journalists need any extra info regarding the stories—hours saved, one. And it doesn’t say this, but others have commented, then relentlessly keeps emailing all those journalists: “Have you read it yet? What do you think? Are you going to run it?” Without Olivia, all this would take 16 hours, and with Olivia, it takes one hour.
I mean, come on. This is crazy, in my view. If this is the future of PR, I do not want to be part of it, I tell you. You’re right—we’ve seen automated press release stories before. I’m not sure we reported it on FIR, but I wrote a blog post about a podcasting tool that operates at industrial scale, creating podcasts completely run by AI presenters and so forth. I think I read it produces 3,000 shows a week. It blasts them all out, and they’re already all over Spotify.
So that is part of our future, I suppose. Is it the future we think it should be? I’m not sure. It troubles me hugely, Shel. I have to tell you that this is out there because many people—you know this as well as I do—are just going to take what it does and blast it out. That’s most of the criticism I’m seeing. So I’d love to hear from a client or someone who is reputable using this, saying what experience they got and how they feel about it. But right now, it seems to me this surely cannot be the future of public relations.
Shel Holtz: And like I say, it’s not the future of public relations—it’s part of the future of public relations. Right now, as I have said probably 500 times on this podcast, anybody can hang out a shingle that says “public relations” and do work that they claim is sound, professional public relations work with absolutely no background in it. We see press releases written by humans—and we know they were written by humans because we saw them before AI was around—that are just awful.
This is just another way to produce awful press releases, but it’s also a way to speed up a process with people in it. So yeah, you’re going to get slop. You’re going to get really bad stuff. You’re getting that anyway from the industry. If professionals can use this to speed up the process and give better outputs on a quicker schedule to clients—with that human intervention of checking the facts—and I’m not necessarily saying it’s this one, this Olivia Brown, but something like it that does a better job of research and fact-checking…
I mean, Ethan Mollick just posted over the weekend that people claim, “I use ChatGPT-5 and it makes stuff up.” And he says, not if you use ChatGPT Thinking—it makes up far less. But they don’t know to go in and do that. So we need a tool like Olivia Brown that knows how to basically fact-check and think about what it’s doing rather than just spew the first thing that comes into its digital mind.
But this is not the be-all and end-all of PR. Even looking at this entire process outlined in this LinkedIn post, it’s still ultimately about sending out press releases and checking with the reporters. You know that there is public relations work that goes far beyond press releases. There’s PR work that never involves sending out a press release. There’s crisis communication. There’s reputation planning and the kind of work that we do around building or shifting reputation.
I always like to remember the case—I think it was Burson-Marsteller working with, I believe, StarKist Tuna—that was caught up in the boycott of canned tuna as a result of the dolphins that were being caught in the nets. The activists who were looking for a change in the rules on how they caught tuna initiated this boycott. StarKist already had policies around dolphin catches. They were already on the side of the activists, but their product was caught up in this.
Working with Burson-Marsteller, they were able to get the word to the activists in a negotiation with both parties at the table in the same room—could be done over Zoom now, I suppose. They came to the conclusion that, “You’re the good guys,” so the boycott was amended to say “except StarKist.” There wasn’t a single press release involved there. And Olivia Brown is not configured to do any of that.
So I don’t think traditional public relations is going anywhere. This is just a further enhancement of the automated press release concept. We interviewed Aaron Kwittken—he’s doing that with PRophet. All this does is add the ideation at the front end and the journalist contact at the back end. Other than that, it’s the same product. I just don’t see it as that big a deal.
Neville Hobson: But it makes up people; it makes up facts. That’s at the heart of it. This is what’s here now, and that’s what it’s doing.
Shel Holtz: Olivia Brown—this particular product—is not good. The concept is fine. That’s what I’m saying.
Neville Hobson: I’m not talking about the concept. The concept is this—this is what’s on the market now. The point is to talk about this in the context of what’s happening right now. It’s attracting a lot of comments here in the UK—maybe it hasn’t hit the US radars yet—but most comments I’m seeing are very critical.
Shel Holtz: If it makes up people, it’s problematic. It’s the first shot out of the gate.
Neville Hobson: This is not a good thing at all. The notion of this—industrializing PR for coverage’s sake—automates the entirety of it, including the follow-up to journalists, the relentless pursuit of answers. What about your reputation when people start thinking, “I don’t like these people; they keep hitting me up with all this stuff”? Is this the answer to your dreams to get the coverage you’re after? And never mind how you do it—the end justifies the means, right?
Shel Holtz: Well, this is version 1.0 of Olivia Brown, right? They’re going to get feedback and presumably come out with 1.5 that addresses some of these things. I’m not defending Olivia Brown.
Neville Hobson: I wouldn’t hold your breath on that, Shel. I wouldn’t.
Shel Holtz: You’re going to have press releases with fabrications get called out, and people are going to point to Olivia Brown as the source of the releases that led to that coverage. You’re going to have clients who are going to be upset when they’re called on the carpet for inaccurate or completely fabricated content. Either this thing is going to be a dramatic failure as clients publicly turn on it, or they’re going to make tweaks and improvements—the way most software is improved.
Either way, you’re going to see other people look at this and go, “I can do this better.” And we’ll see more of these. We’ll see some that start to take on other elements of public relations. PR is going to be AI–human hybrid—there’s no question about it. And the workflow that’s outlined in the LinkedIn post seems ripe for AI augmentation. But they’ve got to fix the problems with this. No question. It’s unacceptable to be fabricating in public relations. Accuracy matters.
Neville Hobson: I don’t see any signs whatsoever that that’s on the radar to do. This is the product. Your point about AI and automation—I don’t disagree with that at all; that’s what we’re going to see. But this, though, is a whole different thing, it seems to me.
Shel Holtz: We should ask them. Let’s interview their CEO.
Neville Hobson: Yeah, maybe. I’d like to find out what others say about them—if they’re using them—to see if there’s anything worth talking about. Is this so revolutionary that we would want to do that? So anyone listening who uses Olivia Brown and would like to share their experience, do get in touch. We’d love to hear it.
Shel Holtz: [email protected]. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #484: Is Olivia Brown the Tilly Norwood of PR? appeared first on FIR Podcast Network.