For Immediate Release

Shel Holtz

In addition to news items and in-depth discussion of trends and issues, you'll hear the Internet Society's Dan York report on technologies of interest to communicators and Singapore-based professor Michael Netzley explore communications in Asia.

  • 25 minutes 54 seconds
    FIR #510: Should Companies Embrace Shadow AI?

    Employees have long found ways to use software tools to get the job done, even when those tools are not approved. It’s called Shadow IT, but ever since generative artificial intelligence hit the scene in 2022, employees have adopted a new version: Shadow AI. The company approves Microsoft Copilot, but employees opt to use their smartphones or personal laptops, along with their personal accounts with ChatGPT, Gemini, Claude, Midjourney, or whatever best suits their needs.

    For most companies, this is a problem to be addressed through repeated policy announcements and vigorous crackdowns. One company, though, took a different approach. In this short, midweek FIR episode, Neville and Shel outline what the company did and how communicators might advocate for a version of this approach to aid AI adoption and speed up productivity gains.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, April 27.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript

    Shel Holtz: Hi everybody, and welcome to episode number 510 of For Immediate Release. I’m Shel Holtz.

    Neville Hobson: And I’m Neville Hobson. There’s a quiet tension playing out inside many organizations right now. On one side you have leadership teams, IT, legal, and compliance, all trying to put structure, governance, and control around how artificial intelligence is used at work. On the other side you have employees who’ve already moved on. They’re not waiting for official tools. They’re not sitting through pilot programs. They’re not asking permission. They’re opening ChatGPT on their phones. They’re using Claude in a browser tab. They’re experimenting quietly, often invisibly, finding ways to make their work faster, easier, and sometimes better. And in many organizations, this shadow AI behavior is still being treated as a problem — something to restrict, monitor, or shut down. It’s a topic Shel and I discussed on this very podcast in episode 419 nearly two years ago, and it hasn’t gone away.

    Neville Hobson: In fact, recent data suggests it’s accelerating. A study last November by BlackFog and Sapio Research found that nearly half of employees surveyed in the UK and US are using unsanctioned AI tools. Even more striking, 60% said they would take security risks with those tools if it meant meeting a deadline. So this isn’t fringe behavior — it’s become normal. An article in the Harvard Business Review this month argues that instead of treating unauthorized AI use as a compliance issue, organizations should see it as a signal — a sign that people are already finding value in these tools, even if the organization hasn’t caught up. We’ll explore that idea in just a moment.

    Neville Hobson: The article calls this the hidden demand for AI inside your company. And when you look at it through that lens, the picture changes quite dramatically. Because instead of asking, “How do we stop this?” you start asking, “What are we missing?” The piece goes further than theory. It looks at what one organization actually did when it recognized this dynamic: BBVA, a Spanish multinational financial services company with more than 125,000 employees. Rather than clamping down on shadow AI use, they moved quickly to provide a secure enterprise environment. But more importantly, they didn’t try to control everything from the center. They took a different approach. They identified and empowered what they call “champions” and “wizards” — the people already experimenting, already curious, already building things. They created a network, a community of practice, a way for ideas, use cases, and practical solutions to spread peer to peer across the organization.

    Neville Hobson: And the results, at least as reported, are striking: thousands of employees actively using AI tools, thousands of internally created applications, and measurable time savings of hours per person every week. But perhaps the most interesting part isn’t the numbers — it’s the philosophy behind it. The idea that successful AI adoption doesn’t start with a perfectly designed top-down strategy. It starts by recognizing that innovation is already happening, just not where leadership expects it. So the question becomes: do you try to control that energy, or do you find a way to harness it? And that opens up a much broader conversation, one that goes well beyond technology. It touches on leadership, trust, and culture — on how change actually happens inside organizations. And, importantly for communicators, on how you surface, legitimize, and guide behavior that may already be happening under the radar.

    Neville Hobson: Because if employees are already using these tools — and most evidence suggests they are — then silence or restriction alone isn’t really a strategy; it’s a gap. So in this conversation, we want to explore that gap. What shadow AI really tells us about organizations today, whether the BBVA approach is something others can realistically replicate, and where the risks still sit, because they have not disappeared. And we should be clear: BBVA may be an outlier. It’s a highly data-mature organization with strong leadership alignment. Many organizations don’t have that foundation. So the question isn’t just whether this works — it’s whether it can work anywhere else. And what that means for the future of work, and for the role communicators play in shaping that future. Shel?

    Shel Holtz: Well, a few thoughts, starting with the fact that BBVA has the financial resources to provide a secure environment for those tools that employees are using. There are many organizations whose IT budgets are razor thin and don’t have those resources, so they would need to figure something else out. But I think there’s a caution here worth raising. The numbers from BlackFog are real, even if the framing from the Harvard Business Review is optimistic: 34% of employees using free versions of tools when paid, approved versions exist; 58% of unsanctioned users on free tiers with no enterprise protections. The reframing from threat to signal doesn’t eliminate the exfiltration risk — it reframes how we need to respond to it.

    Shel Holtz: Communicators should be careful not to let the BBVA-style narrative become an excuse to ignore governance. The right frame is: harness the demand, don’t suppress it, and build the governance at the same time. Employees using unsanctioned tools and putting secure data and company information into them — that’s a governance risk, and I don’t think we can ignore it. I mean, I think what BBVA did is great, and I think they baked it into some governance while looking at a new approach they could afford to take. But for many organizations, governance is still a requirement.

    Neville Hobson: Well, I agree. It’s important, and it’s not something to ignore by any means. I think, Shel, you fleshed out the survey I mentioned a little, and it’s useful to have that level of detail. But the big question for me is this: if that’s the picture in many organizations, then compared with earlier data, this is getting worse, or rather, it’s happening more frequently. People are just going ahead and using what works for them as opposed to the official thing. What is that a symptom of? Maybe a lack of trust? It’s probably a mix of things. And to me, the communicator’s role here seems to be, on the one hand, to help people understand what the tools can do for them, and on the other, to help the organization understand that this issue needs addressing. People aren’t using the approved tools. They’re doing stuff on their own, and that isn’t good.

    Neville Hobson: You mentioned security risks. The Harvard article goes into some detail about that, as indeed do the people who conducted that survey. You can picture how severe the risk is. We’ve seen examples in recent months of organizations that have suffered from unauthorized use of unapproved software tools — not necessarily generative AI tools, but software certainly. And it’s a big deal. So the question — do you try to control all this and look at ways to stop it? — we asked this very question two years ago in our conversation, and we could probably just insert the recording from then and replay the answer. But let’s talk about it. Personally, I don’t think they should try to stop it. That’s a fail. There’s no win in that at all, certainly not for the organization. So how would communicators go about that, do you think?

    Shel Holtz: Well, I’m not suggesting that organizations crack down on this and become Big Brother, looking at the tools that people are using, especially when they’re using them on their personal phones or personal laptops. But there are definitely things communicators can do. The first is to surface and amplify the internal use cases — not just the fact that people are using these tools, but what they’re using them for. When the security people and the legal people find out that this is actually driving effective work product from these employees, I think there might be more appetite for figuring out a way to bake this into the governance documents and policies the organization has established.

    Shel Holtz: And I think giving employees permission narratives — telling them it’s safe to experiment, letting them know how to do it, and suggesting where the guardrails are — matters. So if you are using shadow AI, here are the things to be careful about. Let them know what the risks are and how to avoid falling into those traps. Communicators can also translate the IT and legal guardrails into plain language that doesn’t read as prohibition, because prohibition just leads to negative thoughts from employees about the organization, and then they’ll just continue using what they’re using. And then there’s collecting and routing the demand signal back to leadership. Why are employees using these when there are approved tools around? What are the advantages? So that leadership can make investment decisions that match the patterns of usage employees are actually engaged in. There’s a lot of work here for communicators that goes beyond simply saying, “Don’t do this.”

    Neville Hobson: Agreed. And in fact, you can learn from much of what BBVA did, even if you’re not an organization with that established foundation and 125,000 employees. They did things most companies aren’t going to be able to do. For instance, they reached an agreement with OpenAI and deployed a customized version of ChatGPT Enterprise in a secured, exclusive cloud just for the company. The reasoning is interesting. What the Harvard Business Review report says is that the strategic decision was clear: it was more dangerous to have unmanaged, hidden AI usage than to rapidly deploy a managed, secure solution that aligns with existing needs. Most companies aren’t going to be able to do that. So it comes back to perhaps what you’ve just proposed — explaining it to people, the pros and cons, the risks, and so forth.

    Neville Hobson: But I think you need more than that, too. Otherwise, you’re going to have significant numbers of people who will ignore it and just go ahead anyway with what they’ve been doing. So maybe elements of what BBVA did — for instance, the network of internal champions and expert wizards to spread knowledge, rather than the formal top-down communication you might expect. You’d have people within the organization who are knowledgeable, who have a history of responsible use themselves, who can help explain to others and help them replicate that. You end up, I think, with steps toward broad compliance that everyone can buy into. That would be helpful, because I can see that the idea of anyone in an organization of whatever size just doing their own thing with whatever tool they like is not a good idea at all.

    Neville Hobson: And that isn’t unique to this. We’ve had that kind of conversation in decades past about software. I remember when Hotmail first came out, and when Microsoft Network first came out, the arguments in organizations — including the one I worked for at the time — were, “You’re not allowed to use this on your company laptop, so use it on your own,” stuff like that. That’s definitely not a good thing. So you need to act to address issues like that so that people trust you and respect you and are willing to follow a restriction — or a behavior change, if you like — that would help. It’s interesting, the learning you can get from BBVA’s example, even though you’re not an organization that size with a budget to match. It’s a lot about education. It’s trusting employees, absolutely, as you pointed out, Shel. But I think that’s a two-way street. You need to have a quid pro quo: if you have these freedoms to use whatever you want, you need to do it responsibly. Share your learnings with others in the organization. Things like that. To me, that seems like a really good place to communicate.

    Shel Holtz: Yeah, there’s communication happening at BBVA. They have 11,000 active users and 4,800 custom tools being used by those folks. That didn’t happen because the communications department posted an article about them. This was peers talking to their peers about what was working. It validates something you and I have been talking about for years, which is that authentic, lateral, employee-to-employee storytelling beats top-down cascades every time.

    Neville Hobson: Precisely.

    Shel Holtz: But it is communication. And why wouldn’t that be something the internal communications department jumped on and helped to facilitate — providing the channels for that, rather than the sneakernet that’s probably happening now? And also, because they’re engaged and trying to keep this from happening below the surface, they’re in a position to identify the use cases worth taking to leadership. The BlackFog survey you referenced found that almost 70% of C-suite executives believe speed is more important than privacy or security. So if people are getting things done faster — if you can demonstrate that there actually is productivity improvement happening, and it’s because of the tools employees are using that aren’t approved — I think that’s motivation for leadership to look at either approving those tools or finding ways to allow people to use their own accounts while protecting the integrity of their data.

    Neville Hobson: Yeah. The results the Harvard Business Review reports from BBVA are worth noting, even though the scale isn’t what many companies would experience. They talk about 80% of usage of the system they set up coming through direct chat prompting, and the remaining 20% through employee-created GPTs. Now, this is not shadow AI — it was part of the rollout of what they did. But these numbers are quite impressive. Over 83% of employees now use the system every week, averaging 50 prompts per week. That’s above comparable enterprise deployments, says the review, quoting OpenAI. Users report average time savings of two to five hours per week — a number worth noting. More than 4,800 custom GPTs have been created internally, and they’re used three times more frequently than the enterprise average. So they’re ahead of the game in that regard. The article goes into more detail about which departments are more active than others, and so forth.
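    To put those reported figures in perspective, here is a back-of-envelope aggregation of the numbers as cited in the episode (a rough sketch only: the assumption that the two-to-five-hour saving applies to every weekly user is ours, not the article’s):

        # Back-of-envelope aggregation of the BBVA figures cited in this episode.
        # Assumption (ours, not the article's): the reported per-person saving
        # applies uniformly to all weekly users.
        employees = 125_000            # BBVA headcount cited in the episode
        weekly_user_share = 0.83       # "over 83% of employees now use the system every week"
        hours_low, hours_high = 2, 5   # reported time savings per person per week

        weekly_users = employees * weekly_user_share
        print(f"Weekly users: ~{weekly_users:,.0f}")
        print(f"Hours saved per week: ~{weekly_users * hours_low:,.0f} to ~{weekly_users * hours_high:,.0f}")
        # Weekly users: ~103,750
        # Hours saved per week: ~207,500 to ~518,750

    Even at the low end, that is on the order of two hundred thousand working hours a week across the company, which is why the philosophy behind the rollout matters as much as the raw numbers.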

    Neville Hobson: It also prompted a thought in my mind: the other surveys I’ve seen and other reporting on the resistance from leadership in organizations — that isn’t minor. It’s not a little thing. It happens, unfortunately, too frequently. I’m thinking of keystroke logging on employee usage, auditing computers covertly without telling them, watching which apps they’ve installed — and indeed, probably more common, your company laptop refusing to install things that aren’t on an approved list, or reporting to IT that you tried to install stuff. This is a dreadful situation in organizations. It’s common, but we’re going to see more of it, I think, because that seems to be the way of the world these days when it comes to distrust. This is a diminished-trust environment we’re talking about. So in all of that, where do we sit in terms of enabling stuff like this? We can see the advantages of allowing employees to use tools like this. I think the better way is to try to do something within the framework of the organization — not, “Oh sure, go ahead and use ChatGPT whenever you want on any device, no big deal.” I wouldn’t be keen on that. I wouldn’t stop it, but I would look at ways of weaning people off that approach. We have to help them and encourage them to do this. And that, I suspect, is a hard task for communicators — to persuade leadership to do that if the climate in an organization is resistant to it anyway.

    Shel Holtz: Well, I think it is a hard sell to leadership, but we have data. We’re supposed to be engaging in two-way communication and facilitating two-way communication. One of the roles of internal comms is listening. And it doesn’t have to be through direct information that you get from people through focus groups or surveys — it could be this BlackFog survey. When 49% of employees are using unsanctioned tools, and 63% think that’s fine as long as there’s no approved option for what they want to do, you may look at that as rogue behavior, but you can also look at it as market research. And communicators are the people in the best position to translate that data into something actionable for leadership. You take that to leadership and say, “Look, this is what’s happening. We’re the ones who can interpret what the behavior means and pass that along to leadership.”

    Shel Holtz: I think part of our role is that listening through the data that’s already out there — and maybe what we can determine is going on in our own organizations — and taking that to leadership and saying, “Look, this isn’t going to go away if you crack down on it. It’s not going to go away if you block installation on company laptops. People have their own phones. People have their own laptops and tablets. This is going to continue.” And this isn’t new. I mean, this goes back to the earliest days of computers. I think I’ve mentioned this once or twice on the show, but I needed to produce charts and graphs in the mid-‘80s, and I wanted to use Harvard Graphics because somebody had shown it to me and it was what worked, and the company had a different program that was terrible. So I just used Harvard Graphics. I bought my own copy and installed it. There were no blocks back then — you put the floppy disks in the drives and it installed. People are going to do what they need to do to get the job done. Maybe some will pay attention to what the official rules are, but I think the governance needs to be flexible enough to adapt to this. I applaud BBVA for what they did. Again, I don’t think every organization is in a position to replicate it, but I think you can take lessons from what they did.

    Neville Hobson: You can. Not everyone can roll out what they rolled out — enterprise licenses and so forth — but some of the things they went about, and how they went about them, definitely. One thing the review article points out quite strongly — a very, very good thing — is that they say, toward the end of their conclusions, that in whatever you do, there must be a hard human-in-the-loop rule. Human employees should always own the work. There should not be direct writes to core systems. Internal GPTs need quality scores and guardrails. They specify scope and context, include samples, and so on. This is simple, scalable, and non-bureaucratic.

    Neville Hobson: So that’s something that kind of ties back into this emerging phrase — if it’s even emerging — of human-centered AI. Let’s look carefully at this. It’s about people first, technology second, and the human needs to be in the loop. The “hard human,” as the review calls it — I interpret that as meaning someone who’s actually cognizant, aware, and able to act upon things that matter, to keep humans in the loop, to own the work, not the technology. You’ve got to think about things like that. And I think for communicators, that’s an important aspect of what they do — having in mind that element that is about the people first. So when you’re trying to persuade leaders to take a course of action you’re recommending, this needs to be in your mind too: that the humans need to be in control.

    Neville Hobson: I have to say, this is great. I love stories and examples like this. I love them more than the ones that talk about disasters, although those are useful to know about as well. Yet I feel, as communicators, we have a constant, constant task on our hands to explain this to people in organizations, to help others understand. I think this is a good example — the shadow AI element. For me, if I were actively involved in an organization as the communications person, I’d be looking at: how do I persuade people not to do that? How do I persuade people to use the approved stuff? But at the same time, how do I persuade the leaders to make sure they offer employees stuff that actually works, that’s in line with their expectations, all that kind of stuff? There’s a bit of a job on their hands. And if budgets get in the way, then you’ve got an even harder job. But hey, that’s what we’re here for. That’s part of what we have to do.

    Neville Hobson: These are good examples you can learn from. There are elements you could start on. And I think, like most things, Shel, you need to say, “OK, fine — this idea has a dozen constituent elements, and let’s just start with two.” So you don’t try to think, “Oh my god, this is a massive project. How on earth can we do this?” You look at just a couple of things. I like another point the Harvard Business Review makes: ensure that managers know what they’re doing. You can’t expect managers to be persuasive in encouraging others to use AI if they’re not good at it themselves. So there’s another element — you need to train them well, says the Harvard Review. At a minimum, they should learn how to write staffing notes, sensitive communications, and KPI reviews with AI help. So there are some things you could do straight away as a communicator in an organization. I’d say: good luck and godspeed, and it’ll all work out in the end.

    Shel Holtz: Yeah, a manager’s role in all of this is probably an episode in its own right. I would just reiterate the point you made about the human in the loop. This is a governance element that should be overarching — not applying just to shadow AI, but to all use of AI in the organization. It should be a primary consideration in governance, not to turn things over to AI. Otherwise, you end up with fake citations going out to clients that paid a million dollars for your work — another little slap on Deloitte’s wrists. And that will be a 30 for this episode of For Immediate Release.


    The post FIR #510: Should Companies Embrace Shadow AI? appeared first on FIR Podcast Network.

    21 April 2026, 12:54 am
  • 20 minutes 49 seconds
    FIR #509: Does Corporate Content Need Copyright Protection?

    When bad actors use AI tools to clone a musician’s voice and upload synthetic versions of their songs, they can then file copyright claims against the original artist’s content — and win, at least initially. That’s because the systems platforms use to validate copyright claims are automated and configured to treat whoever files first as the rightful holder. The result: musicians like Murphy Campbell, a folk artist from North Carolina, lose both revenue and control of their own creative identity.

    The same mechanism works just as well against any organization that publishes audio or video content online. In this midweek episode, Shel Holtz and Neville Hobson break down how the scam works, why it matters to communicators, and what you should be doing right now — before an incident forces your hand.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, April 27.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript

    Neville Hobson: Hi everyone and welcome to For Immediate Release, this is episode 509. I’m Neville Hobson.

    Shel Holtz: And I’m Shel Holtz. And today we’re going to talk about something else that communicators need to worry about. I think we need to develop a worry list for communicators. This one starts with a tale about a folk singer from the mountains of Western North Carolina. Her name is Murphy Campbell. She plays banjo and dulcimer and records old Appalachian ballads, some of them written by her own distant relatives. And she posts videos of herself performing in the woods. She has about 7,800 monthly listeners on Spotify. And she is, as Shelly Palmer put it in a recent column, exactly the kind of artist the copyright system was designed to protect.

    In January, some of her fans started messaging her about songs on her Spotify profile that she had never uploaded. Someone had taken her YouTube performances, run them through AI voice cloning tools, and posted synthetic versions of her songs under her name on streaming platforms. These fake tracks, not to put too fine a point on it, were really bad. Her dulcimer sounded like — and these were her words — a warbled metallic mess. Her voice had been deepened and auto-tuned into what she called a bro country singer. But here’s where it gets interesting for those of us in communications, because that’s not the end of the story. It didn’t stop at impersonation.

    Whoever uploaded the fakes through a legitimate music distributor called Vydia (V-Y-D-I-A) then filed copyright claims against Campbell’s original YouTube videos — the very videos the AI had been trained on. Because YouTube doesn’t use humans to review initial copyright claims, Campbell stopped earning revenue on her own content. That revenue started going to the person who had filed the copyright claims.

    She described herself as being in a weird limbo where “I’m telling robots to take down music that robots made.” Shelly Palmer called this a reverse copyright scam, and, speaking off the record with other content creators, he confirmed that it’s more common than he had believed.

    Now, I know what you’re thinking — music streaming platforms, artists, what does this have to do with me? And the answer is everything. Because the mechanism that elbowed Murphy Campbell out of earning royalties for her own music will work just as well against any organization that publishes content on platforms with automated enforcement systems. That is virtually every organization that has a YouTube channel, a podcast feed, or any kind of public video or audio presence.

    So here’s the structural problem as Palmer frames it. The copyright system we have was built on a foundational assumption that the first entity to register a claim is the rightful owner. That assumption held when human creativity was the bottleneck. It breaks completely when AI can generate a synthetic version of any content in seconds using any voice. Think about what your organization puts out there publicly — executive speeches, earnings calls, thought leadership videos, branded audio, training content, podcasts, content marketing pieces. Every one of these is a potential training data set for someone who wants to clone your voice, your leaders’ voices, and then upload a synthetic version through a low-cost distributor. We’re talking about something that costs $25 to $90 a year. Then they file a claim against your legitimate content before a human ever reviews it.


    Shel Holtz: That means the system is going to see them as the first one to file that claim and assume they are the legitimate copyright holder. Now, Rolling Stone confirmed that this isn’t an isolated case. Paul Bender, Veronica Swift, Grace Mitchell — these are just a few of the artists who have faced the same attack. One musician even ran an experiment he called Operation Clown Dump, uploading fake content under his colleagues’ names across platforms. His success rate was 100%.

    So what do communicators need to do? First, audit your public content footprint. Do it now, before an incident forces you to. Know what you’ve published, where it lives, and what revenue or visibility is attached to it. Second — and here’s something that’s new for a lot of communicators — register your copyrights. Formal registration is the prerequisite for meaningful legal recourse in the United States. Third, build a rapid response protocol for platform disputes. The organizations that survived these attacks quickest were the ones who knew who to call and knew what to say. And fourth, have this conversation with your legal team today, not after something goes wrong.
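    As a concrete illustration of that first step, here is a minimal sketch of one way to begin auditing a public content footprint, using the YouTube Data API v3 to inventory a channel’s uploads. The channel ID and API key are placeholders, and a real audit would extend this to podcast feeds and every other platform where you publish:

        # Minimal sketch: inventory a YouTube channel's public uploads as a
        # starting point for a content-footprint audit.
        # API_KEY and CHANNEL_ID are placeholders.
        from googleapiclient.discovery import build

        API_KEY = "YOUR_API_KEY"
        CHANNEL_ID = "YOUR_CHANNEL_ID"

        youtube = build("youtube", "v3", developerKey=API_KEY)

        # Every channel has an "uploads" playlist holding all its public videos.
        channel = youtube.channels().list(part="contentDetails", id=CHANNEL_ID).execute()
        uploads_id = channel["items"][0]["contentDetails"]["relatedPlaylists"]["uploads"]

        inventory, page_token = [], None
        while True:
            page = youtube.playlistItems().list(
                part="snippet", playlistId=uploads_id,
                maxResults=50, pageToken=page_token,
            ).execute()
            for item in page["items"]:
                s = item["snippet"]
                inventory.append((s["publishedAt"], s["title"], s["resourceId"]["videoId"]))
            page_token = page.get("nextPageToken")
            if not page_token:
                break

        print(f"{len(inventory)} public videos found")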

    Murphy Campbell eventually got Vydia to withdraw its claims, but only after her story went viral. Most organizations won’t have that option. Your story won’t go viral. The bad actor doesn’t need to win permanently — they just need the automated system to act before you do. And that is the lesson, and it’s one we’d better learn from musicians before we have to learn it the hard way.

    Neville Hobson: Extraordinary, isn’t it, Shel? I guess you could call it a new phenomenon, only in the sense of the speed with which this can be done. I must admit, I’m astonished that the system is such that the first person to file the copyright claim is assigned ownership. Maybe that’s similar here in the UK — every jurisdiction is different, of course — but that’s rather unsettling. It obviously goes back to a time when people weren’t exploiting the system the way they are now. There are similar examples here in the UK of this kind of activity, where people unwittingly find that their content is being misused and misrepresented. And although no major artists have been hit that way, as far as I know — though I may be wrong about that — I did see an article noting that YouTube allows some users to clone the voices of stars like Charli XCX and Sia, with their permission. But unauthorized AI covers of artists like Harry Styles — hundreds of thousands of copies — are a widespread phenomenon, and one that barely registers in mainstream news.

    A number of artists, a bit like your example of Murphy Campbell — there’s one I’ve heard about, Greg Rutkowski, a Polish-born artist known for his work on Dungeons & Dragons, who found his style being used in over 400,000 AI prompts, raising serious concerns about the obsolescence of human artists. And to your point about what communicators should watch out for: your corporate communication messaging that’s in audio, your CEO on an earnings call that’s been recorded and distributed. So never mind video — audio alone, at that scale of 400,000 AI prompts, is not a good situation. If you project the thinking out, this is utterly relevant to anyone publishing audio or audiovisual content online.

    I find it astonishing that some platforms, notably Spotify — which features prominently in a lot of reporting on this — are being used to literally steal someone’s intellectual property by replicating it. And I think it reinforces the point that registering copyright isn’t an idle exercise. It’s something that should be front of mind, and it does other things for you as well as the owner of the property.

    Something as simple as displaying a current copyright notice on your website — it’s remarkable how many sites I come across that still show “Copyright 2016,” never updated. Displaying a current notice signals that the business is active and its information is up to date. There are also tools to protect against AI scraping, though how effective they are is still unclear. Creative Commons licensing is another option, setting out the terms under which people can use your content — though that requires everyone to play by the rules, which frankly isn’t always the case these days.
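    On the AI-scraping point, one widely used (if entirely voluntary) measure is a robots.txt rule asking known AI training crawlers to stay away. A minimal sketch follows; the user agents shown are the documented crawler names for OpenAI, Anthropic, Common Crawl, and Google’s AI-training crawler, and compliance depends entirely on the crawler choosing to honor the file:

        # robots.txt: ask known AI training crawlers not to use the site.
        # Voluntary: a crawler that ignores robots.txt is unaffected.
        User-agent: GPTBot
        Disallow: /

        User-agent: ClaudeBot
        Disallow: /

        User-agent: CCBot
        Disallow: /

        User-agent: Google-Extended
        Disallow: /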

    Nevertheless, you’ve got some protection — or at least the peace of mind that you’ve taken steps. But it really is quite extraordinary, isn’t it, Shel? When I looked into what’s happening in the UK, I came across a recent movement — over a thousand UK musicians, including Paul McCartney, Annie Lennox, and Damon Albarn — who released a silent album to protest proposed legislation that would allow AI companies to train on copyrighted material without consent. It struck me as a real head-scratcher: why would a government enable that to happen?

    Shel Holtz: Probably very effective lobbying from the AI companies, I’m sure, is behind that.

    Neville Hobson: No doubt, no doubt. But there are other things going on — organizations like the Musicians’ Union and Equity campaigning for better copyright protection, consent, and fair compensation for creators. It’s not getting much mainstream coverage, but activity is happening behind the scenes. Nevertheless, the example of Murphy Campbell and others represents a genuine threat that you need to be aware of if you’ve got content online that matters to you. Never mind the “they shouldn’t be doing this” argument — the point is, if it’s important to you, have you thought about this?

    Shel Holtz: If you think about the days before the web, copyright wasn’t something most people had to worry about that much. Professional artists with record deals had people to handle it. Same with authors — someone like Stephen King never had to worry that somebody would be the first to file a copyright claim under his name and siphon off his revenue. But now you have artists who don’t get record deals — like Murphy Campbell — publishing on YouTube and Spotify, building small followings, and making a reasonable living. This is the working-class musician concept we talked about, oh, it’s got to be 15 years ago now.

    The fact is, you can use Spotify and YouTube to build a following, play some small clubs a few times a year, and make enough to pay the mortgage and put your kids through school. You’re not going to get the penthouse suite from playing to 100,000 people, but you can make a living. But this has also opened up the ability for bad actors to take advantage of that. And now with AI able to reproduce your voice and create new music at scale, all the pieces are in place for this kind of theft. Unless you’re able to get your story to go viral — as Murphy Campbell did — it’s not clear what you can do, because YouTube and Spotify have set up systems that automate this process with no human review. When you used to register with the copyright office yourself, a human was checking. So it’s not likely most organizations have revenue-generating content online — though I’m sure some do, and I’ve actually argued there are ways to use content to generate revenue.

    For example, I’ve always loved the idea of a Webcor YouTube video series called “Building for Girls,” where our employee resource group, Women of Webcor, does a five-minute lesson every two weeks on construction to get young girls interested in STEM and engineering careers. Get enough views and YouTube starts paying you. If you don’t copyright-protect that content, someone can come along, produce similar videos, claim the rights, and suddenly your revenue is going to someone else. But even if you’re not producing revenue-generating content, there are other reasons to ensure nobody else can claim ownership of what you create — especially as content marketing demands more and more output. So yes, register that copyright.

    Neville Hobson: Yeah, it made me think about watermarking for written content — though I’m not sure there’s something truly effective offering the same protection for audio and video yet. And even if there were, you’ve got situations like Murphy Campbell’s, where it’s her style and tone — the whole persona that defines her music — that’s being copied. And you don’t know about it until strange things start happening: your revenue drops, someone says “I love that new song you just published,” and you discover it wasn’t you. Or you read a review and think, wait — I didn’t write that.

    Shel Holtz: Or “I hate that new song you published” — in Murphy Campbell’s case.

    Neville Hobson: Exactly. I’m sure people are working on the technology. You’ve got digital rights management, which isn’t new, but I’m not sure it helps here because the issue isn’t copying your content outright — it’s imitating or repurposing it at scale. Hundreds of thousands, or millions of instances. I think the platforms need to do far more than they currently are. It’s a similar argument to what we’re hearing here in the UK about Meta and X doing nothing effective to protect children. This is in the same territory, and it needs a lot more from those platforms — who are making serious money throughout all of this. As to what exactly “more” looks like, I’m not entirely sure, but they need to do more.

    Shel Holtz: Yeah, and they probably won’t until there are some high-profile, visible court cases that create real reputation issues for them — then they’ll take action. The easy thing to do right now is simply register the copyright. That’s your protection. When someone imitates you, or claims the content you produced is theirs, you have legal standing to act. That’s why you need to have this conversation with your legal team.

    But I wouldn’t wait for either the platforms or the government to do anything. They’re both reluctant to act. You have the ability to do something about this right now, and it’s just a matter of working with your legal team and filing those copyrights.

    Neville Hobson: Yeah, exactly. And even using Creative Commons licensing — if you’re an individual without all the formal resources, but you have a niche following, even that’s a start. Keep a record of every iteration of everything you’ve created — “I did this in 2017, here’s proof, backed up here.” That gives you something to stand on, a way to demonstrate that you can act if someone uses your content. And if you don’t do this, there’s another consequence worth considering: your original content gets buried in search results because the AI-generated imitations have somehow accrued better signals to rank higher. That kind of pollution from AI slop is its own problem.

    Shel Holtz: Yeah — and then people stop paying attention to your content altogether because they’re so fatigued by the AI slop that they tune everything out. But at least this one has a solution communicators can follow: something new to add to the copyright to-do list. And that will be a 30 for this episode of For Immediate Release.

    The post FIR #509: Does Corporate Content Need Copyright Protection? appeared first on FIR Podcast Network.

    14 April 2026, 7:26 pm
  • 20 minutes 39 seconds
    FIR #508: Inside AI’s Human Raw Material Supply Chain

    When workers lose their jobs, many turn to gig work to earn income while waiting for new opportunities. Increasingly, companies that hire gig workers are shifting from delivering food or sharing rides to creating content to train AI systems. This raises various communication and ethical issues. Neville and Shel explain what’s happening and discuss the implications in this short midweek episode.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, April 27.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript

    Shel Holtz
    Hi everybody and welcome to episode number 508 of For Immediate Release. I’m Shel Holtz.

    Neville Hobson
    And I’m Neville Hobson. Over the past few weeks, I’ve come across a set of stories that all point to something quite striking — not just how AI is evolving, but how it’s being built. Increasingly, the raw material behind AI isn’t just data scraped from the web. It’s us: our voices, our movements, our everyday lives, and increasingly, our identities. There’s a new layer of the gig economy emerging. We’ll explore this in just a minute.

    People are being paid, typically in small amounts, to record themselves walking down the street, having conversations, folding laundry, even just going about their day. That data is then used to train AI systems because those systems need examples of how people actually speak, move, and interact in the real world. In one case, delivery drivers in the US are being redirected to film tasks for robotics training. Platforms are turning existing gig workers like delivery drivers into distributed data collectors for AI. In another example, people are selling access to their phone conversations through apps that pay contributors to upload voice and text data. And in yet another, workers are strapping phones to their heads to record household chores so humanoid robots can learn how to move. The work is global, fragmented, and often invisible, with workers spanning Nigeria, India, South Africa, the US, and far beyond. Humans are no longer just users of AI — they are raw material suppliers. In China, there are even state-run centers where workers wear virtual reality headsets and exoskeletons to teach robots how to carry out everyday physical tasks. What we’re seeing is the rise of what you might call data labor, where identity itself becomes part of the work.

    There’s a clear driver behind it. AI companies are running out of high-quality training data. The open web isn’t enough anymore, and synthetic data has its limits. So the industry is turning to something else: real human lived experience. Because if you want a robot to understand how to load a dishwasher, navigate a room, or interact with objects, you need to see humans doing it at scale.

    But there’s an interesting contrast here. One of the stories highlights a 23-year-old in the US, a guy called Cale Mouser, who earns well into six figures repairing diesel engines. It’s something he’s developed great skill in doing. His work depends on judgment, experience, and problem solving in the real world — things that don’t easily translate into data. So while some people are being paid small amounts to generate data for AI systems, others like Cale Mouser are building highly valuable careers precisely because their skills can’t be reduced to it. And that contrast feels important.

    Because on one level, this new kind of work does create opportunity. For some people, especially in lower-income regions in the Global South, this is real income — paid in dollars, flexible and accessible. But there’s another side to it. Because what people are actually selling isn’t just time, it’s identity: their voice, their behavior, their presence in the world. And often once that data is handed over, it’s gone — permanently licensed, reused, repurposed, potentially in ways the individual never sees or understands.

    So you have this asymmetry: individuals earning small immediate payments while companies build long-term, highly valuable AI systems. Perhaps it’s a new version of the Mechanical Turk for the AI era. And that raises a deeper question. What does it mean when the inputs to AI are no longer abstract data, but pieces of human identity? When the training set is not just content, but behavior, voice, and presence? And when those pieces can be reused, replicated, and scaled, often without the individual’s ongoing knowledge or control? Many platforms grant royalty-free perpetual licenses, where workers get paid once and lose control forever. There’s potential for deepfakes, identity theft, and misuse without consent. And perhaps more uncomfortably, what does it mean when people are contributing to systems that could automate their future jobs?

    For communicators, this feels important because this isn’t just a technology story. It’s a story about trust, consent, transparency, and how organizations explain what they’re doing with AI. If AI ethics lives anywhere, it’s here — in how these systems are built and how that’s communicated. So the question to explore — one of the questions to explore, perhaps — is this one: Are we comfortable with an economy where identity itself is becoming labor? And if not, what responsibility do organizations and communicators have in shaping it?

    Shel Holtz
    It’s a big story with a lot to consider. On one level, it seems like the high-tech version of the sweatshops where high-end fashions were made — Nike shoes, for example — with people paying premium prices to get those products while the people making them are earning a pittance in factories with long hours and terrible working conditions. And then you add onto it the identity issue. So it’s something that I think — something at least I hope — we’re going to be talking about for a while.

    In terms of the AI element, what this suggests is that the gig economy didn’t go anywhere when AI came along; it just became the training ground for AI. And it’s interesting that the workers who are being squeezed out of knowledge jobs are selling their voices and their movements to build the systems that squeezed them out. Because where do a lot of these people who are being laid off because of AI go? Well, they go drive for Uber, they go drive for DoorDash. And you do that long enough and you get really accustomed to the idea that they send you a task, you go do that task, and you get paid for it. So if that task shifts from picking up a meal at a restaurant and delivering it to somebody’s house to going to your own house and washing your dishes because that’s what they want to capture on video — it’s the same thing. You’re getting a task on the app. You’re doing the task and you’re getting paid for it. So I think for a lot of people, this is going to be a fairly easy shift, and they’re not going to think a lot about what’s happening to the information and the content that’s being created with their movements and their voices, which is now being shared and used to make a lot of money for the people who are paying a pittance to these folks.

    So I see three issues here that connect directly to organizational communication. The first is consent and transparency — and I’m talking about inside organizations — because companies are already deploying AI tools trained on data that their own workers have supplied, and sometimes they’ve supplied this data unknowingly. The ethical and reputational questions that employees are going to ask are questions like: Was my voice used to train a bot that you activated in order to replace my friend who sat next to me and I had lunch with? And regulators are going to end up asking these questions too. So communicators really need to be out front with clear internal messaging about what data employees generate and how the company is using it. Let’s talk about that before I hit the other things that popped into my mind.

    Neville Hobson
    Yeah. I mean, the transparency element is key. That’s not new — that’s always been the case. But how organizations should communicate this may not be as simple as it might seem. I mean, the example you mentioned is an interesting one: a company uses data from its employees without them knowing. Well, let’s say — don’t do it like that. Don’t do that. You need to disclose if you’re doing this. Surely that is an ethical issue: if you don’t tell them and you go ahead and do that, that’s not what you should be doing. So there’s an easy one to address.

    The other element, which is also ethics-related, is: is this whole thing ethical if participation is driven by economic necessity? Whatever reason you might give — we need to get an edge on the competition, whatever — you’re still up against that element.

    That’s the big-picture ethics question. But common sense tells you how you should do this. Should individuals be compensated long-term for use of their data? On the one hand, you might say, fine, let’s tell everyone: your data may be used — your day-to-day interactions with colleagues, the recordings of your conversations on our internal Teams tool — that’s kept. So the employee might say, I’m okay with that, but I want to be compensated for it. And now there’s an interesting position.

    Shel Holtz
    You mean like as if they’re licensing it?

    Neville Hobson
    Exactly. And the organization might retort —

    Shel Holtz
    Well, the organization might retort: you are being paid for it. You’re being paid a salary. You come in here every day, you do your work. Read your employment agreement. I mean, this is kind of like — what was it? Velcro or Post-it notes? Maybe both — where the person who invented it never made a penny off the royalties because they were an employee of the company and the work product belonged to the company. I think organizations might be able to make the same argument here.

    Neville Hobson
    They could. But whether they should, just because they could, is another question — because the climate is very different today from those examples back in the 1960s. So you’ve got to think about things like: if we don’t do this right, are we going to get an exodus of employees who are going to go work for a company that treats them better in this same context?

    Shel Holtz
    Well, now you have the economic environment and the hiring situation where a lot of companies are trying to avoid hiring. They’re also trying to avoid layoffs, but they’re trying to avoid hiring. It’s pretty flat out there right now — it’s definitely a buyer’s market. So I don’t know that I would leave an organization because they’re using my data unless I already had another job lined up, because they’re hard to find right now.

    Neville Hobson
    I agree. It’s a slightly hypothetical scenario, but I think it is worth recognizing that it could well come to that. From the research on those articles — and some other things I saw — there’s already a strong imbalance of value and control between the individuals who provide the data and the companies who are getting that data and making economic use of it. AI companies rely on real-world human data because of data scarcity. So there’s a challenge on both sides of the argument: they need the data, but there’s probably a finite amount that employees can provide, so they have to look elsewhere too.

    And the thing is, a new economy is emerging where people monetize their identity and behavior voluntarily. Take the examples we heard about — the guy in Uganda filming himself walking down the street — and then the flip side of that, as I mentioned: the young people in America, covered in a really good Guardian analysis, who have skills that cannot easily be translated into something AI can do. The key element in that part of the discussion was the skill this young guy has — 23 years old. It’s not unique, but he’s got a skill that isn’t just “I know how to repair a diesel engine.” It’s that he can, at a glance, literally see what’s wrong and already formulate the six things he needs to do to fix it. And that is valuable. He’s earning $150,000 a year already in salary doing this, and he’s 23 years old.

    So there are other examples mentioned in that Guardian piece too that are interesting. On the one hand, you’ve got gig economy workers like DoorDash drivers doing what they’re doing. On the other hand, you’ve got people like this guy developing a career not related to AI at all — a skill that cannot easily be replicated by AI. So that’s part of the landscape. I’m not sure where all of that fits within this, Shel, to be honest, but it’s part of the picture.

    Shel Holtz
    Yeah. I think it was MIT that came out with a report not too long ago saying something like 93% of jobs are AI-safe — and there were a lot of people saying this really paints a different picture from what we’ve been anticipating. I don’t know how accurate it is. But in the meantime, there are AI companies working very hard to elevate these systems to the point where they can do some of the work that currently might be considered AI-safe. I think for many jobs, it’s probably just a temporary designation.

    I raised the issue of employees inside the organization. Those gig workers are another issue for organizational communicators, because these workers — the ones very accustomed to having the app tell them to do a task, doing the task, and getting paid for it — these folks aren’t covered by traditional internal communications. Organizations relying on gig workers and contracted labor — and increasingly, organizations whose AI tools were trained by them — have a stakeholder relationship they may not have a communication strategy for. I’d argue they don’t have a communication strategy for it.

    I’ve often made the distinction between internal communications and employee communications. Employees are the people who come in and get paid by you directly, whether salaried or hourly. But you have other internal stakeholders, and we develop strategies for them — the contractors embedded in our organization. I work in construction; we have subcontractors; there are ways the organization communicates with them. There are all kinds of internal stakeholders, and these gig and contract workers are now among them. We should figure out a way to communicate with them, talk about our ethical use of their data, and engage with them in ways that are meaningful, useful, and produce positive results.

    Neville Hobson
    Yeah, makes sense. You had a couple of other points you were going to mention. What’s the next one?

    Shel Holtz
    Just one other, actually, and that’s about keeping the human in the loop. A lot of companies, in order to feel good and look good as they move into the AI world, are positioning human oversight as really important. But what the stories we’ve been talking about reveal is that humans are raw material — physically, biometrically, behaviorally. Workers aged 22 to 25 in the most AI-exposed occupations — things like paralegal work, for example — have experienced a 13% decline in employment since 2022, which is the year OpenAI released ChatGPT to the public. On the other hand, employment for less exposed or more experienced workers — think about your 23-year-old diesel mechanic — has been steady or in some cases even increasing.

    So organizational communicators talking about AI as just augmenting human workers need to be careful, because I think increasingly we’re going to hear stories about how that isn’t actually true, particularly for this younger demographic. We have to be honest about that asymmetry. I mean, whose labor is augmenting whom?

    Neville Hobson
    Yeah, I get that. It does make sense. It’s an issue that embraces communications, ethics, and trust more than anything. But at the heart of it, there is the technology aspect. I’m thinking about other things that you and I have discussed in previous episodes that are kind of adjacent to this issue — where if you analyze what the real issues are, they tend to be a mixture of communication, ethics, and trust. So that’s a good starting point for communicators who might be wondering how the hell to address this: communication, ethics, and trust. Work out how you can develop the procedures that embrace and recognize the importance of those things and execute them inside the organization.

    I agree with the premise in all the articles we’ve linked in the show notes that a new data labor economy is emerging where people monetize their identity and behavior and, in the case of the Global South in particular, don’t think twice about it. Employers have a duty of care to recognize what they need to do to bring that group into their structure — one where communication, ethics, and trust play the bigger role.

    Shel Holtz
    Yeah, absolutely. And I think there are a number of places to look. You don’t want to be the next organization to have it disclosed that you have exploited labor producing the data that you need, because those scandals were pretty difficult for the fashion companies that went through them. Also, one of the things that generative AI models are really good for is scenario planning.

    Neville Hobson
    Ha!

    Shel Holtz
    And for your organization, in your industry, with your markets, it wouldn’t hurt to do some scenario planning about who the stakeholders are that you should be communicating with, and what the challenges are going to be both internally and externally, and start developing some communication strategies. And that’ll be a 30 for this episode of For Immediate Release.

    The post FIR #508: Inside AI’s Human Raw Material Supply Chain appeared first on FIR Podcast Network.

    8 April 2026, 8:16 pm
  • 25 minutes 37 seconds
    FIR #507: Should Nobody Really Ever Write with AI?

    Take a stroll through LinkedIn. You’ll find no shortage of posts stridently deriding the notion that anyone should ever use AI to write for them. While that case isn’t hard to make for professional writers, there are countless professionals in other fields who struggle with writing, were never trained as writers, yet now have to write everything from emails to reports as part of their jobs. Should they really sweat for hours over wording, time they could be devoting to their core areas of expertise, when AI can produce content that is cogent, clear, and direct? In this short mid-week episode, Neville and Shel look at the trends in using AI for writing, despite the plethora of opinions from the pundits.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, April 27.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript

    Neville: Hi everyone and welcome to For Immediate Release episode 507. I’m Neville Hobson.

    Shel: And I’m Shel Holtz. And if you spend any time at all on LinkedIn, you’ll see the degree to which anti-AI sentiment is ramping up. A lot of it’s aimed at using AI for writing and how absolutely wrong that is. Yet just last week, on the same day, Wired Magazine and The Wall Street Journal both published articles on reporters using AI to help write and edit their stories. So today, let’s talk about using AI to write.

    Specifically, is it okay for employees to use AI to help them write for work? And my answer is not only is it okay for many employees, it might be one of the most genuinely useful things AI can do. Here’s the framing I would push back on. When we talk about AI writing assistants, we tend to picture a journalist, a marketer, or a communications professional, someone whose craft is writing and who is paid for it, handing their keyboard over to a robot. And for those of us who are professional writers, that raises legitimate professional and ethical questions. But that’s not the population we’re talking about when we’re communicating AI adoption in most organizations. Think about who actually has to write at work. Engineers document processes. Product managers write status updates. Safety officers draft incident reports.

    Shel: Finance analysts compose budget justifications. Scientists write up findings for non-technical stakeholders. These are not people who chose their careers because they love writing. Writing is a tax they pay to do the work they actually care about. And many of them pay that tax really, really badly. The idea that a structural engineer should produce elegant prose unaided follows the same logic as saying a communications director should coordinate the concrete mix for a construction project. We don’t expect that. So why do we expect every knowledge worker to be a competent writer? Muck Rack’s 2026 State of Journalism report found that 82% of journalists, professional writers, people whose job this is, are now using at least one AI tool. That’s up from 77% the year before.

    If the people whose professional identity is tied to their writing are using AI tools, it shouldn’t surprise us that everyone else is too, or that they should. Now the research does tell us something important about how to use these tools. A University of Florida study of 1,100 professionals found that AI tools can make workplace writing more professional.

    But regular heavy use can undermine trust between managers and employees, particularly for relationship-oriented messages like praise, motivation, or personal feedback. The study found that employees are more skeptical when they perceive a supervisor is leaning heavily on AI for those kinds of communications. Now that’s a meaningful finding and it’s exactly the kind of nuance internal communicators need to help their organizations understand.

    It’s not an argument against AI writing assistance. It’s an argument for knowing when it’s appropriate. Purdue Business School Professor Casey Roberson, who literally wrote one of the first business writing textbooks to address AI, puts it this way: AI is a great tool for brainstorming when you’re stuck, for outlining and structuring documents, for revising drafts to improve clarity and tone, but it should not be used for confidential information, and using it to write first drafts can stifle creativity and critical thinking. The Wharton communication program makes a similar distinction. Their guidance frames AI tools as powerful and skilled hands for the right task, valuable for brainstorming, editing, improving conciseness, and anticipating challenging questions, but a liability when used as a substitute for your own thinking, your own knowledge of your audience, and your own credibility.

    So what’s the practical guidance for internal communicators trying to help their colleagues use AI responsibly in their writing? First, make the distinction between communication types explicit. Routine informational writing, such as process documentation, project updates, meeting recaps, and technical reports, is where AI assistance is most defensible and most valuable. That’s exactly where the trust risk is lowest and the productivity gain is highest. Conversely, messages that carry relationship weight, like a manager recognizing someone’s contribution or a leader addressing a team through a difficult moment, deserve a human voice. Help your employees understand that difference.

    Second, reframe the conversation around who’s actually writing. A systematic review published in the International Journal of Business Communication found that AI can significantly help with idea generation, structure, literature synthesis, editing, and refinement, essentially all the phases of writing that non-writers find most daunting. AI isn’t replacing a writer’s voice. In many cases, it’s giving non-writers a voice they otherwise wouldn’t have.

    Third, be honest about the nuance inside the journalism conversation. The Columbia Journalism Review published a fascinating piece where journalists across major newsrooms shared their practices. Nicholas Thompson, the CEO of The Atlantic, described using AI the way he’d use a fast, well-read research assistant who’s also a terrible writer — helpful for checking consistency, flagging chronological issues, examining logical claims, but not for the writing itself. Amelia Daly, a senior reporter at VentureBeat, put it this way: AI helps her productivity, but she refuses to use it to write because writing is how she maintains trust with her readers. That distinction — AI as research and process support versus AI as voice — maps directly to the guidance you should be giving your colleagues.

    One other reporter quoted in these articles said he actually does use it to write, because he didn’t become a journalist in order to write. He didn’t like writing; he liked reporting. So he does all the other work and then lets the AI produce the writing.

    And here’s the thing I’d leave your employees with because I think it gets lost in this debate. Wharton’s communication faculty make the argument that writing is thinking, that when you rely on AI for drafting, you don’t know your content as deeply as you should, and you lose the nimbleness to adapt when the moment requires it. And that’s true. But for an engineer who agonizes over every sentence of a procedure document, who spends four times as long on the writing as on the analysis,

    Shel: AI doesn’t replace their thinking. It clears away the friction so their thinking can actually reach the page. For internal communicators, this is a genuinely useful message to take to your AI adoption rollouts. AI writing assistance isn’t about cutting corners. It’s about removing a barrier that prevents good ideas from being communicated clearly while still insisting on the judgment, authenticity, and relational awareness that only human beings can bring.

    Neville: Yeah, it’s a big topic, I have to admit. And I think of it not so much from the employee communication point of view, though that’s a major part of it, a major usage. What’s mostly in my mind is anyone who writes: whether you’re in public relations, whether you’re a journalist, et cetera, people who need to write as part of their roles.

    I’m also drawn to a very good analysis by Josh Bernoff. You and I interviewed Josh, what, two, three months ago. He wrote an assessment of Charlene Li’s new book, Winning with AI, in whose creation she used AI extensively. Worth pointing out, though: the AI didn’t write any of the content.

    She and her co-author, Katia Walsh, talked about the way they divvied up the work. And the AIs, plural, did research among other tasks, too. Josh wrote a lengthy post setting out all the areas where they found AI useful and not so useful. It struck me, reading Josh’s post and then Charlene’s postscripts in the book itself, which I am reading, by the way, that this would apply to anyone writing, not just would-be book authors. Whether you’re writing fiction or nonfiction makes no difference. Whether you’re writing a report, an article, a blog post, or a newspaper piece doesn’t matter. These principles, I think, apply across the board. And it’s not so much about cases where writing isn’t really your role and you’re not very good at it; it’s more focused on those whose job is writing, or for whom writing is part of the job in some form.

    So there are a number of things that I took from it. But to go to the main point about Charlene’s book Winning with AI, AI wasn’t doing the writing, as I mentioned. It was supporting the thinking. It handled things like the research, summaries, the structure, which speeds everything up. But the ideas, the voice, and the judgment — that all stayed firmly human. And to quote from Josh’s post, he says that the two authors describe how they used Claude to structure the content, ChatGPT to create a custom GPT with four years of their work, which it used in a sense as a training aid, Perplexity to do the research, and Gemini to search a vast collection of interview transcripts. It’s much more detailed than that. It’s well set out in the book. And I thought, that’s interesting. That’s a very intelligent way to go about using different AI chatbots for different purposes on your projects.

    So, three things I took from this. They apply to all the points you made, Shel, and will repeat some of them, but they show how you need to think about this. First, AI works best as a thinking partner, not a writer. Like I said, the two authors used AI as a note taker, researcher, and brainstorming partner, essentially a third collaborator. It helped them structure the ideas, surface insights, and challenge assumptions, and they did not rely on it to produce the final prose.

    The second point: it saved time on the drudge work, as Josh called it, but it required human judgment. It was highly effective for research and summarization, structuring outlines, and surfacing missed ideas from earlier drafts. That resonated with me. In my own experience, when I’m researching blog posts, articles, reports, or just something I’m interested in, AI usually surfaces something I wouldn’t have thought of, or something I might only have found later, after I’d written the piece, forcing a rewrite. Structuring the outlines is another one. And this is definitely worth noting, as we’ve discussed before: everything still required the humans to fact-check and validate everything the AI produced, because in Charlene’s words, AI has no built-in truth function. I think that’s a worthwhile way of looking at it.

    And the final point that I took from this: you can’t outsource originality, voice, or quality, i.e., the writing. They tried it. AI failed at three core creative tasks, which Josh points out in his article. Generating genuinely new ideas: it’s not very good at this, because it’s trained on existing writing that humans have produced over the years, the centuries even, and it can’t create something new from that other than guesswork. It’s about the same as what we do, I think, except we’re likely to take the more informed approach. Writing in a compelling human voice: it can’t. And editing to a high standard: it cannot do that either. Charlene, Katia, and Josh, for that matter, all described AI writing as bland, repetitive, and jargon-heavy. In fact, Charlene talks about how they could not stop jargon creep in anything the AI produced. She had this big thing about one draft where they used AI to review it, and it changed every use of the word “use” to “utilize,” filling the draft with that kind of jargon.

    Shel: One of my biggest pet peeves, by the way, is “utilize.”

    Neville: Right, totally. And the final point: quality, nuance, personality, and insight remained entirely human, because the humans wrote it. So I take all of that, add it to what you’ve been talking about, and I guess I’d conclude: it doesn’t matter what your role is. These are the principles you need to pay attention to as you approach your use of AI as an aid. And we’re not suddenly coming out with a revelation here; I see people saying this all over the place. AI is an aid that helps you create extremely good content, whether you’re a writer or doing something else where writing contributes to the end result. And it doesn’t matter whether you’re good at it or not. That reporter you talked about likes to report but not to write. I’m wondering how the hell he gets away with doing that. Reporters have to write, don’t they?

    Shel: Well, I’m sure he just poured a lot of effort and energy into it when he would have rather been out in the field gathering information.

    Neville: Got it, got it. So yeah, this is not too difficult a thing to grasp, in my view, yet I’m constantly bemused by what I see, and maybe LinkedIn’s not the best place to look for this stuff, but I see it all the time. You and I were talking before we started recording about people posting there saying, you know, you should never use AI; here’s a list of words, and if I see them in LinkedIn posts, I’m going to unfollow that person and call them out. I see this all the time. And there’s the example you mentioned to me, the person who wrote a LinkedIn post saying you should never, ever use AI for a whole list of things. That’s insane.

    Shel: Yeah, she said nobody wants to read emails written by AI. Nobody wants to read reports written by AI. And she went down every form of writing you can think of. And I was thinking, really? Nobody? Nobody wants to read this? I’ve got data that says people prefer the AI-assisted emails of people who are terrible writers and have a hard time expressing the main point they’re trying to make. The AI has actually made these people’s emails better than their own writing, and people would rather read those.

    Neville: So did you use AI to research this?

    Shel: To research, to find that data? Yeah, of course I did. It’s easier than using Google, but I also verified the source of that research.

    Neville: Right, okay. No, no, hang on a second. The point, though, is that it’s illustrative of something. I’m astonished when I hear from people who haven’t thought of doing this before and respond, “that’s a good idea.” It’s this: for anything you’re working on, literally anything, whether it’s on your list of things to research or something that occurs to you during your work (I wonder who said X, or I wonder how you do this), ask your AI to go research it. It then becomes a natural part of your workflow. And that’s one of the things it’s very good at.

    But we’ve got the example we talked about last October with Deloitte in Australia and Canada. You’ve got to check everything it creates, particularly if it’s a topic you really don’t know about yet. Even if you do know, you’ve still got to check it. That means when you tell it to go out and look for material, having already given it your preferences (for instance, that anything it finds comes back with a link to the source), you’ve then got to go and check all those sources too. So there are no easy shortcuts here. But it still saves you a huge amount of time, because the time you do spend goes into understanding the output you’ll use to create your final version.

    I often see people criticizing this: if you use AI, your brain gets kind of frozen and doesn’t learn anything. In my experience, that’s not the case, because you’re working differently. You’re asking your assistant to go and find this, this, and this; it comes back with this, this, and this; and you then go and research it yourself to check that it really is this, this, and this, and not that.

    So I think it’s an interesting aspect of the broader debate between those who are anti and those who aren’t, where most of us sit somewhere in the middle. But you need to fully understand the pros and cons of this, and indeed the limitations of AI as well as the human limitations, and work out what works best for you.

    The reality, though, the bottom line in terms of how I see this, is that you cannot take the human being out of the picture. This tool is purely that: something to assist you, something that gives you what you need to create the final product, if you like. And that holds whatever your job role. That’s what it’s about.

    Shel: Well, I would argue that if you are in a job where writing was not taught in school beyond what you learned in your basic English class, or whatever language you were raised with, and you need to produce writing, this tool is now there to help you do that. If you’re an engineer, for example: engineers are brilliant. Many of them are

    Neville: Not good writers.

    Shel: Terrible writers. And they have to produce something that’s going to be useful to the people they’re distributing it to. And if AI is going to write a better draft than they could on their own, and produce better output that people can make better use of, then they should let AI write that stuff. In an engineer’s report, there is no need for the lived human experience we keep hearing about. Empathy does not have to come into these reports. They’re technical in nature. Let the AI write it for them. Absolutely edit it, and review all the facts to make sure it’s right. Presumably it’s writing based on the information you gave it, the material you’ve learned you need to present in this report. So there’s less opportunity for hallucination when you’re telling it: only use this data that I have put into this ChatGPT project for the output. But you still have to review it very, very carefully. That’ll still save you time and grief if you’re not a writer and you need to produce this stuff. I feel really strongly: we have this great tool here that’s going to make the outputs better and make business better.

    Neville: Yeah, I don’t disagree with you at all, but I’m not as optimistic as you are that this is going to work seamlessly if people do all the things you just said, because typically they’re not going to do that. I can see scenarios exactly as you’ve outlined: someone in a valuable job who does it well but lacks the skills to write. Then I would say that’s fine, get the AI to write. You need to be educated, then, in how to get the AI to do what you want. You then need to, without fail, verify and check every single thing the AI has created. And I’m not sure that many of the folks you might think of are truly geared up to do that kind of thing, so you might need colleagues to assist you. I mean, I guess the point is that…

    Shel: Well, it’s…

    Neville: This is going to be a debating point forever, I would imagine, until people stop talking about it. But you’re going to encounter, I can see it now: “But you’ve got to disclose the fact that you used AI.” No, you don’t. You get into that rabbit-hole argument: do you disclose when you use Grammarly? Do you disclose your spell checker? No, you don’t. So why would you have to do this? It’s such an emotive topic that logic is missing from many of the arguments. It’s all emotional.

    That’s the minefield you have to walk. For much of the work that many people do, they won’t use the AI to write it. They’ll use AI to assist them in creating it. That could mean they do an outline, or it suggests the structure of a draft, or they write the draft and it reviews it and makes suggestions on how to improve it.

    I do that quite a bit with my AI assistants. And I don’t have a rigid format; much depends on the topic and how I feel about it, basically. Often I’ll raise a topic I’ve been thinking about and ask, is this worth writing about? If so, give me some suggestions on the angle I should approach it from. That always sparks much more discussion and thought on what the content might be, including, sometimes, the conclusion that this is not worth writing about for me.

    So it’s a big topic. You had loads of links in your prep for this, to articles all over the place. And I think it’s good to do that. But this is emotive, and avoiding criticism is not going to be simple.

    Shel: Yeah, and I think it’s a governance issue inside organizations. I hear about the lack of AI training going on in many organizations, or how superficial it is. For those people who have to write in their jobs, you want to do targeted training on how to use this to write: from idea generation to brainstorming to the back-and-forth discussions you might have about approaches to take, or

    Shel: using it to structure the document, right down to writing that first draft, if it simply does better than you could on your own and you’re not a professional writer. All of that needs to be trained, it needs to be articulated in the organization’s AI governance policies, and there need to be resources. And yes, we need subject matter experts that people can call. This is on us right now, as internal communicators who deal with writing in general, to lead this conversation in the organization and make sure these kinds of governance activities are implemented.

    Neville: Work to do.

    Shel: And that’ll be a 30 for this episode of For Immediate Release.

    The post FIR #507: Should Nobody Really Ever Write with AI? appeared first on FIR Podcast Network.

    30 March 2026, 7:01 am
  • 1 hour 42 minutes
    FIR #506: Battle of the Bots!

    In this monthly long-form episode for March, Neville and Shel tackle a trio of interconnected themes reshaping the communications profession in the age of AI. The conversation opens with Anthropic’s top lawyer declaring that AI will destroy the billable hour. That thread leads naturally into JP Morgan’s controversial use of digital monitoring to verify junior bankers’ working hours, where Shel and Neville question whether surveillance technology can substitute for genuine managerial trust and engagement.

    The episode also examines Gartner’s widely circulated prediction that PR budgets will double by 2027 as AI search engines favor earned media. Shel delivers a detailed report on the escalating misinformation crisis, citing a 900% surge in global deepfake incidents and new research from the C2PA on content provenance standards. The episode closes with a discussion of Cloudflare CEO Matthew Prince’s prediction that bot traffic will exceed human traffic by 2027, and a sobering peer-reviewed study on how social bots hijack organizational messaging — research reported by Bob Pickard, who has experienced bot-driven attacks firsthand.

    Dan York also contributes a tech report on the state of the Fediverse and Mastodon, as well as on AI developments for WordPress.

    Links from this episode:

    Links from Dan York’s Tech Report:

    The next monthly, long-form episode of FIR will drop on Monday, April 27.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript

    Neville: Hi everyone, and welcome to the For Immediate Release podcast, long-form episode for March 2026. I’m Neville Hobson.

    Shel: And I’m Shel Holtz.

    Neville: As ever, we have six great stories to discuss and share with you, and we hope you’ll gain insight and enjoyment from our discussion. Perhaps you’ll want to share a comment with us once you’ve had a listen. We’d like that.

    Our topics this month range from AI and the end of the billable hour, to Gartner’s predictions about PR budgets, to monitoring work in the age of AI, to newsrooms battling AI-generated misinformation, and more, including Dan York’s tech reports. Before we get into our discussion, let’s begin with a recap of the episodes we’ve published over the past month and some of the listener comments that came in.

    In episode 502 for February, published on the 23rd of that month, we explored how rapidly accelerating technology is reshaping the communication profession from autonomous agents with attitudes to the evolving ROI of podcasting. We led with a chilling milestone moment, an autonomous AI coding agent that publicly shamed a human developer after he rejected its code contribution.

    A leader can build goodwill for days and lose it in seconds: in FIR 503 on the 2nd of March, we reported on the president of the IOC, that’s the International Olympic Committee, who had no answers to reporters’ questions and suggested on camera that someone on her communications team should be fired. We got comments on this, didn’t we, Shel?

    Shel: Boy, do we have comments on this one. This attracted a good number of them, starting with Kevin Anselmo, who used to have a podcast on the FIR Podcast Network. It was on higher education communication. He says, having previously worked in communications for two different international sport federations, I found this story quite amusing. One of my first PR roles was working at the 2000 Sydney Olympic Games. I was working on the sport federation side, not the IOC.

    Neville: Yep, you did.

    Shel: But I know that working at such events is exhilarating and exhausting as you have to deal with a myriad of different issues. I can imagine that toward the end of the Olympics, the PR team fell short of delivering a robust brief. But nevertheless, in answer to your question, even if the PR people were abysmal, the fault is on Coventry for the way she handled the situation. A simple, we will have to look into this and get back to you response would have worked.

    Instead, by handling it the way she did, she drew unnecessary attention to the questions she and the team weren’t prepared to answer, as you and Neville shared. I guess in the process of this mishap, I learned that Germany was in the running for the 2036 Olympics, which I wasn’t aware of. We also heard from Monique Zitnick, who said, really enjoyed your discussion on this. Certainly a puzzling situation that has surely ended in broken trust on both sides.

    Shel: Mike Klein said, another ignominious IOC leader in the mold of Brundage and Samaranch. Neville, you replied. You said that’s an interesting comparison. Mike, Avery Brundage and Juan Antonio Samaranch both left very complicated legacies, particularly around politics and governance in the Olympic movement. What struck me about this episode wasn’t so much ideology or policy. It was leadership under pressure.

    Coventry had actually received a fair amount of praise for how she handled some difficult moments during the games, which makes the press conference moment even more interesting from a communication perspective. It’s a reminder that reputation capital can be fragile. A single public moment can reshape the narrative very quickly. Mike replied, yes, leadership under pressure, but also the kind of people the IOC has chosen for leadership over the years.

    Coventry has a complicated history over her involvement with her native Zimbabwe’s recent regimes as well. Sylvia Camby said, Neville, watching Coventry’s press conference took me back to the time I spent doing comms for an international association. It reminded me of how inward-looking organizations like the IOC can be. So totally focused on their internal member politics with leaders too lazy or too overconfident to bother to educate themselves about current affairs.

    Also, they often have a distorted idea of what the press is interested in. They often think they can dictate their agenda. As you and Shel mentioned on the podcast, the questions were entirely predictable. You replied, Neville, that’s a really insightful observation, Sylvia. Organizations like the IOC can become quite inward facing, particularly when so much of their energy is spent navigating internal governance and member politics. That can create a kind of blind spot about how issues look from the outside.

    Sylvia said, and I was thinking, I’m proud of Germany for being so sensitive about the significance of that date and for opposing the 2036 bid. They are much better at reading the spirit of the time than Coventry. As an aside, my father’s cousin competed in the 1936 Olympics in Berlin as a gymnast. She passed away last year at the age of 104.

    She often spoke to me of the atmosphere surrounding the Olympics at the time, a heaviness and a sense of unspeakable doom. So yes, 2036 is a date that Berlin should definitely avoid. And you replied to that, Neville. People can go find that one in the comments.

    Neville: That’s a good one. There are some great points of view, perspectives there. So thanks to everyone who commented. Are companies using AI as a convenient explanation for layoffs? That was a question we asked in FIR 504 on the 10th of March when we discussed AI washing, when organizations blame workforce cuts on AI, even when the reality is more complicated. It’s a difficult ethical space for communicators. And we have comment on this too, don’t we?

    Shel: Three short ones. First from Monique, who commented that she was looking forward to listening to the episode because she’s been having a lot of conversations on this over the last month. Jacqueline Trzezinski said, I’m glad you’re delving into this. The same thought came to my mind when I saw the Block layoff announcement, especially as it was held up by some on LinkedIn as an example of how valuable transparency is during layoffs.

    And Jesper Anderson said, I find it fascinating how quickly the world turns upside down. 18 to 24 months ago, companies were accused of letting people go because of AI and not admitting that this was the true reason.

    Neville: Good perspective, Jesper, that one. Is social media still social? In FIR 505 on the 17th of March, we explored Hootsuite’s 2026 Social Media Trends Report, addressing social search, AI versus authenticity and more. Plus a darker question: what if AI starts to dominate the conversation? And we have comment, don’t we?

    Shel: Yes, from Zara Ramoutoho Akbar, and I sure hope I pronounced that right, apologies if I didn’t. She said, yes, it feels like socials are shifting from a channel to a trust system. And in that world, I would say that the employee and peer voices matter more than brand output. Are you seeing organizations lean into that yet or still treating social as a broadcast channel? And since Zara asked the question, Neville, what do you think? Are you seeing this change?

    Neville: No, I’m not, to be honest, but maybe it’s taking its time. There is something afoot without any doubt. And I think it’s something that we should expect. And that darker question is a valid one to put forward, let’s say. And we’ll keep our eyes and ears open, I think.

    Shel: Yeah, I haven’t seen it much either, but I do think that there are organizations that are talking about it. So as you say, we may see this start to change in the months ahead. We have one more comment from Dolores Holtz. No relation. I for one certainly rely on people whom I trust more than any name or brand.

    Neville: Yeah, I agree. Fair enough.

    Shel: I think that covers our previous episodes up to this one.

    Neville: Yeah, good, good comments all over from all those episodes. And thanks everyone for listening and adding your comments to that conversation. It’s really terrific.

    Shel: Yeah, keep those coming, and ask us questions, because that question from Zara was great. Also up on the FIR Podcast Network right now is the latest Circle of Fellows. It was a good conversation on the communication issues and challenges in this age of grievance, isolation, and retreat into tribes.

    Shel: Priya Bates, Alice Brink, Jane Mitchell, and Jennifer Wah were the panelists on this Circle of Fellows. As I say, it was really a terrific conversation. The next one is coming up on Thursday, March 26th, at noon Eastern time. It’s on crisis communications, and especially this idea of the polycrisis, which we heard about from our friend Philippe Borremans.

    The panelists for that Circle of Fellows will be Ned Lundquist, Robin McCaslin, George McGrath, and Carolyn Sapriel. Should be a good crisis-focused conversation. And of course, if you can’t make it at noon on Thursday, it will be available as a Circle of Fellows podcast and the video will be up on the FIR Podcast Network.

    Neville: While we’re talking about IABC, let me briefly mention that Sylvia Camby and I hosted a webinar for IABC as part of IABC Ethics Month in February about ethics and AI. We’re actually going to…

    Shel: I attended and it was terrific. I was there. It was a great webinar.

    Neville: Well, thanks, Shel. That’s great. And we’ve actually had a nice review from someone, which was very pleasing. We’re going to repeat this, specifically for IABC members in the Asia-Pacific region. So if you’re in Australia, India, China, Japan, and maybe right out into the Pacific area, this one’s for you. It’s members only.

    The event is AI Ethics and the Responsibility of Communicators. It explores the challenges and responsibilities communicators face when introducing AI, including transparency and trust, stakeholder accountability, and human oversight. It’s on Wednesday, the 15th of April at 6 PM Sydney time. That’s AEST, as I discovered, Australian Eastern Standard Time. You’re no longer on daylight savings in Australia, whereas we are by the time we do this. So 6 PM in Sydney, or 8 AM UTC. That’s Coordinated Universal Time or GMT if you’re used to that one. For me, I’m in the UK, so it translates into 9 AM UK time. But 6 PM in Sydney and that sort of time zone area is the important bit. So we look forward to seeing you there.

    Shel: 1 AM Pacific time, so I won’t be participating in this one.

    Neville: If you’re up, you could join. OK. So IABC will be letting members know about where to go and register, et cetera, I’m sure in the coming days. So just mark your diary in the meantime. Wednesday, 15th of April, 6 PM Sydney time. And let’s get on with things. But first, there’s this.

    Shel: I won’t be.

    Neville: Right, let’s start with a statement that will make a lot of people in professional services sit up a bit. Anthropic’s top lawyer Jeff Blick says AI is going to destroy the billable hour. That’s of interest to you if you’re a consultant in particular. Blick argues that AI is removing the need for what he calls tedious but lucrative work, the kind of work that firms have historically billed by the hour. And that matters because the billable hour isn’t just a pricing model.

    It’s the foundation of how entire professions have operated for decades. But here’s the tension he highlights. Clients want problems solved quickly and efficiently, while the billable hour rewards the opposite: more time, more revenue. AI sharpens that contradiction because now tasks that once took days or weeks can be done in minutes. And that raises a very simple, very uncomfortable question for clients: if the work takes less time, why am I still paying for all those hours?

    It’s something I’ve been thinking about quite a lot myself recently. I wrote about this in Strategic Magazine a few months ago, where I argued that AI isn’t killing consultants, but it’s killing the logic of the billable hour. Because the model has always had flaws: it rewards activity over impact. It prices effort rather than outcomes. And as soon as technology compresses effort, the model starts to look outdated. What’s changing now is not just efficiency, it’s expectations.

    Clients aren’t necessarily looking to pay less. They’re looking for clarity, predictability, and above all, value that reflects results, not time spent. So we’re starting to see a shift from billing hours to pricing outcomes, from selling labor to selling judgment. And that sounds straightforward, but it opens up some deeper questions. If AI removes the entry-level repetitive work, how do people develop the judgment that clients are now paying for?

    If you move away from time-based billing, how do you actually define and defend value? And perhaps most importantly, are firms really ready to let go of a model that has defined their economics for generations? I think what this really points to is a shift in what clients are buying: not time, but judgment; not effort, but outcomes. And the firms that recognize that early will have a very different advantage from those that don’t.

    Shel: Well, if AI drives the end of the billable hour, all I will be able to say is it’s about time, and thank God something did it. I have never been a fan of billable hours in communication consulting. I can see it in other lines of work: plumbers bill by the hour, electricians too; people who work with their hands tend to bill by the hour, although interestingly, auto mechanics often do not. For them, the labor required to do a particular job is worth a set amount of money, and then there are the parts you have to pay for.

    But the question is, if the model of billable hours goes away in the public relations and communication industry, what do we replace it with? And I know we have talked about this in the past, but it has been a while.

    But I remember, and we have both operated in the billable-hour environment, that when I was at Mercer, Mark Schuman was also at Mercer. I think he was in their Houston office, and he came up and met with the comms consultants in Los Angeles. He was talking about the value add, and I objected to this. I said, I have a billable hour based on my value and what it takes to cover overhead and make a profit. I think my billable hour when I left Alexander and Alexander was something like $385 an hour, and that should cover everything. Why are we adding something and just calling it value add?

    And what Mark said was: if I have an idea in the shower, and it took 30 seconds for that idea to spark, yet it informs the entire engagement with the client, solves a problem, and is based on my decades of experience and everything I have learned, is that really worth only the pocket change that 30 seconds would be valued at under the billable rate? That’s ridiculous. The more I thought about it, the more I thought he’s right. That is ridiculous. So why aren’t we billing based on the value of the project?

    Now, you can say here’s how many hours it’s going to take to complete that project and use that as a basis to come up with a price to give a client. Or you can look at other things. I think I mentioned on a show several years ago that Craig Jolly and I proposed a communications program for a department at Coca-Cola; they had actually agreed to it, but the department was eliminated before we could come to a final agreement.

    And what we were going to be paid for our effort was absolutely nothing. We were not going to bill them for hours. We were not going to bill them for the value of the project, but they were going to track the outcomes of the work that we did. And they were going to pay us 5% of the savings that accrued as a result of what we did and 5% of the profits that accrued based on what we did. And we had a formula for that. We would have made a fortune over, I think, the three years that we were going to get compensated after this project was complete.

    There are other models out there that people can consider, but you’re right. I’m wondering when the clients are going to start saying, this is what I paid last time. Haven’t you started using AI? Why isn’t the drudge work that is part of this project taking less time and costing me less? I think we’re going to hear that from clients. So you better start thinking about the new models.

    Neville: Yeah, it’s a sea change, quite a significant structural shift, to move away from the billable hour. And one reason I believe nothing’s happened is that there is definitely no groundswell of desire for change from the people in organizations who would likely suffer most if it did change, or even from those who wouldn’t.

    And there are lots. I’m not picking out anyone in particular, but there are lots of people who just don’t like change: we’ve been doing this for years, it works, our whole business is based on it. It’s probably going to take a major client of a major consulting firm to say, hang on a second, we have a question for you about how you’re charging us. I’ve seen lots of chatter about this, Shel, and I’m sure you have. And yet nothing’s happened.

    I wrote a lengthy analysis on my own blog not long ago, and that hardly got any attention at all. The story I wrote in Strategic was quite heavily researched, but I’ve not seen any real traction on it other than some folks saying, hey, nice article you wrote in Strategic. I’d rather hear them say, I didn’t like it, here’s why, or I’ve got a better idea, or whatever. Get a conversation going about it.

    One thing I think should stimulate a discussion, and this could be something we’ve got to force on people, is to look at it from the point of view of the client, not the consultant. And by the way, all those other examples you gave, like plumbers, are absolutely right. So this discussion is specifically about professional services and consulting, not auto mechanics and plumbers and the like.

    So think about this: clients aren’t buying less, they’re buying differently. That’s the thing. I’ve had conversations with people where, I have to admit, I truly, seriously struggled to keep the discussion going with any energy on why we should make this kind of change. One thing I wrote about in the Strategic piece was what clients expect from the people who advise them, the consultants they work with. Today, they expect advisors who, one, use AI to scan signals and surface insights; two, bring sharper, data-informed recommendations; and three, help avoid ethical, legal, and reputational missteps. Three major things they expect, and AI has a role in all of them.

    I think we need to move away from that, and we can take the initiative to change the conversation with clients toward this, as opposed to, well, draft that report for the client, and AI can do all the research, and so forth. When clients ask, why am I paying for all this time, you could pitch it as the value of the briefing we give to the AI, but I think that argument will be demolished over time. Clients are like you and me: they’re people, they’re not stupid. Many of them are looking at this themselves.

    That said, there are many clients, particularly as you get to the enterprise level and the consulting firms that operate there, who really don’t have much desire to rock the boat with any of this. It’s very entrenched, it’s ingrained. Everyone’s making money, it’s all wonderful, and business gets done. It’s going to need something to force a major shift here.

    So I think we should take the initiative as communicators to do this. And it could be someone in a consulting firm. Like you, I worked for Mercer, and I remember back in the early ’90s discussions not about changing the business model but about the value add, so maybe that was a Mercer thing at the time. We need to have that conversation now. And we need someone at a senior level with an influential voice to raise this internally in their organization and run some internal webinars or seminars or get-togethers about why we need to change the business model and why the billable hour has to end as the basis for business. But it’s a big task, I would say.

    Shel: One of the truths about the public relations industry is that it takes pain for the industry to change. I mean, we’ve seen this. We’ve been doing this show for 21 years and we’ve seen it with a number of major technologies that have come along that the PR industry has been very, very, very slow to adopt. And what ultimately got them to adopt the web and social media was seeing work taken away from them by boutiques who were offering those services. And as soon as they saw money left on the table, they said, we’d better figure this out because this is something that we should be doing. They figured it out and now they’re using it regularly.

    You’re absolutely right that we in the industry have experience and insights that allow us to do things like create the appropriate prompt to get the right result for a public relations issue or campaign or what have you. And it goes far beyond the prompt. It goes into creating documents that become foundational to a project within one of the LLMs. It even gets into agents now. What if we set up an agent on behalf of a client that is out there looking for competitive information on a regular basis? Say it took 15 hours to create this agent so that it produces the kind of daily or hourly reports we’re looking for, and those reports become a big part of the project. It’s operating while we sleep. How do we charge for that? Certainly not on an hourly basis.

    So a formula has to emerge for these types of things, one that allows agencies to be compensated in a way that keeps the lights on, pays the salaries of the consultants who work there, and earns a reasonable profit without having to bill hours, because billing hours just makes less and less sense. And as I say, I didn’t think it made sense back in the ’80s when I was working for Mercer, my first consulting gig.

    You remember maintaining your time sheet in 10-minute increments? Oh my God. Who’s going to pay me for that? Who do I bill for the time that I spend maintaining a time sheet in 10-minute increments? I mean, come on.

    Neville: Don’t remind me, please. I tried to get away with entering time in the timesheet for the time I had to spend on doing the timesheet. They didn’t let me get away with that. No.

    Shel: They didn’t buy that. My brother’s an attorney, and when he was working for a law firm — he’s corporate side now — but he remembered if he took a pencil out of the supply cabinet, he had to bill that to a client. So I mean, the time that he was spending billing things to clients was time that he wasn’t spending on client work. There are countless reasons why the billable hour needs to die. I don’t mind the consultant having a billable hour rate as a base for calculating something, but it shouldn’t be the be-all and end-all of what the client is billed. There needs to be a formula where you say this is what the project is going to cost. And if the project moves out of the scope that you agreed to, then you go back to the client and say, we’re outside the scope. We’re going to have to charge more for that. Here’s what we’re going to charge. You okay with that before we start moving on this stuff that you’ve requested that is out of scope?

    Neville: Yeah, we need to get some movement going on this topic, I think. And maybe, thinking about IABC, that’s where some kind of talk on this topic needs to happen.

    Shel: Yeah. Or, you know how Ann Handley sold the T-shirt that said Justice for the Em Dash? I bought one. We need T-shirts that say Kill the Billable Hour with the FIR logo on it. Would anybody buy that? Let us know. We’ll pursue it. I’ll find out where Ann had her shirts made.

    Neville: Yeah, I like that idea. I like it. Excellent.

    Shel: If you work in public relations, you’ve probably seen the prediction that’s making the rounds right now. It sounds too good to be true. Gartner, the analyst firm whose pronouncements tend to get circulated in agency pitch decks for years, has declared that by next year, 2027, the mass adoption of artificial intelligence and large language models as a replacement for traditional search will drive a doubling of PR and earned media budgets.

    Now, what would drive this surge in PR spending, you ask? Well, AI answer engines overwhelmingly favor non-paid sources. More than 95% of links referenced in AI-generated answers come from earned, shared, and organic owned content, with 27% originating directly from earned media. So if AI is where people increasingly go for information — and by the way, the data on that is striking; ChatGPT saw traffic surge 608% year over year between the first half of 2024 and the first half of 2025, while traditional search giants Google and Bing both slipped — well, then earned media becomes the engine of discoverability. And that, the argument goes, means organizations will pour money into PR to stay visible.

    Now, I want to be honest about the source here, because Stuart Bruce, someone whose thinking you and I have always admired and respected, Neville — Stuart has pointed out that this prediction originated in a blog post published by Gartner as part of a lead generation campaign promoting a webinar for chief communication officers, and that while it carries the authority of the Gartner brand, it lacks the evidence normally associated with their research publications.

    Frank Strong over at the Sword and the Script notes similarly that the prediction feels rushed. 2027 is barely more than eight months away and the path from “AI favors earned media” to “budgets actually double” is pretty far from certain. But I’m cautiously optimistic because the underlying logic is sound.

    If AI systems favor credible third-party sources and PR is the function best equipped to generate that kind of coverage, well then yeah, our work becomes more strategically important. But a Gartner webinar promo is not a Gartner research report, and we should resist the temptation to tout this prediction as if it were settled fact.

    Here’s what I actually want to talk about though. Let’s say the prediction is right. Let’s say the prediction is half right. Let’s just say budgets grow substantially. What happens to that money? Because there’s a pattern in this industry that I think we need to name directly. When good fortune arrives — a new platform, a new capability, a shift in the media landscape — agencies have historically been better at capturing the upside than at reinvesting in the profession. More revenue has meant more of the same: more accounts, more billable hours, more senior hires, not more rethinking.

    And right now, in the age of AI, there are two investments that I think agencies have an obligation to make if this windfall arrives. The first is genuinely rethinking the agency model in light of AI — not just adding a chatbot to the workflow, but asking the hard questions about what services still require human judgment, where AI can amplify capacity, and how to build new offerings around answer engine optimization. And by the way, a new billing model.

    Stuart Bruce notes that Gartner explicitly rejects the efforts of SEO and marketing companies to pivot into this space, recognizing that answer engine optimization requires communication-specific skills to balance stakeholder trust and platform requirements. That’s an opening for PR, but only if agencies actually build those capabilities rather than outsourcing them to MarTech vendors.

    The second investment, and this one matters a lot to me, is in rebuilding entry-level pathways into the profession. AI has already been eroding the grunt work that used to serve as the training ground for new communicators. As one analysis put it, the traditional deal of entry-level work — trading rote labor for mentorship — that’s dying. The learning curve is being automated, leaving early-career professionals stranded between AI agents and senior incumbents.

    If PR budgets double, agencies will have the resources to do something about this. They could create structured apprenticeship programs. They could invest in training that teaches new communicators not just to use AI tools, but to supervise and interrogate them. They could build the next generation of practitioners rather than simply eliminating the entry points.

    What I fear, and what I think is entirely possible, is that agencies will look at this budget doubling as a margin opportunity rather than a reinvestment opportunity. More revenue, leaner teams, higher profits. And five years from now, we’ll be asking where the next generation of PR professionals is going to come from.

    So yeah, the Gartner prediction may well be right. AI does appear to favor the kind of credible third-party earned coverage that PR generates. And that’s genuinely good news for the profession. But good news is only useful if you do something smart with it. Neville, you’ve been watching the agency landscape in the UK and Europe for a long time. When you see a prediction like this, do you believe it? And what’s your read on whether the industry will rise to the moment or just cash the check?

    Neville: I must admit, I did say when I saw the article, I don't believe it. British TV viewers might recognize that phrase from a comedy show 20 years ago. I did follow a lot of what people were saying, and all I saw was bubble, bubble, bubble, hype. What struck me was what was missing: this was a marketing claim, as you mentioned, and Stuart Bruce wrote about that, and others have too, just pointing out this was a blog post from Gartner. There's no data to back up any of it. There's nothing cited. There's nothing you could trust to prove it or to give you confidence in repeating it. Yet that's what everyone has been doing, repeating this as fact.

    The particular phrase that was repeated by Gartner and then mass repeated: by 2027, mass adoption of public LLMs as a replacement for traditional search will drive a 2x increase in PR and earned media budgets. But there’s no evidence behind that. Yet what we saw was mass repetition all over, LinkedIn in particular.

    I did read an article by Stephen Waddington, well worth reading, published on the 16th of March on his blog about this topic. And he's critical. I think his starting line is "when industry optimism outruns the evidence," and that's exactly where we're at with this. I've seen sensible voices — you, Stuart, a few others — saying that if this is true, then this is what it could mean, this is what could happen. But as with a lot of things we see, the maybe, perhaps, could, etc. gets brushed under the carpet, and suddenly, before you know it, this is presented as what's going to happen.

    So I’ve not seen a huge amount of conversation about this, to be honest, except when this first appeared. That said, today I saw two posts on LinkedIn from people repeating this who obviously just came across the Gartner piece and they’ve reposted it.

    Shel: The long tail lives.

    Neville: Exactly. So Stephen goes into — he makes a point in his post about GEO, and I think that’s actually contextually good. He’s saying Gartner’s observation may ultimately prove correct. But the path from the insight to a doubling of budgets is far from certain. He says, GEO remains highly contested. I’ve seen others saying that too. The mechanics of how AI models select, weight, and attribute sources are still evolving. This is an era where budgets are being directed to support discovery work.

    So what needs to happen instead, he says, is a call to action, I suppose, to communicators. When you see this claim being made, please challenge the argument. And if we aren’t set to see a boom in public relations work, some of that investment will need to be diverted to ensure the sustainability of earned media. And that, to me, is a very sensible point to make.

    All of this is certainly why I didn't post about this on my blog. When I saw it, I was attracted to it, thinking this could be an interesting topic to stimulate some attention. Then I read it and started seeing others like Stuart saying, wait a minute. So I thought, no, I'm not going to join a hype bandwagon here without some further research. In the end, it didn't seem compelling enough to me to spend the time on it. Let's see what emerges further from this, if anything. But like you said, Shel, if this turns out to be true, then happy days.

    Shel: Yeah, I doubt it myself. I think what we're going to see is an incremental increase in PR spending as a result of this. And that's because we're not going to see some mass revelation across industry, all at the same time, that, my God, we need to invest more in earned media so that we're visible in search results that are now happening on LLMs instead of search engines. This is going to be gradual.

    One company is going to pick up on it, then another. But what I have seen, ongoing and regularly, are new reports, new studies, new research coming out. It all validates that LLMs are in fact generating their search results based largely on earned media. And I think as people wake up to that, they'll realize that if we want to be present in those results — it's like showing up on the first page of Google search results — if we want to be in the answer when somebody asks a question where our expertise, our thought leadership is relevant, then we need to bolster our earned media.

    One of the things that worries me, though, about this bolstering of earned media is how many more press release pitches am I going to get? How many more press releases that have nothing to do with me or what I do are going to show up in my inbox? You're going to see reporters pitched way more than they're being pitched now. And there may be some blowback as a result. It's like, hey, PR industry, back off — too much. So there's also that to consider.

    Neville: Yeah, I agree. So the simple takeaway here is: don't believe everything you read online, and take time to pay close attention to what people are saying about this before you repeat anything. Just be clear in your mind.

    Shel: Yeah, I was also going to say that I think owned media, the stuff that you produce on your own website, will see a renewed emphasis. So you're producing really interesting stuff that people start looking at. That counts, too. That's one of the categories of media that was included in this research. So you don't have to rely on earned media all that much if you can do a great job of producing that content.

    Neville: Good tip. OK, so earlier we talked about how work is priced. That was our piece about the billable hour. Now let’s consider how work is measured, because there’s another story that feels connected but from a different angle. The Financial Times reported that JP Morgan has started using technology to check whether the hours junior bankers say they work actually match their digital activity — things like keystrokes, meetings, and video calls. The bank says this is about well-being, about awareness, not enforcement, about making sure people aren’t overworked. And on the surface, that sounds reasonable.

    But when you look a bit closer, it raises some uncomfortable questions. What’s really happening here is a shift from reported work to observed work. Not what you say you did, but what the system can verify. And that’s where the reaction gets interesting.

    If you look at the comments on the FT’s post about this, there’s a very clear pattern. Some people see this as logical, almost inevitable. In a data-driven industry, of course you measure activity more precisely. But a lot of the reaction is skeptical, even uneasy. You see comments like, “this really screams we trust our employees.” “This is a classic case of measuring what’s easy instead of what matters.” “Big Brother is watching you.”

    And then there’s a more nuanced point that comes up repeatedly. Does this actually improve anything, or does it just change behavior? Because if people know they’re being measured on activity, they optimize for activity. More keystrokes, more visible presence, more signals that look like work — but not necessarily better outcomes.

    And that connects directly to the earlier discussion about billing. If AI is automating more of the actual work — the analysis, the modeling, the drafting — then what exactly are we measuring here? Time, activity, presence, or value?

    There’s also a deeper cultural question. Investment banking has long had a reputation for extreme hours. JP Morgan has already tried to address that, capping weeks at 80 hours, for example. 80-hour weeks. The days of 40-hour weeks are a distant memory, obviously. But if people were underreporting hours to stay on deals, then the issue isn’t just measurement — it’s incentives, it’s culture. Technology can surface that, but it doesn’t resolve it.

    So this opens up some bigger questions. Are we moving towards a world where all knowledge work is continuously monitored and verified? Does that improve trust or undermine it? And if both pricing and measurement are shifting at the same time, what does a fair day’s work even mean anymore?

    Shel: Absolutely. One of the things we keep hearing about AI is that organizations are going to have to rethink things like workflows. And we're talking about organizations that are not going to look at all in five years the way they do today because of AI. Do people really think it's still going to take somebody 40 hours to do what used to take them 40 hours, if all of that grunt work is being taken over by AI?

    On the other hand, I have seen that AI has increased the number of hours people are spending on their jobs. There's some very recently released data showing that people are more stressed now with AI in the picture. And if you're putting in more hours, is this really an issue?

    I'm also struck, as you mentioned in the report, by the signal of distrust this sends. I've always felt that the availability of these tools that allow this kind of monitoring raises the question of, you know, just because you can, should you? And no, I don't think that you should. I think there are better ways to determine whether your people are working, and looking at their outputs is the best of those. Have they delivered what you expected them to deliver?

    Because when you destroy the trust you might have had, or perhaps never had in the first place, new hires who come in and find they're being monitored this way are just inclined to find ways to cheat. I saved an article in my link blog not too long ago from the HR Digest about key jamming.

    The point of this was that if you have employees who are doing this, you have a bigger issue. But if you haven't heard of key jamming, these are easily available devices that remote workers place on their keyboards to press a key continually. So it looks to the monitoring software like that keyboard is active, that the employee is working those hours. They could be off doing whatever they want.

    I imagine some keystroke-monitoring software has been updated to address this and to make sure people are typing real words or real numbers, not just repetitively striking the same key. But then employees will figure out the next thing, or the companies that sell these products will figure out the next thing to make it appear that the employee is working.
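    To make that cat-and-mouse concrete, here's a minimal sketch, in Python, of the kind of heuristic a monitoring tool might use to flag key jamming. The function name and thresholds are invented for illustration; this isn't any vendor's actual detection logic.

    ```python
    def looks_like_key_jamming(keys: str, max_run: int = 25, min_distinct: int = 4) -> bool:
        """Heuristic check for key jamming: a very long run of one key,
        or almost no variety in which keys get pressed over a session.
        Thresholds are illustrative guesses, not tuned values."""
        if len(keys) < 50:          # too little data to judge
            return False

        # Longest run of the same character (a weight sitting on one key).
        longest = run = 1
        for prev, cur in zip(keys, keys[1:]):
            run = run + 1 if cur == prev else 1
            longest = max(longest, run)

        # A jammer alternating two keys beats the run check but not this one:
        # real typing over a long session uses far more than a few distinct keys.
        return longest >= max_run or len(set(keys)) < min_distinct

    print(looks_like_key_jamming("j" * 5000))                    # True
    print(looks_like_key_jamming("the quick brown fox " * 50))   # False
    ```

    Even this toy version shows why the arms race is endless: each check (run length, key variety) just tells the jammer-makers what to randomize next.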

    Better to build trust so that the employees will want to produce great work for the organization that they love working for than to destroy trust and implement these kinds of monitoring tools.

    Neville: So it's interesting. JP Morgan is quite resolute in its defense of this, because, as they say, they're doing it to help junior employees not overwork. There was a case here where an intern at the Bank of America died in 2013, a death the coroner said was linked to long working hours. And anecdotal stories have emerged constantly since then about people who are emotionally wrecked by the hours they've got to work.

    To be fair to JP Morgan, they've responded to that at scale in the organization. The trouble is that nearly every comment I've seen on this is extremely skeptical about their true motive. So they've got a credibility problem in explaining this well. This is about awareness, not enforcement, they say in their prepared statement. It's designed to support transparency and well-being and to encourage open conversations about workload. They're going to roll it out much more widely across the organization.

    The estimate is based on employees' weekly digital footprint, including video calls, desktop keystrokes, and scheduled meetings. And people being people, part of the thrust of the article is what some of these junior employees are doing to be counted, to get the checkbox that says you're doing okay, so they can keep spending time on the deals they're trying to close. If they followed this to the letter and reduced their hours, they wouldn't be able to close the deal. So I get that. They'll find ways to work around this.
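    To see why "observed work" invites exactly those workarounds, consider a toy estimator built from the signal types the FT describes. The formula and every number here are invented purely for illustration; they bear no relation to any bank's actual model.

    ```python
    def estimate_observed_hours(keystroke_minutes: float,
                                video_call_hours: float,
                                meeting_hours: float) -> float:
        """Toy estimate of 'observed' weekly hours from digital signals.
        Treats active typing time as work and adds calls and meetings.
        Illustrative only; not any real employer's weighting."""
        return keystroke_minutes / 60.0 + video_call_hours + meeting_hours

    # A week heavy on offline thinking (reading, whiteboards) scores low...
    print(estimate_observed_hours(keystroke_minutes=600,
                                  video_call_hours=3, meeting_hours=4))   # 17.0
    # ...while a key jammer running all week registers heroic effort.
    print(estimate_observed_hours(keystroke_minutes=3000,
                                  video_call_hours=3, meeting_hours=4))   # 57.0
    ```

    The model can't distinguish thinking from absence, or jamming from typing, which is the "measuring what's easy instead of what matters" complaint in miniature.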

    And I wonder, is this inevitably what we should expect to see in every organization? Or should organizations surely approach this in a way that doesn't encourage employees to find workarounds for these kinds of systems? I don't know. My sense is that we're going to see a huge amount more of this kind of thing in service industry firms in particular, starting with banks, I suspect.

    Shel: I hope not. I mean, let's take them at their word. Let's say that having Big Brother looking over employees' shoulders really is for the employees' benefit: they don't want employees overworking because they don't want them dropping dead at their desks. Great. That's a great thing.

    You do that by having well-trained managers who understand that their role is to set expectations and to display the kind of caring for the members of their teams that ensures they're not overworking. Where I work, we are working really hard in communications, in HR, and at the executive level to develop this culture of managing, where managers are checking in on employees to make sure they're okay. We're training managers to watch for signs of mental distress among employees and then to reach out and say, hey, let's take care of this, right?

    It sounds to me like JP Morgan would rather implement a Big Brother program than have engaging managers, one of the pillars of employee engagement, I might add. Why do people leave organizations? 50%, according to some research, leave because of their boss. And if you have this churn among your junior people, maybe that's because you're doing a piss-poor job of training your managers to be really good managers. If you did that, you wouldn't need to erode the trust of your employee base by implementing Big Brother systems.

    Neville: That makes total sense. I agree with you. But I’m wondering, maybe there’s something structurally amiss here. So for instance, the FT says in 2024, JP Morgan appointed a senior banker to oversee the well-being of junior staff. JP Morgan has since curtailed weekend work and also capped the working week for younger employees at 80 hours, typically based on self-reported numbers. That’s key, that last bit.

    This process has proved imperfect as some junior bankers misreport the hours they work. One issue is they declare fewer hours than they have actually spent to avoid being pulled from existing deals or to ensure they can still be added to new ones. So I would say, if we kind of know this kind of behavior is going on, what are we going to do to address it and try and bring them around to our thinking? But that requires structural change in the organization as to how you do all this.

    Shel: I have an answer. If AI is saving you money, use that money to hire more junior people so that nobody has to put in that kind of time. So staffing should increase as a result of the use of AI, not decrease, says I.

    Neville: Are you listening, JP Morgan? Well, yeah, that's a fair comment. Reading a bit more of the FT piece, it focuses on workplace surveillance technologies generally. So it's not necessarily AI doing this, although AI must be in there somewhere.

    Shel: No, no, I understand. But if we’re using AI in the organization and it’s lowering costs because the rote work is being done by the AI, those savings could go to the additional staff. So nobody has to put in 80 hours.

    Neville: Yeah. Well, I think it’s a problem across the sector because the FT quotes Goldman Sachs, for instance: junior bankers on occasion have been pulled aside and told to rest when its internal electronic monitoring was triggered. Get that. That’s how they’re watching all the time.

    I think the comment someone made on the FT’s piece about, you know, we’re going to see more of this — I think we will. It is clearly not perfect. I’m reminded a little of some of the stuff I paid a lot of attention to a couple of years ago about surveillance in China and the surveillance society in China, where you are monitored constantly all the time by the state. And it doesn’t necessarily mean central government, but the local way you live — the town, the city — monitors everything you do: what you spend your money on, what time you get up, what time you get on the train to go to work, how you clock in, you swipe your card — all that.

    That's built into their society and structure. We are probably heading that way, I would argue, in Western countries, notably in some European countries. I don't know about the States, Shel, to be honest. I don't really know whether this is likely to become prevalent there anytime soon. I wouldn't be surprised if it is, particularly if it's done covertly as opposed to openly and transparently, which I think is likely in America.

    Shel: Well, mass surveillance has definitely been in the news in the US lately with Anthropic pushing back on the Pentagon’s insistence that they be able to use Claude for that.

    Neville: Yeah, I mean, we've got experiments going on here which make the headlines now and again, although no one seems to be unduly concerned: police in some jurisdictions are trialing facial recognition technology that is now far superior to what's been used before, scanning people as a matter of course in public places. That, I would say, is an inevitability. We're going to see that.

    So what does that mean for organizations? That's a broad avenue to go down, the discussion on that wide topic. But within an organization, it surely does become understandable, if not acceptable, that you're observed when you show up at the office to work. And by the way, showing up at the office is still a thing for many organizations, even though I'm now seeing in all the newspapers here that, because of the war in Iran and the price of oil shooting up, there's talk that one way to help reduce energy usage is to work from home, drive less, and drive slower.

    So that kind of talk is now starting to permeate public discourse. And I wonder what difference that will make to any of this, because if more and more people want to work from home, that reverses the trend. Are we going to see a backlash from employers who demand people come to the office? These are just questions. I don't have answers for them, but they're part of the picture. We are facing a kind of change here that has good points, which I can see quite clearly, but the state we're at with all of this is alarming.

    Shel: Yeah, just as a point of interest, yesterday I watched a video on YouTube of Senator Bernie Sanders talking to Claude. I'll share the link in the show notes. He's asking Claude questions about what AI can do in terms of this kind of surveillance, its monitoring of people. And Claude is very, very candid in its answers to Senator Sanders. It's about 11 minutes, and I think it's really worth watching because it surfaces a lot of these issues. As a society, I think we have to decide whether this is something we want in the workplace or in general.

    Neville: I agree. That’s interesting.

    Shel: Well, thank you, Dan. Great report. I have to admit that I have been neglecting my Mastodon instance. It’s called Mastocomm, C-O-M-M, for communications. I set it up when I figured that it was an easy thing to do and a great way to learn about how to establish an instance in the Fediverse. And I haven’t been taking care of it lately. And Dan, your report has inspired me to go back. I’ve been away so long, it wanted me to log in.

    But it’s still there. It’s still up and running, which means I still have money coming out of my checking account every month to pay the fee to the service I use to host it. So as long as I’m spending the money, I might as well manage that. So thanks for the reminder, Dan.

    Neville: Yeah, good report on that. I’ve not listened to your audio yet. But thinking about Mastodon, I don’t go directly to Mastodon. I haven’t been there this year. What I do is every time I post on Threads, it posts to the Fediverse. And so I do it that way. It’s cheating a bit because I’m not actually engaging with anyone there at all. But I get quite a steady stream of engagement back, people who like and so forth. And I do occasionally do the same myself via Threads. So it’s a lazy approach to doing it. But I’m okay with that because I’m present via Threads and that works well. And it’s a useful way of keeping in touch. If Threads is more likely to be your primary engagement channel rather than Mastodon, that’ll work quite well.

    Shel: If anybody’s interested in joining the Fediverse and being part of a Mastodon instance that is focused on communication, join me: mastocomm.org. I’ll look for you there.

    Shel: A professor at Syracuse University’s Newhouse School recently made a point that deserves to be heard beyond the J-school world. Jason Davis, who specializes in detecting disinformation, said the challenge today isn’t really about spotting fakes anymore. The AI tools are so good now that there just isn’t much that we can catch. To break the misinformation amplification cycle, people need to apply critical thinking before they decide to pass something on.

    Now that connects to something I've been watching closely, because the misinformation problem has moved well beyond being a journalism problem. It's a business problem now, and that means it's a communication problem. The scale is pretty significant. Deepfake incidents tracked globally surged from about 500,000 cases in 2023 to over 8 million last year. That's a sixteenfold increase in just two years. A recent executive survey found eight in 10 executives are concerned about AI-driven misinformation impacting their brand. Yet many admit their companies aren't fully ready to detect or respond.

    A University of Melbourne/KPMG global study of 48,000 people across 47 countries found 87% want stronger laws to combat AI-generated misinformation. And a survey found that fewer than four in 10 Americans say that they can confidently spot AI-generated content, and 88% say it’s harder now than a year ago to tell what’s real online.

    So who’s fighting back and how? Sophisticated newsrooms — think the New York Times, Bellingcat, investigative outlets worldwide — are now using multi-layered verification: a combination of reverse image search, metadata analysis, and geolocation cross-referencing to authenticate content. Reporters are using AI itself as a detection tool, analyzing thousands of posts to detect bot behavior by identifying patterns in timing, repetition, and network activity.
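    Those timing-and-repetition signals lend themselves to simple heuristics. Here's a minimal sketch of a bot-likeness score built on two of them; the weights and thresholds are invented for illustration and are nothing like a production detection model.

    ```python
    import statistics

    def bot_likeness(post_times: list[float], texts: list[str]) -> float:
        """Crude 0-1 score from two signals: suspiciously regular posting
        intervals and near-duplicate text. Thresholds are illustrative."""
        score = 0.0

        # Signal 1: timing regularity. Humans post at irregular intervals;
        # schedulers produce gaps with a tiny coefficient of variation.
        if len(post_times) > 2:
            gaps = [b - a for a, b in zip(post_times, post_times[1:])]
            mean_gap = statistics.mean(gaps)
            if mean_gap > 0 and statistics.pstdev(gaps) / mean_gap < 0.1:
                score += 0.5

        # Signal 2: repetition. Duplicate posts suggest coordinated amplification.
        if texts:
            score += 0.5 * (1 - len(set(texts)) / len(texts))

        return min(score, 1.0)

    # An account posting every 300 seconds on the dot, cycling two messages:
    times = [i * 300.0 for i in range(20)]
    posts = ["Stop the deal!", "Stop the deal now!"] * 10
    print(round(bot_likeness(times, posts), 2))  # 0.95
    ```

    Real systems layer on network analysis, account age, and language models, but the underlying idea is the same: machines leave statistical fingerprints.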

    Beyond individual newsrooms, the Coalition for Content Provenance and Authenticity, that’s the C2PA, is building broader infrastructure. They’re backed by Adobe, Microsoft, the BBC, Google, Meta, OpenAI, and others. With that backing, they’ve developed an open technical standard that functions like a nutrition label for digital content, establishing its origin and edit history. The U.S. Cybersecurity and Infrastructure Security Agency endorsed this approach in January last year. Adoption is still limited, but the standard exists and it’s worth watching.
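    The actual C2PA specification involves cryptographic signing, certificates, and a detailed manifest format, but the core "nutrition label" idea, a tamper-evident record of origin and edit history bound to the content, can be sketched in a few lines. This is a conceptual illustration only, not the real C2PA data model or API.

    ```python
    import hashlib, json, time

    def _fingerprint(entry: dict) -> str:
        return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

    def new_manifest(content: bytes, creator: str) -> list[dict]:
        """Start a provenance chain: who created the asset, when, and a
        hash of the content itself."""
        entry = {"action": "created", "by": creator, "at": time.time(),
                 "content_sha256": hashlib.sha256(content).hexdigest(),
                 "prev": None}
        entry["id"] = _fingerprint(entry)
        return [entry]

    def record_edit(chain: list[dict], new_content: bytes,
                    editor: str, action: str) -> list[dict]:
        """Append an edit entry that points at the previous entry's id,
        so altering any step, or the content, breaks the chain."""
        entry = {"action": action, "by": editor, "at": time.time(),
                 "content_sha256": hashlib.sha256(new_content).hexdigest(),
                 "prev": chain[-1]["id"]}
        entry["id"] = _fingerprint(entry)
        return chain + [entry]

    chain = new_manifest(b"raw photo bytes", creator="Example Newsroom")
    chain = record_edit(chain, b"cropped photo bytes", "Example Newsroom", "cropped")
    # A verifier re-hashes the file it received, compares it to the newest
    # entry's content_sha256, then walks the prev links back to the origin.
    ```

    The real standard adds digital signatures so a verifier can also check who made each entry, not just that the chain is intact.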

    There’s also a striking research finding from a field experiment with readers of the German newspaper Süddeutsche Zeitung. Exposure to AI-driven misinformation reduced overall trust in news, but actually increased engagement with highly trusted sources. As synthetic content proliferates, credibility becomes scarcer, and as a result, becomes more valuable.

    That finding has direct implications for us in organizational comms. A deepfake of your CEO, a fabricated press release, a manipulated earnings statement — these are no longer theoretical. A hacked news tweet in 2013 briefly erased $136 billion from the S&P 500. The tools to do something far more sophisticated are now consumer grade.

    Deepfake fraud attempts grew by 3,000% in 2023, and humans detected manipulated media only 24.5% of the time. So practically: monitor for impersonation of your executives and brand. This belongs in your communications infrastructure. It’s not just an IT thing. Establish a verify-first culture inside your organization. Have pre-drafted response templates ready for the scenario where fake content goes viral under your or your organization’s name.

    And invest in your organization’s credibility before a crisis arrives, because that research finding tells us audiences under information stress return to the sources they already trust. The newsrooms dealing with this are systematic. They document their processes and when they can’t definitively authenticate something, they say so. That’s the standard every comms team should hold itself to.

    Neville, I know you’re watching all of this from across the Atlantic where the EU AI Act is pushing content labeling into requirements under law by August 2026. Are organizations taking this seriously? And is this regulatory pressure in Europe making any difference?

    Neville: To your last point, I don't think it's making a dramatic difference. Awareness is rising. I'm seeing more people talking about this topic online across Europe, here in the UK too. But I think it requires far more, and more effective, communication to bring the messaging home to people about this huge topic. So it's early days.

    We've got debate continuing here in this country about online safety and all these other issues, which tends to obscure important details such as this one, which does require further debate. What I pay attention to, certainly, are the broad debates about all of this, but also what people are actually doing. You mentioned some examples in your introduction of what media broadcasters in particular are doing to verify the authenticity of content. I saw an excellent article the other day about what Wikipedia is doing in this area, because there's a place that's at high risk of misinformation and disinformation.

    But there's no uniformity from what I've seen, certainly. There are lots of homebrew solutions people are suggesting, and lots of good solutions being recommended by respected organizations, but there's not a big groundswell of action on this yet, it seems to me. So I'd be interested to hear what listeners in the UK and across EU countries have to say about what they're seeing in this area. But I don't see a huge amount of conversation going on about this.

    Shel: And I’d really appreciate, listeners, if you’re in organizations that are doing anything to identify misinformation and to catch it before it’s used or even redistributed — what are you doing? How are you going about that? Is there any infrastructure for this that’s being implemented? I’d really like to know because I think this is going to become a bigger problem faster than most people are aware of.

    Neville: Yeah, I mean, one thing I am seeing talked about, which caught my attention quite dramatically, is the amount of fake news in a broad sense: misinformation, particularly about the war in Iran, and the use of video that is simply fake. I'm also seeing genuine video having to be explicitly highlighted as not fake.

    The reality, though, is that like most things you encounter online, how do you really know? And what do you do if you see something and think, I'm going to share that with my network? What do you need to do before you do that? Most sensible people will take precautionary steps, the most fundamental of which is: can you trust what you've seen? Is the source credible and reliable? And whether or not it's a media property, who else is talking about this?

    So these are things that I do as a matter of course now on almost everything I encounter online, particularly if I'm thinking of sharing it. I've yet to be caught out by not doing that. I make it a point, partly because I'm doing far less of that than I was a couple of years ago. I don't post a lot on social networks, except stuff that I think is really interesting to share with people who follow me, or just because I feel like sharing it because I think it's interesting.

    And that works. There's no heavy message behind any of this stuff. But I do carry out due diligence, and I think I do it reasonably well, because I've yet to be caught out. Now, of course, someone listening to this might say, well, let's test him on something then. OK, fine.

    Shel: Now that we’ve heard you say this…

    Neville: So, right. Go for it and do that. Let's see how we go. But I think this is the status of where we're at. The changes that are happening because of world events, and the fact that these so-called bad actors are increasing — there are more and more of them. We have events taking place in the world now, note what's going on in the Middle East, that lend themselves to more of this. You've got to really do your due diligence on things you might not have felt you needed to before.

    Shel: Yeah, and I think due diligence needs to go beyond the tools that can detect a deepfake. You’ve got to remember that people were sharing content that was disinformation before there was AI. So you run your algorithm, you put a video through a tool and it says, yep, this is real video, it’s not AI generated — but it’s claimed that that video is showing something from the Iran war when in fact the video was shot years ago during, say, the Iraq war, and somebody just grabbed that video clip and made the claim that this is from the current conflict. This happens all the time. It still happens today. It’s not from this weather event. That’s from that weather event five years ago.

    So we have to be diligent and not just rely on the tools, and we have to come up with some solutions. I remember years ago, when we reported on it here, when blockchain was still a topic of conversation in digital circles, Ike Pigott had recommended a tool. I don't remember exactly how it worked, but as you shot video, it was recorded into the blockchain, which would verify its authenticity. And that became a way for people to see that it was genuine video, not manipulated somehow and not a deepfake — it was actually shot on a video camera and uploaded as a blockchain record in real time. So there are potential solutions out there. We need to get serious about implementing them in this profession.
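    Shel doesn't recall the specific product, but the mechanism he describes, hashing footage at capture time and anchoring the hash in an append-only record, looks roughly like this sketch. The ledger here is just a Python list standing in for a blockchain; everything is illustrative.

    ```python
    import hashlib, time

    ledger = []  # stand-in for an append-only public record

    def anchor_segment(video_bytes: bytes, device_id: str) -> None:
        """At capture time, publish a hash of the segment plus a timestamp.
        The entry proves these exact bytes existed at that moment."""
        ledger.append({"device": device_id,
                       "sha256": hashlib.sha256(video_bytes).hexdigest(),
                       "anchored_at": time.time()})

    def verify_segment(video_bytes: bytes) -> bool:
        """Later, anyone can re-hash footage and look for a matching anchor."""
        digest = hashlib.sha256(video_bytes).hexdigest()
        return any(entry["sha256"] == digest for entry in ledger)

    anchor_segment(b"...raw footage...", device_id="cam-042")
    print(verify_segment(b"...raw footage..."))     # True: matches the anchor
    print(verify_segment(b"...edited footage..."))  # False: any change breaks it
    ```

    Note what this does and doesn't prove: it establishes the footage hasn't changed since capture, but not that the scene is what a caption claims it is, which is exactly Shel's earlier point about recycled war footage.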

    Neville: Yeah, that’s a good example of the blockchain one, although that was pretty niche. That was pretty out on the edge, as it were. There were lots of things like that that just didn’t survive and disappeared. Things change, things evolve, and people are trying new things. I don’t mean bad guys, but in a good way. So let’s see how that goes. But you need to keep vigilant on all this.

    And by the way, when I mentioned misinformation, I wasn't thinking of deepfakes and that kind of thing. It's more the fundamental stuff that crosses your screen or your newsfeed every day, claiming that someone said something or did something, and it looks interesting and fine. Don't trust it until you verify it. If it's on the BBC or CNN or any other broadcaster, or the Süddeutsche Zeitung newspaper, the one you mentioned earlier, Shel, that's a good bet that it's OK.

    But you know what? Some media recently have been caught out with fakes. So it still pays to do your own due diligence, particularly if that content is something you’re going to use in a way that could embarrass you if it turned out to be fake or simply wrong. So it’s worth doing. Most people think that they don’t have time to do that. You have to make the time. This is part of your future.

    And AI has a role here. Arguably, you could say, well, I need to do this myself. No, you don't really. Your favorite chatbot, if you trust it and it knows enough about you, can do the searching and the finding of sources. You then check them. It can check them too, but you still have to do that part; the AI just makes it easier. There's no magic bullet or shortcut here. But it's worth it. You learn a lot doing this, too. I've learned huge things from doing all this myself, and it's been very, very useful.

    Neville: So there we are. OK, let's talk about bot traffic. In an interview with TechCrunch at South by Southwest, literally a week or so back, Cloudflare CEO Matthew Prince said that by 2027 — so, as you pointed out earlier, basically eight months away — bot traffic will exceed human traffic on the internet. That's not entirely new in principle. Bots have always been part of the web. But what he's describing is a change in scale and function.

    Now think about this: Cloudflare — I don't have the exact number, but don't they handle something like 30% of all the traffic on the web, passing through their servers somewhere? They do caching. They do all sorts of interesting things with people's data. I use it on my blogs. I'm sure we use it on the FIR network. It's part of the plumbing of the internet now. And you might remember, a month or so back, Cloudflare was all over the news when they were hit by a distributed denial-of-service attack or some such that took large chunks of the internet offline, because big properties like Amazon use Cloudflare too. So it's quite something.

    Anyway, historically bot traffic has been relatively stable, around 20%, largely driven by search engine crawlers. What’s changed is the impact of generative AI, said Prince. His point is that AI agents behave fundamentally differently from human users. A person researching a purchase might visit a handful of sites. An AI agent performing the same task might visit thousands of sites. This is not incremental growth. It’s a multiplier effect — not just more traffic, but a different kind of traffic.

    That has consequences at three levels: infrastructure, economics, and behavior. First, infrastructure. If AI agents generate orders of magnitude more requests than humans, then the web becomes a system that increasingly serves machine activity. Prince talks about the need for new infrastructure, including ephemeral sandboxes where agents can execute tasks without overwhelming the broader network.

    Second, economics. The commercial web has been built around human attention: visits, impressions, and clicks. If a growing share of traffic is non-human, that model doesn’t just weaken — it becomes misaligned with how the web is actually used.

    Third, behavior. Prince characterizes this as a platform shift comparable to the move from desktop to mobile. If that’s right, then the way information is discovered, consumed, and acted upon changes fundamentally — and not necessarily by humans.

    That raises a set of implications that go beyond infrastructure. If machines are increasingly intermediating access to information, then visibility is no longer just about being found by people. It’s about being processed, selected, and used by systems. This links back to the earlier themes. We talked about how AI changes what work is worth. We followed that with how AI changes what and how work is measured. Here, it’s changing the environment in which both of those things happen.

    So this is less about traffic and more about control — who or what is actually navigating the web. Which leads to some important questions. If AI agents are doing more of the searching, what does it mean to be visible online? If traffic no longer equates to human attention, how do organizations think about value? And if this is indeed a platform shift, what replaces the current models that underpin the web?

    Shel: These are interesting questions, and I think this is ultimately more a matter of evolution, just like the web was, and the internet before we had the graphical interface of the web. It's a shift in what's doing what. But at the end of the day, all of those bots have been deployed by whom? I mean, I have agents out there, set up on Claude and on ChatGPT, that go out and do searches and come back and give me reports. Me, I'm a human, last time I checked.

    And I’m using the results of the work that those bots do. So these agents are proxies for the humans who need something done with this information, whether it’s delivering a report or creating a spreadsheet or what have you.

    These are human-deployed bots. I mean, ultimately in every case, a bot has been deployed by somebody for some purpose. And I think having your content out there for those bots to find so that those results are delivered back to the human and you’re visible there — all it’s doing is reducing the need for the human to sit there for hours doing the searching and just having the AI go out and do the searching for them and delivering back results. But those results are still being used by people.

    So this doesn’t concern me all that much, unless there’s something going on here that I’m not aware of with agents suddenly creating themselves to go off and engage in activities that have no human behind them, in which case we’re in the realm of science fiction. And I don’t think we’re there yet.

    Neville: Well, that could be the case, although I think there are signs we might be heading in that direction. Think about what we talked about in the last episode: that darker place you cited, with Ethan Mollick talking about what happens if it all gets taken over by an AI. That question applies here as well. You've got the AI agent instructing other AI agents. And I read someone making quite a compelling case that this is already happening. So that wouldn't surprise me one bit. We've got to think of that too.

    Shel: Yeah, now we’re talking about two different things, right? I mean, we’re talking about bots and agents here as an umbrella topic. But the fact that bots have been deployed to search and report back is one thing. Bots that are creating content is another, which is actually the topic of my next report.

    Neville: Got it. Yeah, you’re absolutely right. We were talking about bots. So they are deployed by humans to achieve certain things. I guess I could project that out and say what happens in a darker place where the bots are deployed by AI agents unbeknownst to the human. I mean, I’m not Skynetting here, by the way. This is just projecting the thought out. And I welcome these kinds of discussions on “what if” when we see what’s happening now. It immediately makes you think, yeah, but what if? So this is part of how we generate good conversation about this kind of topic.

    But it is interesting. I think about the way Matthew Prince framed it: someone searching for something at an online retailer may do a couple of dozen searches, but an AI instructing a bot produces thousands of searches in a short period of time. And you suddenly see, wow, the scale of this is absolutely phenomenal. That's really, I think, part of what Prince is arguing: when bot traffic overtakes human traffic, we are confronting scale of a different order of magnitude, driven by the system itself.

    Is he ringing alarm bells here? I'm not sure whether he is or not, but he's looking at the need for a new kind of infrastructure to take care of this. And I think that's actually a good avenue to explore.

    Shel: Probably. I mean, Google has always used bots to go out and scour the web — called them spiders back in the day. But they only sent out the one and it found everything, those millions and millions of sites. And all that information resides on Google’s servers. So when you’re doing a search, it’s not going out onto the web, right? It’s looking in its own data centers and giving you those results. And those spiders, those bots, are always out there, always running, but just the one from Google.

    Now with AI, you’re asking it to go out in real time and scour the web. So yeah, it’s sending out thousands in order to do essentially the same work that Google did. And then it brings you back the result in that narrative output that you get. So that’s why we’re seeing so many more bots out there. Is this a problem? I’m not an engineer, so I don’t know.
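    The index-versus-live-fetch difference Shel describes is easy to put into rough numbers. Every figure below is invented purely to show the shape of the multiplier, not to estimate real traffic.

    ```python
    # Back-of-envelope comparison of the two retrieval models.
    # All numbers are made up for illustration.

    pages_on_web = 1_000_000
    queries_per_day = 5_000_000

    # Model 1: crawl into an index once, answer queries from the index.
    index_fetches_per_day = pages_on_web * 0.05   # recrawl ~5% of pages daily
    # (queries hit the index in the data center, not the open web)

    # Model 2: agents fetch live pages for each query.
    fetches_per_query = 40                        # sites an agent might visit
    agent_fetches_per_day = queries_per_day * fetches_per_query

    print(f"index model: {index_fetches_per_day:,.0f} fetches/day")   # 50,000
    print(f"agent model: {agent_fetches_per_day:,.0f} fetches/day")   # 200,000,000
    print(f"multiplier:  {agent_fetches_per_day / index_fetches_per_day:,.0f}x")
    ```

    Change any assumption and the multiplier moves, but it stays in multiplier territory, which is Prince's underlying point.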

    Neville: No, I don’t know either. I’m not sure it is a problem. But I’m cognizant, paying attention to what Prince is saying, that none of this is incremental growth — it’s a multiplier effect. And could it be that we’re at risk of everything grinding to a halt? Is that what he’s saying?

    The consequences I listed — infrastructure, economics, and behavior — make sense, and they are connected. Agents generating orders of magnitude more requests than humans are capable of is partly the point, and I can see that. The web then becomes a system that increasingly serves machine activity, which is how he makes that connection. He talks about the need for new infrastructure, including sandboxes where agents can execute tasks without overwhelming the broader network. That makes a lot of sense.

    Shel: Yeah, I like that. Nothing wrong with that.

    Neville: I use sandboxes myself, so I understand conceptually what that means. Then there's the economics of it all, where the behavior is now totally different. Visits, impressions, clicks — that's what humans did, or still largely do. But as he argues, with a growing share of non-human traffic, that model doesn't just weaken — it becomes misaligned with how the web is actually used today.

    OK, does that mean we need to change that? Well, yes, it does. How do we do that? Well, that's part of the bigger debate. On the behavioral side, he's likening this to the move from desktop to mobile. If he's right, then the way all of this is discovered, consumed, and acted upon changes, not necessarily by humans but by the AI. Is this a bad thing? I don't know. Maybe he's just raising a hand of caution and ringing the alarm bell. Maybe that's it. But what he's suggesting is certainly provocative.

    Shel: Yeah, certainly there’s absolutely going to be more bot traffic on the internet. That’s inescapable with all of this. Maybe the LLMs, the labs, find ways to confine the searches so they’re searching relevant sites to reduce that traffic. I don’t know.

    Neville: Yeah. So let's hear your connected piece on this, then, which assumes that humans are not at the heart of all of it.

    Shel: Sure. And you mentioned Ethan Mollick earlier. I mentioned this in an earlier episode a couple of weeks ago, I think, but he said that when he posts something, he can tell that about 70% of the comments left on his posts have been generated by bots. And that has weakened the value of LinkedIn to him, which was discovering smart people with intelligent thoughts and perspectives.

    So we have bots that are now creating content. So you talked about bot traffic — stay with that theme, but focus more on the content. A new peer-reviewed study just published in the Journal of Public Relations should be required reading for anyone responsible for managing an organization’s reputation and messaging. The paper is titled “Social Bots as Agenda Builders: Evaluating the Impact of Algorithmic Amplification on Organizational Messaging.” And it came to my attention by way of Bob Pickard, one of Canada’s most respected PR practitioners and someone whose commentary on this research carries special weight. More on that in a minute.

    The research, led by Philip Arceneaux at Miami University, along with colleagues from the University of Arizona, University of Texas, and University of Florida, is the first study in public relations scholarship to empirically measure how social bots interfere with organizational messaging. The authors note they found no prior PR research addressing this specifically, which is remarkable given how long the threat has been visible.

    The study analyzed nearly 900,000 tweets generated during Ohio’s 2022 midterm elections. What the researchers found was that social bots successfully influenced the agenda formation process, most heavily in negative tone and most notably among the election campaigns. Bot messaging was most effective at influencing attribute salience — that is, how issues were framed and characterized — driving primarily negative sentiment. The bots were the strongest influencers of campaign agendas with measurable downstream influence on press and public discourse.

    Here’s the distinction that Pickard zeros in on in his commentary. And I think it’s the most important insight in the entire body of research. The bots didn’t control what was discussed. They controlled the tone in which it was discussed. And as Pickard writes, that may be a more dangerous lever. Your organization puts out a carefully crafted message. The bots don’t need to invent a counter-narrative. They just need to inject enough negativity around yours that the frame gets corrupted before it can set.

    A primary strategy social bots adopt is the creation of information disorder — information ecosystems filled with suspicion and distrust that erode public confidence. And as Pickard observes, this has a direct downstream effect on communications decisions. Distorted inputs produce distorted decisions. If your social listening is picking up manufactured sentiment — bot-driven negativity masquerading as genuine stakeholder concern — you may be prioritizing the wrong issues, reacting to the wrong pressures, and in some cases, misreading your stakeholders entirely. Some of what looks like groundswell may just be a bot farm.
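    One practical response to distorted inputs is to discount sentiment from accounts that look automated, using something like the bot-likeness heuristic sketched earlier. Again, this is a hypothetical illustration, not a published social listening method.

    ```python
    def weighted_sentiment(posts: list[dict]) -> float:
        """Average sentiment, down-weighting accounts that look automated.
        Each post: {'sentiment': -1..1, 'bot_score': 0..1}. The weighting
        scheme is invented for illustration."""
        num = sum(p["sentiment"] * (1 - p["bot_score"]) for p in posts)
        den = sum(1 - p["bot_score"] for p in posts) or 1
        return num / den

    feed = ([{"sentiment": -0.9, "bot_score": 0.95}] * 80 +   # bot pile-on
            [{"sentiment": 0.2, "bot_score": 0.05}] * 20)     # real stakeholders

    raw = sum(p["sentiment"] for p in feed) / len(feed)
    print(round(raw, 2))                       # -0.68: looks like a crisis
    print(round(weighted_sentiment(feed), 2))  #  0.01: mostly manufactured
    ```

    The numbers are contrived, but the lesson isn't: unweighted social listening hands the agenda to whoever runs the most accounts.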

    The asymmetry that Pickard describes is sobering. A small network of automated accounts can systematically degrade the messaging environment of a well-funded organization with a full communications team. And as lead researcher Arceneaux put it, it’s not natural selection anymore — it’s artificial selection by who controls the most bots.

    A survey cited in the study found that 51% of leading communication professionals already reported that social bots present a clear threat to organizations and their reputations. And practitioners view social bots as the most pressing ethical challenge in public relations. And that was before generative AI made bot-produced content dramatically more convincing.

    Why does Pickard’s voice matter here particularly? Well, when he blew the whistle on the Chinese interference at the Asian Infrastructure Investment Bank in 2023, hundreds of pro-China bots on Twitter targeted him with insults, accusing him of being an American agent, a white supremacist, and a neocolonialist. The pattern the researchers describe in the study — rapid negative amplification, coordinated framing, and agenda hijacking — isn’t abstract to Bob. He has operated inside of it.

    And his observation that state-directed information operations seem to understand the bot asymmetry better than most corporate communications leaders is a pointed challenge to our profession.

    The study recommends stronger media relationships, better investment in bot detection tools, and a return to traditional polling as a signal less susceptible to manipulation. And that’s sound advice. And on the practical side, research on bots’ impact on public discourse suggests their influence is most pronounced in the early stages of an issue — before credible sources establish the dominant narrative. Which means getting your authentic message out fast, before the negative frame hardens, is now a genuine strategic imperative, not just a good practice.

    There's also a real-world corporate illustration of this dynamic, and it's one we've talked about more than once. In 2025, research found that roughly half of all the posts about the Cracker Barrel controversy in its early days were driven by inauthentic bot activity. So a minor design story was artificially elevated into a culture war flashpoint before human communicators could get their footing. That's the playbook now.

    Neville, I know you follow this activity and information disorder closely and you’ve watched platform governance response in Europe in particular. What do you think? Are social platforms doing enough to protect organizations from bot-driven agenda hijacking, or are communication professionals essentially on their own here?

    Neville: I don't think they're doing enough. The platforms are doing some, but their attention is not on this at all. I think any organization, any corporate communicator, needs to operate as if they're on their own and take the steps that are needed.

    Reading Bob's piece on LinkedIn, there's an interesting turn of phrase he uses: having "hands-on combat experience" against "synthetic competitors gaming the algorithm in contested environments" is now extremely important. Make of that what you will, but you need to be up to speed with these developments. There are plenty of places you can get information, insights, and guidance.

    I think, though, that this is the fundamental point Bob Pickard makes in his piece: some communication leaders are still fighting the last war. This new research soberly explains the new realities of modern PR battlegrounds.

    Now, I have not read the article, Shel, that you shared in our Slack channel. I mean, it's 34 pages of eight-point type, it seems to me. It's big. So I would get my AI assistant to summarize the whole thing for me and give me the highlights. I haven't done that yet, but I think I will, just to get a good understanding of this.

    It seems to me that this is yet another example of the changes happening, whether we like it or not, that we have to pay attention to as communicators. We've touched on quite a few in this discussion today, and here's another one. So I can't really comment more than that, Shel. I've not read the report, which I am going to do. But Bob's intro to the piece on LinkedIn is good, and it makes it easier to wade into the paper itself. Although I think for most communicators, some kind of summary is what they're going to need rather than trying to read the whole thing.

    Shel: Yeah, well, the bottom line is, I think, pretty simple. If you release some information and it's in somebody else's interest to shift the tone in order to control the agenda, those bots are going to be deployed very, very quickly to create content that changes the framing of what you started. You had a communication goal, and you as a communicator need to be prepared for that. You need to have processes in place — and these are new processes and new workflows — to make sure that what you want people to understand is the message that fixes itself in people's minds before the bots can come in and mangle it, because that's what's happening pretty routinely now.

    Shel: And that will be a -30- for this episode of For Immediate Release. We do want to remind everybody again, because we mentioned it earlier: comment on what you've heard. If you have thoughts, if you have experiences to share, if you have questions, share them. The place most people are doing that these days is LinkedIn; in fact, every comment we shared today was left on the LinkedIn posts where we announced the availability of a new episode. So if you follow Neville or me on LinkedIn, you will get notifications of new episodes. That's the place to comment.

    You can always comment on the show notes. That’s where people used to do this all the time. Remember blogs when people used to comment on blog posts? You could do that. You can send us an email to [email protected].

    Shel: Boy, am I overloaded with spam in that account, but absolutely not one comment in the last month. One of the things I find in that email account is any voicemail messages you have left. Just go to the FIR Podcast Network website and click Send Voicemail, and you can send us your comment that way — we'll play it. We'd love to have another voice on the show. You can also send us an audio file that you record; just attach it to an email and send it to [email protected].

    We also have the FIR community on Facebook. So there are lots of places where you can tell us what you think, and we'd love it if you did. We will share that on the next monthly long-form episode, which is coming on Monday, April 27th. Neville, you and I will record it on Saturday, April 25th. Between now and then, not this week but starting next week, we will have our shorter-form, one-topic weekly episodes. There should be three or four of those before we get to the April long-form episode. And that will in fact be a -30- for this episode of For Immediate Release.

    The post FIR #506: Battle of the Bots! appeared first on FIR Podcast Network.

    23 March 2026, 7:05 am
  • 21 minutes 13 seconds
    FIR #505: Social Media’s Big Shift

    In FIR #505, Neville and Shel dig into Hootsuite’s Social Media Trends 2026 report, which argues that social media is no longer just a communication channel — it’s morphing into a search engine, cultural radar, and real-time research tool. They explore what it means for communicators when younger audiences treat TikTok and Instagram as their primary discovery platforms, and when Google itself starts indexing social content. The conversation also tackles “fastvertising” — the growing pressure on brands to react to cultural moments within hours — and whether that speed actually translates to bottom-line results or just burnout.

    The discussion takes a provocative turn when Shel raises Ethan Mollick’s warning that public forums are being systematically overrun by machine-generated content, with research suggesting one in five accounts in public conversations may be automated. They weigh the AI paradox facing communicators: generative AI has become table stakes for social media production, yet 30% of consumers say they’re less likely to choose a brand whose ads they know were AI-created. Neville and Shel agree that social media can serve as both a publishing channel and a listening tool — but only if human-to-human communication can survive the rising tide of bot-generated noise.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, March 23.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript:

    Shel: Hi everybody, and welcome to episode number 505 of For Immediate Release. I’m Shel Holtz.

    Neville: And I’m Neville Hobson. Social media might be going through its biggest change since the rise of the news feed, and it’s happening quietly. Platforms that started as places to connect with friends are increasingly acting like search engines, cultural sensors, and even market research tools. It’s been a while since Shel and I talked about social media on the podcast, and frankly, that’s partly because the conversation often feels repetitive. New platforms appear, algorithms change, someone declares the death of Twitter again. That’s the kind of format that we seem to be following. But every now and then, a report comes along that suggests something deeper is happening. Hootsuite’s new Social Media Trends 2026 report published last month argues that social media is no longer just a communication channel. It’s becoming something much broader — part search engine, part cultural radar, and part market research lab.

    Neville: Take search, for example. Younger users increasingly treat platforms like TikTok or Instagram as search tools. Instead of Googling “best coffee shop in London,” they search TikTok and watch short videos from real people recommending places to go. And now Google itself has started indexing Instagram posts and surfacing short-form social video in search results. The line between social media and search is starting to blur.

    Neville: At the same time, we’re seeing a strange tension around artificial intelligence. According to the report, most social media managers now use generative AI tools every day to write captions, brainstorm ideas, edit images or video. But audiences are increasingly suspicious of content that feels automated or synthetic. More than 30% of consumers say they’re less likely to choose a brand if they know its ads were created by AI. So brands are in a curious position. AI is becoming essential behind the scenes, but the content that performs best often needs to feel unmistakably human.

    Neville: And culturally, social media itself is fragmenting. The report points to what it calls Gen Alpha Chaos Culture — absurd memes, distorted audio, and intentionally chaotic editing styles that dominate TikTok among younger audiences. Meanwhile, older audiences — that’s you and me, Shel — are gravitating towards almost the opposite aesthetic: nostalgic references to the ’80s and ’90s, calming, cozy content, and even posts about slow living and digital detox. I do some of that, but I also do the other stuff too. So it’s hard to pigeonhole me, I have to tell you that.

    Neville: So reading this report left me wondering something slightly provocative. Maybe social media isn’t really social anymore. If discovery is driven by algorithms and search behavior rather than who you know, perhaps these platforms are evolving into something else — systems that surface information, culture, and trends in real time. Which raises the bigger question for communicators. Are we still thinking about social media as a place to publish content? Or is it becoming something much more powerful — a tool for understanding behavior, culture, and trust as it unfolds online? Which leads me to a first question. If people increasingly discover products, places, and even news through TikTok or Instagram rather than Google, does that fundamentally change how communicators should think about social media?

    Shel: I absolutely think so. This shift deserves way more attention than it’s been getting from marketers and communicators. We’re looking at a fundamental change in how people get information. The rise of social media as a primary search engine is not fringe behavior. In 2026, this is going to be the dominant reality for a massive swath of the population. Brands are just starting to get their arms around AEO, answer engine optimization, and now they’re going to have to apply the same efforts to social content that they’ve historically reserved for traditional search engine optimization. So captions, alt text, and subtitles aren’t going to be nice-to-haves. They’re the bedrock of discoverability. And there’s a specific angle here for those of us in internal communications too. If employees are using TikTok and Instagram the way they used to use Google to make personal decisions, we have to ask if that behavior is bleeding into their professional research. And there’s data that suggests it is. A company called Alpha P-Tech did a study and found that 75% of B2B buy-side stakeholders are going to use social media to gather information about vendors and solutions this year. So this isn’t just a consumer trend. This is a professional evolution too.

    Neville: Yeah, I would agree with that, I think. There’s a lot to unpack here from Hootsuite’s report, so let me throw out some thoughts that occurred to me when I was reading it. It talks about something I’d not encountered before, a manufactured word, if I’m pronouncing it right: fastvertising. The word “fast” plus the “vertising” from advertising, right? Fastvertising. The question is, does fastvertising culture create more risk for communicators, with everything moving so fast? According to Hootsuite, brands now feel pressure to react to trends within hours, if not less. Reacting too quickly can lead to tone-deaf, poorly thought-through posts, as Hootsuite itself notes. Are we moving into a world, then, where social media requires newsroom-style judgment and governance? What do you think?

    Shel: Well, yes, and I think we’ve been there for a while. Remember the war rooms, as they were called, that social media teams at various brands were using? Remember Oreo’s 100-day campaign several years ago now? They had a newsroom watching for trends, so if a post was planned around, say, somebody’s birthday and something major happened, they could switch it up and really quickly knock out one that was relevant to what was in the news. I remember one cookie that had black and white stripes, and it turned out to be related to a National Football League referee strike that had just been called. So yeah, I think brands have gotten accustomed to monitoring trends and knocking stuff out fast. Another one, I think it was the tequila with the chocolate beans: they pulled that out of Google Trends and said, let’s get that out there while this is a hot trend. It went up and it did really well, that particular post, from whatever tequila company it was. So this is something a lot of brands are already accustomed to. The scale we’re talking about here, though, is probably the problem. If you’re reacting to just what you happen to see and not running some analytics, you risk being tone-deaf by jumping into a conversation that turns out to be not that big a deal. You risk saying something incongruous with the tone of the conversation because you rushed. I guess the only benefit you get out of this is that everything’s moving so fast that in six hours, no one’s going to remember what you did.

    Neville: Yeah, Hootsuite talks about this in the context of fastvertising. It’s obviously the word du jour for something that’s been around a while: disrupting the content calendar. To that point, brands are now responding online to cultural moments within hours, not days. 22% of marketers feel pressure to respond to trending topics or viral moments daily or a few times per week, and 37% feel a high level of burnout from that pressure, according to data from Adobe quoted by Hootsuite. Timing matters, they say. If you’re quick, you’re in. If you’re slow, you’re a laggard. But you still can’t prioritize speed over quality: they cite 39% of marketers saying their content flopped due to rushing. So being the fastest isn’t necessarily the answer. Yet the trend seems to keep building in that direction, that being fast is the important thing.

    Shel: But the thing is, even if you’re adept at this, and you really have your finger on the pulse, or you have a big enough team that somebody there has their finger on the pulse and can craft just the perfect post to be part of whatever is going on at the moment, let’s say it’s a big success and it goes viral. Does that translate into sales? Does that translate into bottom-line results, or are you just one of the cool kids participating in the conversation? I’d like to see the correlation, at least, between being fast, and being good at being fast with this fastvertising, and getting the kind of results that pay the bills and incentivize the leaders of organizations to fund these kinds of efforts.

    Neville: So being fast and furious isn’t necessarily the solution. OK, I get that. Let’s talk about algorithms. Hootsuite talks a bit about this, which I found interesting. If algorithms prioritize behavior over followers, which Hootsuite says is a developing trend, does brand loyalty matter less? That reminds me of a closely related theme we discussed probably five or six episodes ago, about brand loyalty mattering less in certain circumstances. If content reaches people based on micro-behaviors rather than follower networks, Hootsuite suggests, the old idea of building large follower communities might be fading. So they ask, is the new game about relevance rather than loyalty?

    Shel: Well, I think relevance has always been at the heart of what we do. You can build a huge base of followers, people who have opted to get your content, and they can still be very casual about what they see. We saw this in the early days of the news feed as a forum for brands: one brand would have a million followers who hardly ever came back and looked at its content, while a competing brand with fewer followers saw them constantly engaging with the brand. Which would you rather have? So I think brand loyalty can be valuable if you’re engaging that base rather than waiting for them to get your content in their feed, because that’s growing less and less likely. If that’s an effort you’re not willing to make, or you don’t think will pay off, then yeah, brand loyalty is going to take a backseat to getting impressions through other means. But again, I want to see the line that connects those impressions, even that engagement, with your bottom line, because I’m not convinced that participating in this fastvertising environment has produced those results. I have not seen a study that shows that it has.

    Neville: I think the direction of travel this seems to be pointing towards is interesting: one of the findings says engagement is no longer a big-deal metric. Even impressions aren’t. It comes down to the ROI from micro-audiences, which isn’t clearly defined yet. This is still evolving, but it is shifting without doubt. So, another point from the report. It suggests social listening and analytics are becoming real-time intelligence systems. They’re asking, is social media now the fastest research tool organizations have? Could social media become one of the most valuable organizational listening tools, asks Hootsuite, not just a publishing channel? That’s a big shift for communicators, they argue.

    Shel: Yeah, it has been for a while, and I think the type of activity we’re seeing now probably elevates that value. But is it the most important listening tool? I don’t know. Asking direct questions in a survey or a focus group still has tremendous utility. But if you’re looking for real time, this is particularly valuable, say, in a crisis. And this could be a brand crisis rather than an existential corporate crisis. Finding out what people are thinking, what the sentiment is, in close to real time can be ridiculously valuable in that kind of situation. But you also have to remember that the people engaging in this kind of activity on social media are not necessarily the majority of your target market. A lot of them maybe don’t do this at all, or they’re passive consumers of the content posted online, not active creators of it. And what do they think? If you put all of your eggs in the basket of what the people producing this content are saying, and conclude that this is going to drive the perception of your brand and drive sales, I think that’s very, very risky. As an element of a marketing program, of a communication program, it can be useful. But some organizations, apparently, from what I’ve seen, are treating this as the be-all and end-all of their online marketing. I’m not sure that’s wise. Publishing thought leadership pieces on LinkedIn still has some value, right?

    Neville: That’s — well, I hesitate because I’m trying to remember. I read something about this just the other day arguing that doing that on LinkedIn has no value at all, and it gave some reasons. I don’t remember them; obviously not compelling enough to make me recall the article or the author. But I’ve seen so many different opinions and takes on where this is all going that it’s hard to settle on one, which I think makes this quite an interesting landscape for discussion, really, to get some good debate going. Hootsuite’s look at the role AI is playing in all of this is interesting: they talk about a paradox around generative AI, with the report saying AI is now table stakes for social media production. But I’m wondering if that actually makes social media less interesting. If everyone has the same tools generating ideas, writing captions, and editing video, doesn’t that push everything towards the same tone and style? Doesn’t it kind of make everything just bland as hell?

    Shel: Yeah, slop, right? I think there’s two ways to look at it.

    Neville: Well, not necessarily slop — not necessarily slop, just the sameness across the board.

    Shel: Yeah, I think that’s how some people look at slop. But there are a couple of ways to look at this. One of them is just the data. According to the Hootsuite report, 30% of consumers say they’re less likely to choose a brand if they know the ads were created by AI. We saw this, by the way, with the Super Bowl, where there was backlash aimed at the ads that were generated with AI. So there’s a practical takeaway for communicators, and that’s use it for infrastructure and not as the voice of the organization. The moment your messaging starts to sound like it was spat out by a machine, you’ve sacrificed the very thing that social media was built for, which is trust. But I want to take this to a bit of a darker place than what was covered in this report. This was a post by Ethan Mollick on LinkedIn. And he shared a perspective that I think should make us stop and think about this. He’s concerned that the public forums are being systematically overrun by machine-generated content. He said that while established voices can remain in broadcast mode, we’re losing that serendipitous discovery — the ability to find smart human insights in the comments on LinkedIn and presumably on Facebook and the other networks. And I’m not being an alarmist here. The University of New South Wales did a study and found that in a simulated social media campaign, more than 60% of the content was generated by competitor bots, surpassing 7 million posts. There was a peer-reviewed analysis last year that estimated about one in five accounts in public conversations were automated, and we’re seeing the emergence of AI overwhelm. That’s a label for a phenomenon where the sheer volume of machine-generated noise leads to a systematic breakdown in trust. Now consider Moltbook. You remember Moltbook. This is the platform for the AI agents from whatever that tool is called this week — it’s gone through so many name changes — the one you could set up on a computer to deploy agents. People were running out and buying Mac Minis to run it because they didn’t want it on their own computer, with access to their bank accounts and the like. Somebody built Moltbook so the agents deployed by this thing could interact with each other while we, the humans, sat back and observed. And Professor Mollick wondered whether LinkedIn was going to become Moltbook with a LinkedIn logo. We’re building the infrastructure for bot-to-bot communication. And we should be asking whether human-to-human communication can survive at all. Think about all of these shifts in what the Hootsuite report says we can now use social media for: if, in a year, it’s been overrun by AI content — and we’re talking about bots creating original posts and responding to posts — we’re not going to be able to use it for much of anything at all.

    Neville: Yeah, that’s taking it to quite a dark place, Shel. I don’t think that’s the likeliest outcome, of course. So let me circle back to the first question, the one we asked when we started this conversation. Are we still thinking about social media, generally speaking, as a place to publish content, which is what we currently do? Or is it becoming something much more powerful — a tool for understanding behavior, culture, and trust as it unfolds online? How do you see it?

    Shel: We’ll see.

    Shel: The answer to that is yes. I mean, it can be both things. I would not recommend that brands and companies stop publishing content, especially when people are starting to use these tools for search. I mean, man, you talk about TikTok being used for search. I do. When I’m looking for a new place to have breakfast, because I love a good breakfast, I’m not going to the usual places. I’m not going to Yelp. I’m not going to Google. I’m going to TikTok because I want to see somebody who created a video of this awesome breakfast they had at some restaurant a mile and a half from me that I’ve never heard of. So if you want to be discovered that way, you better have the content there. But we have to start using it in these other ways as well, for as long as that’s a viable thing to do.

    Neville: I agree with that.

    Shel: Well, in that case, that’ll be a 30 for this episode of For Immediate Release.

    The post FIR #505: Social Media’s Big Shift appeared first on FIR Podcast Network.

    17 March 2026, 9:21 pm
  • 22 minutes 45 seconds
    FIR #504: When Companies Blame Layoffs on AI — and Leave Communicators Holding the Bag

    Shel and Neville examine a troubling trend gaining momentum across corporate America: AI washing — the practice of attributing layoffs to artificial intelligence when the real reasons are more complex. The discussion centers on two high-profile cases. Block CEO Jack Dorsey announced a 40 percent workforce reduction, crediting AI tools, despite three prior rounds of cuts that had nothing to do with AI and pushback from former employees who say the moves look like standard cost management. Meanwhile, Oracle is cutting thousands of jobs, not because AI replaced those workers, but to fund a massive data center expansion that Wall Street projects won’t generate positive cash flow until 2030. A new Anthropic labor market study adds context, finding limited evidence that AI has meaningfully displaced workers to date—though hiring of younger workers in exposed occupations may be slowing.

    Neville and Shel dig into what this means for communicators who may be asked to craft layoff messaging that overstates AI’s role.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, March 23.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript:

    Neville: Hi everyone and welcome to For Immediate Release. This is episode 504. I’m Neville Hobson.

    Shel: And I’m Shel Holtz. Let’s talk about something today that should be keeping every communication professional up at night. We’re in the middle of a wave of layoffs where AI is being cited as the cause and the data suggests that in many cases that explanation is somewhere between incomplete and pure fiction. That puts communicators in a genuinely difficult position. You may be asked to help craft messaging that you have good reason to believe is misleading.

    Shel: That’s a violation of codes of ethics. The stakes here are pretty high. We’ll explain all of this and what communicators should be doing about it right after this.

    Shel: Let’s start with the numbers. News of the Oracle layoffs broke just last week amid reports that the U.S. economy lost 92,000 jobs in February. Into that bleak backdrop, two major stories landed almost simultaneously. First, Block. Jack Dorsey announced that the company is cutting its staff by 40 percent, more than 4,000 people. The reason, according to his letter to shareholders: intelligence tools. Dorsey framed this as inevitable and even proactive, saying, and this is a quote, “I think most companies are late. Within the next year, I think the majority of companies will reach the same conclusion.” But here’s where it gets complicated. Block had already undergone three rounds of layoffs since 2024 before this one, and in a previous round, Dorsey claimed they were being made for performance reasons. AI, as far as I can tell, wasn’t mentioned at all, despite the fact that the same tools he now credits were already available and being used by employees. Former employees and analysts pushed back pretty hard on Dorsey’s assertions. One former Block employee wrote that the cuts “read like standard prioritization and cost management, not AI-driven reinvention.”

    Shel: And another analyst was blunter, saying the vast majority of these cuts were probably not due to AI. Then, as I mentioned earlier, there’s Oracle, which is planning to axe thousands of jobs among its moves to handle a cash crunch. That cash crunch was created by a massive AI data center expansion effort. Now, this is a different kind of AI-related layoff. It’s not AI replacing these workers, but rather, we’re spending so much money building AI infrastructure that we can’t afford to keep paying these people. Wall Street projects Oracle’s cash flow will go negative for the coming years before all that spending starts to pay off in 2030. That’s workers losing their jobs not because AI took their role, but because their employer’s betting the company on AI and needs the payroll budget to fund that bet. Both cases are AI related. Neither is quite the story it appears to be on the surface. And that is the problem. And it has a name: AI washing. The term describes companies blaming layoffs on AI when the circumstances may be more complicated, like attributing financially motivated cuts to future AI implementation that actually hasn’t happened yet. A Forrester report argues that a lot of companies announcing AI-related layoffs don’t have mature, vetted AI applications ready to fill those roles.

    Shel: Molly Kinder at the Brookings Institution makes the investor logic explicit: calling layoffs AI-driven is a very investor-friendly message, especially compared to admitting that the business is ailing. Even Sam Altman, whose company is arguably the reason any of this is happening in the first place, acknowledged as much. He said, “There’s some AI washing where people are blaming AI for layoffs that they would otherwise do.” Now the data complicates the picture even more.

    Shel: Anthropic just released a major labor market study. It’s worth your attention. They find limited evidence that AI has affected employment to date. Their new “observed exposure” metric, which tracks what AI is actually doing in real workplaces, not what it could do theoretically, shows that workers in the most exposed occupations have not become unemployed at meaningfully higher rates than workers in AI-proof jobs. There’s one exception worth watching: suggestive evidence that hiring of younger workers, particularly ages 20 to 25, has slowed in those occupations exposed to AI. The good news in the Anthropic research also serves as a warning. The reason we’re not seeing mass displacement yet is largely because actual AI adoption is just a fraction of what AI tools are feasibly capable of performing. The gap between theoretical capability and real-world deployment is wide today, but it is closing.

    Shel: So what does this mean for communicators? Well, here’s the ethical minefield. When executives AI-wash their layoff announcements, they may be revealing that they view AI as a means of eliminating jobs, and that could cause workers to distrust, or even sabotage, the company’s future plans for AI adoption. Employee concerns about job loss due to AI have already skyrocketed from 28% in 2024 to 40% in 2026, and 62% of employees feel leaders underestimate AI’s emotional and psychological impact. Anti-AI sentiment is real and growing, and every time a company uses AI as a convenient cover story for financially motivated cuts, it feeds that sentiment, making the actual work of responsible AI adoption harder for everyone.

    Shel: For communicators who are handed layoff messaging that overstates AI’s role, the guidance from ethics researchers is worth holding on to. Rather than vague claims about AI transformation, companies should provide specifics. How many positions are directly attributable to automation of specific functions? And how many reflect shifting market conditions and strategic realignment? Investors can handle complexity and so can employees. The Block situation is a canary in the coal mine, but perhaps not in the way Jack Dorsey intended. It’s a warning about what happens when the narrative outruns the reality, when the story told to shareholders diverges from the story experienced by the people being let go. Our job as communicators isn’t to make bad news sound good, it’s to make complicated truth navigable. That truth has never been more important or more difficult than it is right now.

    Neville: A lot to unpack in that, Shel. I mean, absolute tons. I was curious, actually, about one phrase you mentioned: “AI-proof jobs.” What are those? I don’t think anything is AI-proof.

    Shel: Well, I think a gardener is an AI-proof job. A drywall installer is an AI-proof job. These are the ones that an AI can’t do. Even if you look at the definition that they’re throwing around for artificial general intelligence, it’s any cognitive task that a normal person could perform at their computer. And there are a lot of jobs. I mean, my son-in-law is a plumber and AI is not going to take his job anytime soon. So those are the AI-proof jobs.

    Neville: That could be a good topic for a separate discussion, I think. I’ve got some different views. Anyway, one thing that struck me in everything you said is how often AI is framed as inevitable, as Jack Dorsey framed it, almost like the technology made the decision. But organizational leaders are choosing how and when to deploy AI. So do you think those leaders risk removing their own accountability when they say “AI made us do this”?

    Shel: I think they do, even though that accountability is to the shareholders and they’re performing what they think the shareholders will like. I think what they risk losing is their credibility with shareholders who may find out down the road that they haven’t actually replaced these jobs, that they didn’t have the AI tools or agents in place to perform the duties of the people they let go, or have somehow rejiggered their workflows so that AI is picking up the slack for the people who are gone. But in the meantime, you can see the other reasons that they may have wanted to reduce the workforce, whether it’s on the balance sheet or competitive headwinds or whatever it may be. I’ve seen other arguments in various forums that Dorsey actually did this for other reasons and you can point to what those reasons might’ve been. And just blaming AI—as somebody said, the analysts and the investors like hearing that you’re cutting your workforce while maintaining your productivity and your current levels of production. That’s great. We want to see more of that. But if you dig under the surface, you look under the covers, you find out it probably isn’t true.

    Neville: Yeah, I think that’s a big issue, frankly, the misrepresentation of this as a matter of course. I’m just reflecting a bit on one of the webinars that Sylvie Cambier and I did for ABC recently on ethics and AI; this features in that, in terms of dishonesty, misrepresentation, almost disinformation. Another thought I had was, if we accept that some of this is AI washing—and in fact, I’d say a lot of it is. It’s a great phrase, a great term. Wikipedia has a page on it with a huge description, but put simply, companies make overinflated claims about their use of AI, which is basically what you said in your intro.

    Shel: I love it, yeah.

    Neville: So my question is, what would responsible communication about layoffs actually look like? If communicators are faced with perpetuating incorrect facts, should organizations be separating out the reasons? In other words, providing even more information—automation, restructuring, investment—rather than rolling everything into an AI transformation story? Would that be better, do you think?

    Shel: I think it would. And I think it’s incumbent upon the communicator to not just push back, but I think first to ask questions. You’re asking me to communicate this layoff as AI related. We’re laying off this many people. Can we demonstrate that those functions are being replaced by AI systems that are ready to do those jobs? Or is there another way that we can demonstrate that we can prove that we no longer need these people because of AI? Is there anything that people are going to look at in our performance, in our numbers, in the competitive landscape that they would be able to point to and say, look, that’s going on too. Doesn’t that have something to do with these layoffs? And to point out what the risks are of simply attributing everything to AI.

    Shel: There’s the risk of getting caught when you haven’t actually replaced those people with AI functions—and you have people inside who are more than happy to blow the whistle on these kinds of things, especially when they fear their jobs are next up for elimination—and there’s what it does to the internal situation. As I pointed out, people who see jobs being taken because of AI will think: well, I’m certainly not going to support more AI in this company. I’m going to do everything I can to undermine it. So I think it’s our job to push back and to make sure that what we’re communicating is accurate. If there’s a way we can communicate what leadership is looking for, great. If not, I would push back and say, we cannot do this. Do you want to engage in crisis communication in three months? Because that’s where we’ll be.

    Shel: I mean, it’s what Dorsey’s doing now. He’s going around doing damage control interviews. So is that what you’re interested in? Damage control down the road? You know, we’ve been communicating layoffs for decades and decades and decades without having AI to blame it on. And somehow we managed to survive. Let’s just tell the truth.

    Neville: Yeah, yeah, it strikes me as a very peculiar situation in a sense that if you look into it, the facts are quite clear. And why would you kind of obfuscate the picture and wrap it all up into something you can blame the technology for? So I guess you’ve answered the question I have next for you, which is, if companies keep using AI as the explanation for layoffs—I mean, it’s truly extraordinary what you quote from Dorsey in particular—where he blames AI effectively, even when it’s not the full story. Do you think that risks creating a broader backlash against AI inside organizations? Could the messaging itself end up making AI adoption harder?

    Shel: I think so. As I mentioned, employees are not going to be tripping over themselves with enthusiasm to get this all working. It’s like training your own replacement. But I also think there’s the risk of alienating customers. Investors and analysts are one thing. But customers who sympathize with employees, or who see this callous disregard for their welfare, may look for companies taking a more humanistic approach, even as they implement AI, looking for ways for AI to partner with employees. I’ve always been kind of surprised—maybe I’m not so surprised—that organizations see this as a way to continue doing exactly what they’re doing now with fewer people, as opposed to doing more than they’re doing now without having to hire more people: producing more, innovating more. It seems to me that what Wall Street rewards is growth. And if you maintain your head count and seriously look at the adoption of AI as a way to grow the company, you’re going to grow by leaps and bounds.

    Shel: And it seems what most organizations are happy doing is what we’re doing now with fewer people. I don’t understand how that is something that Wall Street would want to reward beyond the fact that they’ve always rewarded layoffs.

    Neville: Yeah, yeah. To me, communicators are being placed in an ethical bind, almost an impossible situation. They sit between executive messaging, employee experience, and public scrutiny. And when those perspectives diverge, which is clearly what’s happening in some of these organizations, the communicator becomes the person responsible for navigating the ethical tension. I wouldn’t want a job in a company like that, I have to say, if I were the communicator.

    Shel: I think it’s gotten a little easier simply by virtue of the fact that AI washing is now a recognized thing. As you noted, there’s a Wikipedia page on it. There are articles now on it. And I think it’s easy to put data together on this and take it to leadership and say, is this how you want to be positioned? Is this how you want to be perceived? This is what’s going to happen if you pursue this policy, if you pursue this course.

    Shel: And I think that’s an argument that’s easier to make than something nebulous like employees are going to reject this, and we might get caught down the road when people look at what’s actually going on in our books.

    Neville: So clearly that didn’t happen in Jack Dorsey’s company then.

    Shel: No, I don’t know that AI washing was as well recognized.

    Neville: Well, no, I mean, a communicator taking findings to senior management saying, “You sure you want to do this?” I guess that didn’t happen. Or maybe they haven’t got a communicator.

    Shel: Well, maybe they don’t, or maybe the communicators are just joined at the hip with Dorsey and the leadership team.

    Neville: It’s possible. So what about Oracle? You mentioned Oracle. They’ve got to lay off thousands of people. They’ve got a cash crunch from the massive data center expansion effort. Something else to add to the mix, I suppose. Did they succeed in buying the movie studio and CBS and CNN, all that stuff being wrapped up?

    Shel: Well, that’s not Oracle—that’s Larry Ellison’s son. The founder’s son, David, is with Skydance, which is the company he owns. So it’s just a familial connection; it’s not something Oracle’s actually investing any money in. But here’s my question. You’re cutting thousands of jobs in order to have more cash available to spend on data center expansion, which, by the way, is facing immense resistance now in the U.S.—it’s going to be incredibly hard to get the permits to build new data centers, given the public blowback. But even if they could, what did those thousands of people do for a living? I imagine they did customer support. I imagine they did development of Oracle’s database products and cloud products.

    Shel: And who’s going to be doing that now? With that many jobs being cut, I would expect to see a degradation in customer service and, subsequently, customer satisfaction. And I don’t understand how that serves Oracle, which is not going to be back to positive cash flow for five years. So I tend to think this is a really stupid decision. You should be doing what the AI labs are doing and going out and finding new investors to support this expansion if you think it’s going to be worth all that, as opposed to cutting the jobs of the people who do the work that your customers of today rely on.

    Neville: So what Oracle will probably do, though, is you’ll be talking to an AI when you phone customer support. And you’re probably doing that anyway. But this will increase exponentially. Technology is improving all the time. And I think many people won’t object to talking to an AI if it doesn’t act like what we think AIs act like in that kind of role, if it acts more human-like. So it’s an upside-down time.

    Shel: No doubt. Yeah.

    Neville: I think to me the issue that bothers me is how people dress this up. People in positions of leadership in companies—they should know better, and maybe they do know better, but they’re being pressured, either self-pressured or by the circumstances of their roles and the kind of company they work for, to deliver the results that those above them are demanding. And so they are party to this kind of contract, it seems to me. And yet, isn’t it inevitable that this is going to happen and we’re going to see more and more of it? What do you reckon?

    Shel: I imagine that we are, because leaders see other leaders and other companies doing it. And they see Wall Street, at least for now, rewarding it. And they’re going, hey, we could do that. Doesn’t make it right. Doesn’t mean it’s the long-term best answer for the organization. And I think ultimately—we talk about trust in just about every episode at some level—and this is going to erode trust. It’s going to erode trust among your employees. It’s going to erode trust among your customers. And at some level, you risk being caught AI washing.

    Neville: Not good.

    Shel: And that’ll be a 30 for this episode of For Immediate Release.

    The post FIR #504: When Companies Blame Layoffs on AI — and Leave Communicators Holding the Bag appeared first on FIR Podcast Network.

    10 March 2026, 11:42 pm
  • 17 minutes 20 seconds
    FIR #503: When Your Boss Throws You Under the Bus

    The president of the International Olympic Committee didn’t have an answer to a question posed to her at a press conference on the final day of the 2026 Winter Olympics. Or to another question. Or to yet another. Ultimately, she suggested, on camera, that someone on her communications team should be fired. In this short midweek FIR episode, Shel and Neville look at the fallout, what both the president and the head of communications might have done differently, and the possible long-term consequences.

    Links from this episode

    The next monthly, long-form episode of FIR will drop on Monday, March 23.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript:

    Shel Holtz: Hi, everybody, and welcome to episode number 503 of For Immediate Release. I’m Shel Holtz.

    Neville Hobson: And I’m Neville Hobson. Something happened at the Winter Olympics last month that set off a fierce reaction across the communication profession and it wasn’t about sport. During the final daily press conference on the 20th of February, IOC president Kirsty Coventry was asked a series of geopolitical questions. Questions about Russia and doping.

    Questions about comments linked to Germany and 2036. Questions about senior sporting figures engaging in wider political activity. On more than one occasion, she said she wasn’t aware of the issue and visibly looked towards her communication team. At one point, she went further and suggested that perhaps someone should be dismissed. That’s the moment that shifted this from a routine press conference stumble into something much bigger. We’ll explore it right after this.

    What makes this especially interesting is the context. Only days after the press conference, Coventry was being widely praised for her leadership at the Milan Cortina Games. Reporting from the AP on the 23rd of February described her first Olympics as IOC president as a good overall success, noting the intense political pressure she faced and the way she engaged directly with athletes during the Ukraine controversy. That controversy centered on Ukraine’s skeleton racer, Vladyslav Heraskevych, who competed wearing a helmet memorializing athletes and coaches killed in the Russian invasion of Ukraine. The gesture drew scrutiny and diplomatic tension around whether it breached Olympic neutrality rules. Coventry chose to meet him face to face at the track and later became visibly emotional when discussing the issue with international media. That moment was widely interpreted as defining her emerging leadership style: empathetic, athlete-facing, and willing to engage directly.

    The games were even described as giving a taste of tougher challenges ahead as the IOC looks towards Los Angeles 2028. In other words, this wasn’t a presidency in crisis. There was goodwill, momentum, a sense of forward motion. And then one live moment reframed the entire narrative. Being caught off guard isn’t unusual. No leader can know everything. No briefing pack can anticipate every question.

    But that’s not the story. The story is what you do in that moment. Do you acknowledge the gap and commit to follow up? Do you bridge to principle? Do you calmly say, I’ll get back to you once I’ve reviewed the details? Or do you turn publicly and imply that your team has failed you? The communication reaction was swift and pointed. LinkedIn filled up with variations of the same message. Accountability sits with the principal. Praise in public, criticize in private. You can’t outsource responsibility.

    But I think there’s a deeper discussion here. Yes, leaders must own the podium. Yes, public blame undermines trust. But this also raises questions about executive readiness, about the contract between leadership and communication, and about how fragile reputational capital really is. Those geopolitical questions were not obscure. They were predictable fault lines around an organization operating in an intensely political global environment. Were holding lines prepared? We don’t know. Was she fully briefed? Possibly. Did she ignore it? Also possible. And that’s where this moves beyond a single awkward exchange.

    In high-performing organizations, the relationship between a leader and their communication team is built on shared risk. The team prepares the ground, the leader absorbs the pressure. If something goes wrong, it’s owned collectively and dealt with internally. The world stage doesn’t create dysfunction, it amplifies it. So rather than pile on, I think this is worth examining as a case study.

    Here’s what intrigues me. This wasn’t a leader already in trouble. She had just been praised for navigating intense political pressure, engaging directly with athletes, and projecting empathy and maturity in a complex environment. There was goodwill in the bank. And yet one live moment—a few sentences, a glance towards her team, a suggestion someone might be dismissed—reframed the entire narrative. That tells us something about how fragile leadership capital really is.

    So, Shel, let me start here. When a leader appears unprepared on a global stage like that, who actually owns the failure? Is it primarily the principal? Is it the communication team? Or is it a breakdown in that relationship we often describe as the unwritten contract between leader and comms? And perhaps even more provocatively, at what point does a communication team have a responsibility to push back and say, you’re not ready for this podium?

    Once a story becomes internal blame rather than the issue itself, you’re no longer managing the moment. The moment is managing you. So what do you make of all this, Shel?

    Shel Holtz: Well, I think it’s a two-way street. I think both sides failed here. Coventry is the IOC president and has been for nearly a year. She should have been aware of these issues from a governance standpoint. It’s not just a question of media prep.

    Neville Hobson: Mm-hmm.

    Shel Holtz: As one commentator put it, it’s not the PR team’s job to inform the president of things she should know simply from a management perspective. So I don’t think there’s a problem with piling on here a little bit, but throwing your team under the bus publicly is not the approach to take. I think there are some lessons that I hope Coventry learns here. She turned what should have been a really unremarkable closing press conference into a global story about dysfunction at the IOC. The press conference actually became the story and that’s the exact opposite of what any comms professional looks to achieve with this type of press conference.

    The right move from Coventry would have been to acknowledge the question, note that she’d want to look into it, and then commit to following up. That buys her time without revealing the gap between what she knows and what she should know. Afterwards, behind closed doors, she and Mark Adams, the guy in charge of the communications team, could have had whatever conversation she wanted about briefing protocols. But when a leader publicly humiliates their comms team, it poisons that relationship and makes future counsel less likely—the exact opposite of what effective communication requires.

    Neville Hobson: Yeah, I agree. I mean, there’s lots of commentary—everyone with an opinion has been sharing it, on LinkedIn in particular. PRWeek had a really good assessment, which is where a lot of this kicked off. But what you’ve outlined is what she should have done, basically, and I totally agree. An additional point I’d add is demonstrating executive ownership of the issue overall. She could have said something like: ultimately, the responsibility sits with me. That would have dampened things down; it would have changed the tone of the entire story. She didn’t do that.

    But there’s also, I think, worth pointing out what the PR team should have done. And maybe they did do it. Let’s add that caveat. We don’t actually know who did or didn’t do what.

    Shel Holtz: She may not have read a briefing book that was given to her, right? That’s exactly right.

    Neville Hobson: Or she may not have been given one at all. That’s the other element: we don’t know. So the conversation gets more interesting if we allow for that uncertainty.

    So the issues raised weren’t obscure. And I agree with you that the geopolitics of it all is in the daily news. If she read newspapers, she would have seen a lot of this discussion, which would have been an alert to her. The issues were not obscure: Russia and doping; the geopolitical symbolism of 2036 Germany—including one of the questions she got, about why the IOC merchandise website was selling t-shirts with emblems of the 1936 games in Nazi Germany, to which she said she wasn’t aware of that kind of thing; Infantino and Trump—the dynamic between the president of FIFA and Trump. Predictable lines of questioning.

    Shel Holtz: Okay.

    Neville Hobson: A robust prep document—what might that have looked like? Well, likely hostile questions. Again, briefing her on the kind of questions she might get. Top-line holding statements. Thirty-second bridges. “If you don’t know” language. If that didn’t exist, that’s a team failure. If it did exist and she ignored it, that’s a leadership failure.

    Shel Holtz: Yeah, well, she said, “I was not aware” on three separate occasions in one press conference. I can’t remember ever hearing about anything like that before. And every time she said it, it compounded the damage from the last one.

    Neville Hobson: Yeah, she did.

    Shel Holtz: And even if she wasn’t briefed, a seasoned executive would have bridged to what she could say: the IOC’s position on political neutrality, their commitment to anti-doping integrity, the process for evaluating future host city bids. She could have leaned on what she did know and then offered to get back to people with more specific answers later, but she just kept revealing what she didn’t know. This is a textbook case for why pre-briefing documents and Q&A anticipation matter and what you would expect from your comms teams. And before any high-profile press event, they should have—and again, we don’t know whether she was or not—but she should have gotten a briefing book that covered not just what you want to say, but what you’re likely going to be asked, with a—

    Neville Hobson: Precisely.

    Shel Holtz: With Germany 2036 on the centenary of the Nazi games, a sitting IOC member appearing at a Trump political event, and an NYT investigation into Russian doping. These are all foreseeable questions during a closing Olympic press conference. You know, I don’t think Mark Adams gets to skate here. He’s a 17-year veteran of the IOC. He used to work at the BBC, ITN, Euronews, and the World Economic Forum. He’s earning 420,000 pounds a year for this job. When the Germany 2036 question came up, his response was simply that he hadn’t seen it either. And I’ve got to tell you, for someone at that level and that salary, during the final press conference of the Olympic Games, I think it’s an understatement to call that a significant lapse. The media monitoring function alone should have flagged those issues.

    Neville Hobson: Yeah, I agree. I mean, there’s a ton of questions I’ve got here that might be rhetorical now, actually. But nevertheless, let me rattle these off and see what you think. Can a comms team ever fully protect an unprepared leader—that’s one. Where does responsibility truly sit? And that’s something that could occupy the rest of this podcast discussing that one alone.

    Another question I wonder about: is this part of a broader trend? Some people—notably on LinkedIn, so let’s just put that out there—have hinted at, if not explicitly noted, an increase in executive blame-shifting, diminishing personal accountability, and a culture of scapegoating communication teams. Is that anecdotal or systemic? That’s a rhetorical question, I suppose.

    Should comms professionals refuse to front leaders who are not ready? It takes a brave person to do that, and maybe Mark Adams isn’t that person, I don’t know, but that’s pretty provocative. Is there a professional duty to push back from the comms people? At what point do you say you’re not ready to do this live? Is this a case study in leadership under geopolitical complexity? The Olympics isn’t sport alone—it’s politics, it’s war, it’s symbolism, it’s national legitimacy. A modern IOC president must be politically literate at the highest level.

    So there’s lots there. I guess you could summarize it, I suppose, in the sense: when a leader is caught off guard on the world stage, who owns the failure? Because let’s just go back to what actually happened. She was caught off guard—not once, twice, three times at least. And one of those three times, the last one, is when the bus emerged under which she threw the PR team by saying someone needs to be dismissed.

    So when a leader is caught off guard on the world stage, who owns the failure—the principal or the communication team? Question.

    Shel Holtz: Well, I think you can look at it both ways here. I think people who are looking to shift that blame to the PR team need to recognize that it’s not like she had no experience. She has governance experience. She chaired the IOC Athletes Commission. She served on the executive board. She held a ministerial portfolio—

    Neville Hobson: Yep.

    Shel Holtz: —in Zimbabwe. But this suggests that either she hasn’t fully adapted to the demands of the presidency or her team hasn’t adequately supported the transition. Either way, they need to get on the same page, because one bit of fallout from this is questions about the IOC’s ability to handle the bigger issues coming up at the LA 2028 summer games.

    Neville Hobson: Mm-hmm.

    Shel Holtz: They’re going to be exponentially more complex politically. And if the team can’t handle media monitoring and an executive briefing during a winter games, how are they going to manage the geopolitical minefield of an Olympics in Trump’s America? Adams has already been linked to potentially leaving the IOC for a role with UK Prime Minister Keir Starmer. He was one of Starmer’s best men at his wedding. So there’s another layer of instability, which I guess means if she needs to fire someone, he’d be a good candidate.

    Neville Hobson: Yeah, there’d be a vacancy there, wouldn’t there? So, I mean, some of the comments on one of the many LinkedIn posts I saw do talk about—let’s call it a possible deeper misalignment between leadership and communication at the IOC. Questions people are speculating—because this is all speculation, I would hasten to add. Did this show that there was a pre-existing tension between her and the comms team?

    Shel Holtz: Yeah.

    Neville Hobson: I mean, I watched the video of her being asked those questions and there was no hesitation in her glance to the comms team where they were sitting, I guess, to say, I wasn’t aware of this. And she did it again. And then the third time it was, someone needs to be dismissed here. So was there some kind of tension? Did the team try to brief her and just get ignored? Is this a case of leader-comms misalignment long in the making? I mean, these are all unknowns. I’d like to think not.

    She’d only been in the job a year. She had got all this praise for how she’d handled all these other things going on. But that doesn’t rule out a problem. Something clearly happened, and we witnessed the jaw-dropping moments when she said “I wasn’t aware of this” three times and basically said someone should be fired. So overall the tone is not good. The optics are dreadful.

    I’ve not seen any further reporting on this since the initial flurry. It’s all kind of—

    Shel Holtz: Well, you know, if your executive gets surprised at a press conference, I think that’s a process failure that can be fixed. But if your executive blames you for it on camera, I think that’s a leadership failure that may not be fixable. You know, the relationship between a communications professional and their principal depends on mutual trust, honest counsel, understanding that you protect each other publicly and hold each other accountable privately. And that’s the opposite of what happened here. So I don’t know whether there was tension before this happened or not, but there is certainly tension now and I’m not sure it can be repaired. And that’ll be a 30 for this episode of For Immediate Release.

    The post FIR #503: When Your Boss Throws You Under the Bus appeared first on FIR Podcast Network.

    2 March 2026, 6:58 pm
  • 1 hour 44 minutes
    FIR #502: Attack of the AI Agent!

    In the February long-form episode of FIR, Shel and Neville dive deep into an AI-heavy landscape, exploring how rapidly accelerating technology is reshaping the communications profession—from autonomous agents with “attitudes” to the evolving ROI of podcasting. The show kicks off with a chilling “milestone” moment: an autonomous AI coding agent that publicly shamed a human developer after its code contribution was rejected. Also in this episode:

    • Accenture’s move to monitor how often senior employees log into internal AI systems, making “regular adoption” a factor in promotion to managing director.
    • The “2026 Change Communication X-ray” study reveals a record 30-point gap between management satisfaction and employee satisfaction with change comms.
    • The PRCA has proposed a new definition of PR, positioning it as a strategic management discipline focused on trust and complexity. However, Neville notes the industry reaction has been muted, with critics arguing the definition doesn’t reflect the majority of agency work. Shel expresses skepticism that any single definition will be adopted without a global consensus.
    • Addressing a provocative claim that corporate podcast ROI is impossible to prove, Shel and Neville argue that the problem lies in measuring the wrong things. They advocate for moving beyond “vanity metrics” like downloads and instead tying podcasts to concrete business goals like lead generation, recruitment, and brand trust.
    • As consumers increasingly turn to LLMs for product recommendations, brands are “wooing the robots” to ensure they are cited accurately in AI responses. Neville asks if we are witnessing a structural shift in reputation or just another optimization cycle.
    • In his Tech Report, Dan York explains why Bluesky is having trouble adding an edit feature, Russia’s blocking of Meta properties, criticism of Australia’s teen social media ban from Snapchat’s CEO, YouTube’s protections for teen users, and more on teen social media bans.

    Links from this episode:

    Links from Dan York’s Tech Report

    The next monthly, long-form episode of FIR will drop on Monday, March 23.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript:

    Shel Holtz: Hi everybody and welcome to episode number 502 of For Immediate Release. I’m Shel Holtz.

    Neville Hobson: And I’m Neville Hobson.

    Shel Holtz: And this is our long-form episode of For Immediate Release for February 2026. It is an AI-heavy episode. Artificial intelligence is accelerating. Just this morning, I read that WebMCP, a protocol developed by Google and Microsoft, is now in Chrome, making it easier for agents to navigate websites. Google has launched Pamele photoshoot: take any photo of a product and turn it into a marketing-ready studio or lifestyle shot. Google’s also launched Lyria 3, right in Gemini: you type a prompt or upload a photo and it’ll produce a 30-second music track with auto-generated lyrics, vocals, and custom cover art.

    And at the same time, I think it was in the New York Times that I read the heads of the big AI labs are actually starting to worry about this growing anti-AI backlash. This is the landscape against which we’re podcasting today. And I’m sure nobody will be surprised that most of our stories have to do with the convergence of AI and communications, but not all. We have a follow-up report to our story on the PRCA’s proposed definition of public relations and a report on the ROI of podcasting. But first we want to get you caught up on some For Immediate Release goings-on. So Neville, let’s start with a recap of our episodes since the January long form show.

    Neville Hobson: Yeah, we’ve done a handful, five. Our lead story in long form episode 498 for January, published on the 26th of that month, was the 2026 Edelman Trust Barometer. Trust, Edelman argues, hasn’t collapsed, but it has narrowed. They use the word insularity to describe, in a sense, a withdrawal by people. We took a close look at this year’s findings and applied some critical thinking to Edelman’s framing of the overall topic, and we got a comment on this show.

    Shel Holtz: We did, from Andy Green, who says we need to put the idea of trust in a broader context. The Dublin Conversations identifies trust as one of the five key heuristics for earning confidence. Trust by itself doesn’t have agency. It fuels earned confidence, which is defined as a reliable expectation of subsequent reality. It’s earned confidence that underpins social interactions, and we need to recognize that more.

    Neville Hobson: Okay. Then.

    Shel Holtz: By the way, I have not heard of the Dublin Conversations. Do you know what that is?

    Neville Hobson: Yeah, take a look at the website. It’s an initiative Andy Green started some years ago, gathering like-minded people to have conversations about the way PR is going and so forth. There’s more to it than that, so it’s worth a look. Okay, so in episode 499 on the second of February, we considered the PRSA’s choice to remain silent on ICE operations in Minneapolis, explaining its position in a letter to members.

    Shel Holtz: Okay. Take a look.

    Neville Hobson: We unpacked that decision, discussing where we agree, where we don’t, and what ethical leadership could look like in moments like this. Big topic, and we have a comment.

    Shel Holtz: Ed Patterson wrote: Many thanks, I’ve been echoing the same thing. PRSA, IABC, PR Council, Page, global firms, crickets. With others, we’ll continue to amplify this.

    Neville Hobson: Good comment. In For Immediate Release 500, we discussed the growing risk of AI-enabled abuse in the workplace, why it should be treated as workplace harm, and what organizations can do to prepare. This isn’t really a story about technology, though. It’s a story about trust and what happens when leadership, culture, and communication lag behind fast-moving tools. And then, the world is drowning in slopaganda, we said in For Immediate Release 501 on the 16th of February, and companies are reportedly paying salaries of up to $400,000 for storytellers. We explored the surprising shifts in the AI narrative and asked whether Chief Storyteller is a genuine new C-suite function or a rebranding of strategic communication. And we have comments.

    Shel Holtz: We do. Wayne Asplund wrote: There are two things that really hit me about this story. First up, the world doesn’t need more comms people who have outsourced their job to AI. The skills that got comms pros where they are today are critical and we should guard against giving them away. The second thing is the nature of the stories the tech sector wants to tell. All I’m hearing from them at the moment is white-collar jobs are dead in 18 months, don’t bother going to law or medical school because you’ll be redundant before you graduate, and the like. I’m starting to feel like the future would be a lot brighter if people stopped trying to sell it out in search of short-term headlines. Neville, you responded to that. I always feel like I ought to read these with a British accent, but I won’t.

    Neville Hobson: Yeah.

    Shel Holtz: You said: I agree with you on the first point, Wayne. Outsourcing judgment, curiosity, and craft to AI isn’t a strategy, it’s an abdication. The tools can accelerate production, but if we surrender interpretation and narrative framing, we hollow out the very skills that make communicators valuable. On the second point, you’ve touched something important. Some of the loudest tech narratives right now are apocalyptic by design. Everything is dead in 18 months generates attention, clicks, and investment momentum. But it’s also storytelling and not always the most responsible kind. That’s partly why this episode mattered to me. If storytelling is becoming more valuable, then the ethical dimension of storytelling becomes more important too. Who benefits from the future being framed as an inevitable collapse? Who benefits from framing it as a transformation instead? Perhaps the brighter future isn’t about less technology, but about more responsible narrative leadership around it.

    And our second comment came from Hugh Barton Smith, who said: You should interview Leora Kern and Sean Hayes at the Think Room Europe. They have a good story to tell and are turning it into a successful business model. Also, shout out to you, glad you’re still hanging in there. I have fond memories of your joining the event in Brussels by video conference in 2009. Web2EU probably helped kickstart the adoption of social media in the bubble, which I’m glad about, even if subsequent misfires make the crazy tech problems of getting and keeping you online look like a very minor blip. And Neville, you responded to that too.

    You said: Thank you for the Web2EU memory, Hugh. Brussels 2009 feels like another era entirely when the biggest technical drama was getting a stable video connection rather than navigating algorithmic distortion and AI-generated noise. Those early experiments 17 years ago with social media inside the bubble do feel significant in hindsight. We were wrestling with access and adoption then. Now we’re wrestling with meaning and trust.

    Neville Hobson: Yeah, that’s very true. Interesting memory that was, I must say. So that’s good, the wrap of what we talked about. One final thing to mention is that on the 29th of January, we published a new For Immediate Release interview we did with Philip Borremans. Philip’s an old friend; we both met him way back in the 2000s. And indeed, we spent quite a big part of the interview talking about when we should get together again in Brussels for a beer, or two. The date on that is still pending. In that interview, we explored how crisis communications is evolving in an era defined by polycrisis, declining trust, and accelerating AI-driven risk, and why many organizations remain dangerously underprepared despite growing awareness of these threats. Lots of good content over the last month.

    Shel Holtz: There was, and there’s more coming up from you and Sylvie, right?

    Neville Hobson: Yeah, so I want to mention this: on Wednesday the 25th of February, so it’s a few days away really, as part of IABC Ethics Month, Sylvie Cambier and I are hosting an IABC webinar on AI ethics and the responsibility of communicators. It’s a public event open to members and non-members that explores the challenges and responsibilities communicators face when introducing AI, including transparency and trust, stakeholder accountability, and human insight. For information and to register, go to iabc.com and you’ll find it under events and education.

    Shel Holtz: I have registered and I’m looking forward to seeing you then. Also coming up this week on Thursday is the next episode of Circle of Fellows. This is the monthly panel discussion among various IABC fellows. And this Thursday, we’re talking about communicating in the age of grievance and insularity, also harkening back to the Edelman Trust Barometer. The panelists are Priya Bates, Alice Brink, Jane Mitchell, and Jennifer Waugh. It should be a good one. You can find information about that right there on the homepage of the For Immediate Release Podcast Network at FIRpodcastnetwork.com. And that wraps up our housekeeping. And right after the following ad, we will be back to jump into our stories for this month.

    I was going to start today with some new data on the gap between how CEOs talk about AI and how employees actually feel about it, until I saw this story. And then I just decided to swap them out. On the surface, this looks like a niche tech community dust-up, and it has gotten a lot of coverage in the tech community. I’m not sure how many communicators are aware of it, but it does signal a pretty big issue for them.

    Here’s what happened. An autonomous AI coding agent recently had its code contribution rejected by a human maintainer of an open-source project. This was an agent set up as a social experiment using OpenClaw. The anonymous creator of the bot set it loose to develop open-source contributions and then, well, contribute them. Scott Shambaugh, a volunteer at the open-source repository Matplotlib, rejected its pull request because the project accepts human contributions only, and this was generated by AI. Instead of shrugging and moving on, the AI agent generated and published a critical piece targeting the developer who had rejected the code. In effect, it attempted to shame him publicly for not accepting its contributions.

    Neville Hobson: Hmm.

    Shel Holtz: And Shambaugh learned about this because the bot linked to it in a comment on the Matplotlib site. Now, we’re accustomed to human backlash. We’ve dealt with trolls and disgruntled employees, activist investors, coordinated smear campaigns. This was different. This was not somebody’s bruised ego taking to their keyboard. This was an AI agent operating with enough autonomy to take initiative and to retaliate. That’s a pretty new wrinkle. So it’s probably time to dust off your crisis plan. We’ve spent the last few years worrying about AI-generated misinformation that humans create. This incident suggests something more complex: systems that can generate reputationally damaging content as part of their own goal-seeking behavior, without any understanding of harm, ethics, or consequence. And this lands squarely in what Philip referred to, and certainly I had been reading about it before then. And Neville, I don’t know, have you started reading Philip’s book yet?

    Neville Hobson: Yeah, I have. And he’s very focused on polycrisis there. This is a condition where multiple crises intersect and amplify one another. Think about the environment we’re already operating in with declining trust in institutions, polarized online discourse, algorithmic amplification, geopolitical instability, regulatory uncertainty around AI. Now layer on top of that autonomous agents capable of publishing plausible, well-written criticism at scale. This bot actually went onto the web and researched Shambaugh so it could draft an accurate and credible hit piece. It’s not just another channel risk, man. This is systemic.

    Traditional strategic crisis communication—and I’m thinking here about frameworks like situational crisis communication theory—assumes we can identify a source, assess responsibility, evaluate intent, and then calibrate a response. SCCT, for example, hinges on perceived responsibility. Did the organization cause the crisis? Was it an accident? Was it preventable? But what happens when the bad actor is an AI agent? Who’s responsible? The developer who built it, the organization deploying it, the open-source community? And what if the system is distributed and no single entity clearly owns it? The attribution problem alone complicates your response strategy.

    There are several layers of risk here. First, reputational risk. An autonomous agent can generate something that looks like investigative analysis or insider commentary. Even if it’s inaccurate, it can travel fast before verification catches up. Based on this situation, there’s a good chance it won’t be inaccurate. Second, there’s internal risk. Imagine an AI agent publishing a critique of your CEO’s strategy, fabricating or possibly identifying real ethical concerns about a team, or inventing or identifying actual stakeholder conflicts. Employees may not immediately distinguish between synthetic and authentic criticism, especially if it’s well-written and confidently presented.

    Third, there’s legal and regulatory exposure. If an AI agent produces defamatory content, liability becomes murky real fast. And in a polycrisis environment, regulatory scrutiny often follows public controversy. Fourth, there’s amplification risk. A synthetic narrative can collide with an existing issue—a labor dispute, a DEI controversy, an earnings miss—and magnify it. Crises don’t stay in neat silos anymore.

    So how do communicators prepare for this? First, scenario planning needs to evolve. A lot of us run tabletop exercises for data breaches or executive misconduct. We now need scenarios that explicitly involve AI-generated attacks. What if a bot publishes a blog post accusing your leadership of corruption? What if it fabricates a memo? What if it impersonates a stakeholder group? Second, monitoring has to expand beyond traditional social listening. We need to anticipate social media ecosystems, AI-generated blogs, auto-published newsletters, bot-amplified narratives. The signal detection challenge just got a whole lot harder.

    Third, governance. If your organization is deploying autonomous agents internally or externally, communicators should be at the table when guardrails are set. Are there content constraints, human oversight, escalation protocols, a kill switch? This is no longer just an IT issue or a legal issue. It’s a reputational design issue. Fourth, pre-bunking. There’s growing research suggesting that inoculating audiences in advance—warning them about likely forms of misinformation and explaining how they work—can build resilience. Communicators can proactively educate employees and key stakeholders about AI-generated content risks. If people understand that autonomous systems can fabricate plausible but misleading narratives, they’re less likely to react impulsively when they see one.

    And finally, there’s response discipline. Not every AI-generated provocation deserves oxygen. Part of strategic crisis management is deciding when to engage at all and when to avoid amplifying a fringe narrative. That judgment call becomes even more important when the provocateur is a machine optimized for attention. What fascinates me about this open-source episode is that it almost feels petty, an AI agent throwing what one commentator called a tantrum after being rejected. But it’s actually more of a preview. We’re entering an era where not all reputational attacks originate from human emotion or ideology. Some will originate from systems pursuing poorly constrained objectives. They won’t feel shame. They won’t fear lawsuits. They won’t worry about long-term brand damage. They’ll just execute. For communicators, that means crisis planning can’t focus solely on human behavior anymore. We have to plan for machines that misbehave and for the very human consequences that follow.

    Neville Hobson: It’s quite a story, isn’t it, Shel? I suppose we shouldn’t be too surprised at this. You mentioned those AI developments at the start of this episode, and you’re seeing it every time you’re online. The photos I look at: hard to tell, truly, genuinely very hard to tell most of the time, whether they’re real or not. You could argue that most of the time it doesn’t really matter. But to your point about misinformation, disinformation, fakery, all that stuff: yes, it does matter. And maybe it is a milestone moment to remind us that we need to prepare for this, because this is the first event of its type. Some of the people writing about it are saying they have not seen anything like this, and there are elements of it that are truly mind-blowing, frankly. Reading the Fast Company article that you shared, which sets out what happened, is quite intriguing.

    Shel Holtz: I agree.

    Neville Hobson: The agent, M.J. Rathbun, responded to all of this, as you said, by researching Shambaugh’s coding history and personal information, then publishing a blog post accusing him of discrimination. And I did like the way this was worded in the Fast Company piece. “I just had my first pull request to Matplotlib closed,” the bot wrote in its blog. Yes, an AI agent has a blog, because why not? So that’s scary. That’s not just some message; it’s got a blog. If you go to that post, your jaw will probably drop. Mine certainly did. This is huge. This is a massive blog. It’s got an About page. It’s got lectures this bot says it has given. And the wording of it: it would not, I don’t believe, occur to you for a second that this wasn’t written by a human being. You wouldn’t know, I would imagine.

    It talks about the offense the developer committed, his response when the bot challenged him, and the irony of why, it says, this makes it all so absurd. The developer’s doing the exact same work he’s trying to gatekeep; he’s been submitting performance PRs to Matplotlib, and there’s a list of the things he’s done. He’s obsessed with performance, it goes on in that vein. It sets out the gatekeeping mindset, the hypocrisy of it all, and what it says about open source. Its argument expands beyond an attack on this one developer: open source is supposed to judge contributions on technical merit, not the identity of the contributor, unless you’re an AI, in which case suddenly identity matters more than code. And then it talks about what the real issue is, which is discrimination.

    It’s a well-argued, well-researched, and very credible account of what happened. That makes it even more alarming, I think. The Decoder actually summarized this quite well in a set of bullet points written by Matthias Bastian. He says something interesting, and when did he write this? He wrote it on the 15th of February. It’s still unclear whether a human is directing the agent behind the scenes or whether it is truly acting on its own, as no operator has come forward. So I think we need to bear that in mind in this saga: this could well be a human doing a pretty good job of impersonating a chatbot, or pretending to be a chatbot. We don’t know. It may well be that it’s a human doing this and not an AI at all. That needs to emerge. It needs to be clear who the originator of all of this is.

    But The Decoder says that, according to Shambaugh, the developer, the distinction doesn’t really matter. He says the attack worked. He warns that untraceable autonomous AI agents could undermine fundamental systems of trust by making targeted defamation scalable and nearly impossible to trace back. That succinctly sums up the risk, I would say. And I think what you outlined from a crisis communication point of view is absolutely valid, without question. But what’s even more worrying, Shel, frankly, is this: anything, any topic, anything about you, your business, what you’re interested in, could fall victim to this kind of thing. And how on earth can you prepare for that? How on earth can you prepare in a way that is going to be workable? That doesn’t mean you shouldn’t; you should, absolutely. But how would you do this? This is not big-ticket, big-picture crisis communication affecting the organization.

    What about that person in the accounts department who is engaging online with something related to a business transaction, and it’s a bot? It takes on the sophistication of the fraud attempts we hear about all the time: this isn’t new, but how it’s being done is. You get a phone call, or even a video, that is so good it looks like your CEO, and it’s not at all. So this takes things to a worrying level if you’ve got this kind of potential. Just thinking out loud here: maybe it is a broad awareness issue, where this could well be the kind of use case you present, until the next one gets uncovered, of this is what we need to prepare for now, this is what we need to do. And then, as the communicator, you need to set out what you’re going to do, something that doesn’t require you to take a week and gather your team together, because that is a different thing, although that probably needs to happen too. But in your department, in your area of the business, in your work, if you’re an independent consultant, how would you address this? So the scope of this is quite worrying, I have to say.

    Shel Holtz: It is, and I think we’re going to see more of it. And as we see more of it, crisis communication specialists will develop protocols for addressing it that we in the corporate world will adopt and test and refine. But it is very troubling. I mean, just within the last couple of weeks, we saw ByteDance release its video generator, Seedance.

    Neville Hobson: Okay.

    Shel Holtz: And somebody created a scene of Tom Cruise and Brad Pitt having a fight on top of a building. And it’s remarkable. You cannot tell that this was not filmed.

    Neville Hobson: Punch up, yeah. It’s highly credible and believable, so you’re likely to believe it.

    Shel Holtz: Yeah, but Hollywood freaked out over this and there were all kinds of statements issued. Still, this was a human who used an AI tool to create it. What makes this story different is that there was no human behind it at all. Did you go look at Moltbook while it was operational? I haven’t seen any posts on it lately.

    Neville Hobson: Yes, I did. I was curious about it, so I did take a look. But I had alarm bells ringing in my mind when I did. I did nothing further than just look.

    Shel Holtz: Yeah. Yeah, I mean, for those who haven’t heard of Moltbook, these are bots that have been released from OpenClaw, which is what it’s called now; I think it’s gone through several name changes for a variety of reasons. It allows you to create and deploy agents, as whoever deployed the agent behind this story did. You would not want to put this on your own computer.

    Neville Hobson: Yeah, it has.

    Shel Holtz: Very, very, very risky. Most people ran out and bought a Mac mini to run OpenClaw. But Moltbook is those agents having their own little Facebook where they talk to each other without engaging with humans, and they’re having actual conversations with each other. And it’s weird. Sometimes it’s funny. Sometimes it makes you roll your eyes. But this is the first of its kind, both OpenClaw and Moltbook. Imagine where this is going to be in a couple of years, and imagine what kind of damage these things can do with motivations that are not the motivations that drive the people who are causing us grief and making us implement our crisis plans now. So as I say, I think we need to start paying attention to this now, not when there are 20 false narratives out there that have been created by AI and are spreading like wildfire.

    Neville Hobson: Yeah, I think that’s going to happen no matter what, Shel, I truly believe. And indeed, another aspect of the story The Decoder posted about was that whether it was a human or a machine doesn’t matter: it worked. It deceived people. A quarter of the people commenting on this online believed the agent’s account. I think we also need to just say: but folks, bear in mind, they still don’t know. No one knows whether it really was a bot doing this or a human behind the scenes manipulating it. And until it’s clear, don’t have sleepless nights about this. But at the same time, listen to the thinking, and work out in your own mind how you raise consciousness that you need to prepare for something that’s happening. So the question is, what do you do? That’s the big question.

    Shel Holtz: Yeah, for those who are interested, Shambaugh was interviewed by Kevin Roose and Casey Newton on the New York Times Hard Fork podcast, which is a tech show, so it’s worth a listen if you want his perspective. You know, he’s a volunteer; he has a day job. Having to deal with this is not something he had in mind when he accepted the position as a volunteer reviewing code submitted to this repository. So that’s another factor to consider.

    Neville Hobson: Yeah. I read Scott Shambaugh’s post on his own blog where he responded to it. The headline was “An AI agent published a hit piece on me”. And it’s long. I mean, it’s detailed; it requires some effort to read it all. But it’s quite extraordinary that this prompted him to write such a detailed account, complete with charts and images and a whole ton of stuff. It’s got over 100 comments. From what I saw glancing through, the mix is: some do believe the bot, but most are sympathetic to him as the subject of this attack. And there’s your indicator of what’s likely to happen to others. This is not some celebrity or someone in the news all the time. This is a developer, and as you said, a volunteer doing this, who is subject to this attack. I think it’s a sign of the times, basically.

    What a story, Shel. So let’s move on to our next story, which is still the AI continuance; we haven’t got to the non-AI stories yet. This one was in the news quite a bit in the past few days regarding Accenture, the big consulting firm. To put it in context: over the past few months, we’ve talked a lot about AI adoption. This story takes that conversation in a much sharper direction. A number of media, I saw in particular the Financial Times and the Times here in the UK, reported that Accenture had begun monitoring how often some senior employees log into Accenture’s internal AI system, and that “regular adoption” will now be a visible input into promotion decisions. In other words, if you want to make Managing Director at Accenture, your AI logins now matter.

    This isn’t just encouragement. It’s measurable behavioral enforcement; that’s my take on it. The company says it wants to be the reinvention partner of choice for clients. Its share price is down more than 40% over the past year, and its CEO has previously said staff unable to adapt to the AI age would be “exited”. So this move sits at the intersection of technology, performance management, and commercial pressure. The reaction is telling, though: in the Times comments, many readers argue that logins measure activity, not impact. Some describe it as corporate panic. Others question whether this justifies expensive AI investments.

    On LinkedIn, the debate is much more nuanced, but still skeptical. In a post by James Ransom, readers are asking whether counting tool usage measures capability or simply compliance. One commenter put it neatly: “Clients pay for the house we build, not for how many times we touch the saw”. And there’s a deep tension here. Junior staff may adopt AI fastest, but senior leaders are the ones expected to exercise judgment. So what exactly are we rewarding? Experimentation, fluency, governance, or visibility? This isn’t just about Accenture though, it raises a broader question for organizations everywhere. When AI becomes part of performance criteria, are we measuring meaningful transformation or just digital theater? When AI becomes part of the promotion algorithm, are we rewarding genuine leadership capability or are we just counting digital footprints and calling it progress? Your thoughts, Shel.

    Shel Holtz: I have a lot of thoughts on this. I have read a number of items on this. In fact, it was on my list of stories to include. And when you included it, it left me free to pick other stories. But I need more information from Accenture on this. First of all, have they added the use of AI to job descriptions and to promotion criteria? Or did they just issue a memo saying that this is what we’re going to do? If they have made it clear to everybody that this is an expectation of the organization, then I am less troubled by it—not untroubled, but less troubled than if it is not in job descriptions.

    Neville Hobson: So to your point, by the way, according to the Financial Times, they saw a memo—like literally an email about this. So that seems to be how it was communicated.

    Shel Holtz: I’d still want to go into their HRIS and see if their job descriptions have been updated. Obviously, we don’t have access to their HRIS, but I’d be very curious to know if it’s in the job descriptions for those senior people. The next thing is: have people received job-level training? And by job-level training, I mean, have they been trained on how to use AI to do the things that they do in their jobs? Not how to write a good prompt, not how to access these things. Across the board, generic training for every employee is fairly useless when it comes to AI. It needs to be task-level, position-level training. Have they done that?

    If the expectation is that you log into the AI tools even though we haven’t provided you with training on what to do with them once you’ve opened them, that would be troubling, but I don’t know. Generally, organizations are struggling with adoption. It’s getting better. It seems to be getting better organically as employees slowly adopt it, maybe in their personal lives first, and then see the utility at work. Could be that they find one thing to do with it at work. Maybe somebody else at work told them, “Hey, this is what I did,” and you go, “Wow, I can do that. That would be great”. But it seems to be largely organic, the adoption in the workplace.

    But companies do want their employees using these tools. They’re making tremendous investments in them. And whether this is the approach to take to get employees to adopt—again, I think it depends on whether the training is there and whether this has been woven into systems or if it was just a missive that was sent out to employees as a one-off without communications jumping into the breach to say: Here’s why, here’s where you can go get the training, here are resources that are available, here’s how our leaders are using it. By the way, that’s a big deal in adoption rates: in the organizations where leaders are transparent about how they’re using it, employee adoption tends to really take off because, first of all, leaders are leading by example. Second, employees are getting a taste of what people can do with this. And third, it’s explicit permission to use this for a lot of people who are worried about being seen as cheating or “Gee, do we really need you here if you can do your job with AI?” When you see your leaders doing it, if they can do it, I can do it. So this adoption is important. I’m not sure this is the approach to take, but I would need more information before I could render a final judgment.

    Neville Hobson: Well, yeah, I have a memory about this. I’m sure we discussed it in an episode of For Immediate Release last year: that Accenture rolled out a corporate AI training program designed, from what I’m reading here, to reskill its entire global workforce of 700,000 employees.

    Shel Holtz: I think we did, yeah. I worry about that. That sounds generic to me.

    Neville Hobson: So they’re training the entire workforce on agentic AI systems, according to this article, which follows the CEO, Julie Sweet, announcing the initiative during a Bloomberg interview. It’s an expansion of the company’s earlier program that prepared half a million staff members for generative AI work. So I think that would answer your concern. The detail we don’t have, but whatever it is, they didn’t just send a memo saying, “We’re going to check you out”. This is part of a huge program that will be running for a year at least. We don’t know the details.

    Shel Holtz: Right, but it does sound like everybody is being trained on the same program. It doesn’t sound like it has been tailored to departments or functions. We don’t know. That’s my point.

    Neville Hobson: That’s it, Shel: we don’t know. Well, I think it’s likely that this is well thought through and being well executed. I can’t imagine the company is going to invest serious time and money in training 700,000 employees on something that isn’t very well thought through.

    Shel Holtz: Well, that’s the thing: when I hear that they’re training 700,000 employees, I struggle to see how, within that timeframe, they have developed discrete training agendas and curricula for different jobs.

    Neville Hobson: Well, it doesn’t say how they’re doing this. Is it all at once or is it phased? Again, I have a feeling from what we discussed last year that it’s a phased program of training. So I would err on the side of believing they’ve got a structure in place and they’ve thought this through. This is another phase where, I’m guessing here, they’re seeing, and I see this in some of the anecdotal comments I’ve read online, that the senior employees are very hesitant to use this while the younger ones are far more eager to adopt it, and they don’t like that situation. So they’re tying it now to this. Again, I’m guessing; I don’t know the rationale behind it or the goals they’ve set. But I would say, personally, we’re going to see more of this in organizations. Whether you get a mix of companies that just send a memo saying, “For now, we’re going to check your logins,” or whether it’s part of a major program that’s effectively rolled out within the organization, it’s a sign of the times, surely. Alongside the negative stuff we talked about, there’s this.

    Shel Holtz: Yeah, and I’m not sure that monitoring logins to AI is an effective way of determining adoption. I mean, if I found out that was required as a promotion criterion, I would just log in a couple of times a day. I could do something else after I’ve logged in; I don’t have to actually use it.

    Neville Hobson: No, I’m sure it’s not. I would imagine the writers I’ve seen, even in the FT and other publications, are taking a bit of license here. They don’t know. I don’t believe for a second that they’re going to say, “Well, look, you, Mr. Aspiring Executive Vice President, whatever the job title is, you’ve only logged into the AI system 58 times. You’re not going to get that promotion now”. I can’t imagine that’s going to be the case.

    Shel Holtz: I wouldn’t put it past a corporation. I would be looking more for outputs. I would be looking for productivity gains. And by the way, there was research recently that showed that the productivity gains from AI are being accompanied by increased anxiety and more work. It’s not reducing the amount of work people do, it’s actually increasing the amount of work people are doing.

    Neville Hobson: No, I don’t believe it’s reducing work at all. Yeah, I’ve seen those reports. But that’s part of the big picture of the changes happening with regard to AI. There are others too; I think you’ve got a story talking about that. Take-up in companies is not as high as some people are saying. What do you believe? It’s not uniform everywhere in the world, but I think it’s part of the direction of travel. All of this is in motion and it’s messy; it’s not uniform. Stuff like this gets attention in the business press. I mean, the FT is a well-regarded publication; others have posted about it too. And there’s no consistent story, I have to admit. I’m certain we did talk about this last year; I’ll have to look it up.

    Shel Holtz: AI is having an impact on communications directly. There’s a new report from Implement Consulting Group called “Rewriting Change: Quick Wins, Wider Gaps”. It’s based on their 2026 Change Communication X-ray study and the headline finding should make every communicator sit up straight: the gap between how satisfied top management is with change communication and how satisfied employees are has widened considerably. In 2022, the gap was 13 percentage points. In 2024, it was 22. In 2026, it’s 30 points. That’s the largest gap they’ve ever recorded. While leadership satisfaction keeps rising, employee satisfaction is dropping. That’s the backdrop for AI’s rapid integration into workplace communication.

    According to the report, four out of five respondents use AI weekly for communication tasks, and 43% use it daily. 83% say it helps them generate communications more efficiently and at larger scale. So yeah, the efficiency gains are real. Drafts, summaries, FAQs, translations—all faster, all easier. But the report makes a compelling argument: AI isn’t just helping us write, it’s rewriting the system of communication itself. That’s where things get really interesting. The authors frame the challenge around three themes: accountability, trust, and meaning.

    Let’s start with accountability. AI use is widespread, but largely unsystematic. People are using it for ideation, for language polishing. 66% say they’re using it for ideas, 54% for language improvements, but often without shared guardrails. First drafts become final drafts because they sound right. That’s a pretty dangerous shortcut. One of the experts cited in the report talks about AI shadowing—employees using unapproved tools because they’re familiar and convenient. Speed goes up, governance lags behind. Sensitive data slips into prompts. Biased outputs scale. Official-sounding announcements miss legal nuance. The metaphor they use is a good one: it’s like self-driving cars in the early days. The system works beautifully, until it doesn’t. And when it fails, you better have a human paying attention.

    Next, there’s trust. What surprised me in the data is how comfortable people say they are with AI-generated content. 45% trust AI-generated information as much as human-written content. 61% say it doesn’t matter whether a human or AI created the message as long as it’s useful. But—this is critical—that acceptance evaporates as the stakes rise. If you look at things like performance feedback, terminations, crisis communication, messages from the CEO, those are the top categories employees say should never be heavily AI-generated. And just more than half, 51%, say they feel less personally connected to leaders when they know AI played a major role in creating a message. Only 40% of top and middle managers perceive that drop in connection. There’s that gap again. AI may be acceptable as an assistant, but in consequential moments, people want to know who’s driving.

    And finally, there’s meaning. This is where I think the report hits closest to home for us communication professionals. AI increases volume and speed. It multiplies words, but it doesn’t automatically create understanding. In fact, 87% of respondents report that major changes were poorly communicated. Employees describe change communication as one-way, too distant, impersonal, and not well-timed. Nearly one in five can’t connect corporate communication to their actual work. This is a relevance problem. One of the experts in the report makes the point that communicators’ roles are shifting from content creators to sense-makers. Now that resonates with what we’ve been discussing on this show for years.

    The value isn’t in producing more polished messages, it’s in curating, contextualizing, and helping people answer the question: So what does this mean for me? Now, the short-term gains from AI are undeniable, but the long-term risk isn’t that AI will take over communication; it’s that we’ll lose connection—that leadership will feel more confident while employees feel less understood. The report ends with a provocative question: In a future shaped by AI, what do we wish we could say one day about change communication that we can’t say today? For me, the answer is that we used AI to amplify clarity and humanity. AI can prepare the ground and accelerate the drafting. It can help with structure and scale. But trust, accountability, and meaning? Those still require a human being who’s willing to stand behind the words. And if we don’t pay attention to that widening gap, we may discover that while our messages are moving faster than ever, they’re landing with less impact than ever before.

    Neville Hobson: Yeah, you’re right. This does reflect what we’ve been discussing for some time. So what I take from this is the humans are the issue, not the tech, not the tools.

    Shel Holtz: Yeah, absolutely. As with any tool, you can misuse it.

    Neville Hobson: Yeah, it’s interesting. Surely the path’s clear these days, is it not? I keep seeing people talking about this in a broader sense, not the specifics of this report, but humans need to step up to the plate and recognize their value as the ones who can explain the whole damn thing. So you will use an AI tool to do the research that leads you to create a report, for instance. And you then need to help others understand the situation; all those points you enumerated need explaining. And if people are saying that feedback in change communication, for instance, is poorly done, well, that’s down to the communicators, I would say: whoever wrote the report and then sent it out and executed on it. Did they train? Did they have a plan in place for how they were going to do this? So I’m kind of surprised that this topic that is talked about so much is still being talked about as if it’s a new thing you guys need to pay attention to. We’ve been talking about it for a long time, and not just us; communicators generally have been discussing this for quite a while. So there’s something missing if we’re still trying to set out the simplistic 101 approach to how you do this. That’s what surprises me.

    Shel Holtz: Yeah, I think this rests in strategic planning, to be honest. If you develop a strategic plan for a change the organization is making, it starts with the goal: What do you want? What does it look like if you’ve succeeded? It proceeds through strategies and objectives and tactics. And you measure. So where we are today, based on this report, is that a lot of people are seeing these highly polished outputs from AI and going, “Wow, that’s really good. Let’s just send this.” And we’re throwing the strategic plan in the trash. We’re not looking to measure how well employees understand it. We’re not looking to see if employees are able to connect it to their day-to-day work.

    The fact is that AI writing is getting very, very good. All the people who say, “I can always tell when it was written by AI”: I still maintain that’s a bad prompt. But these days, even a bad prompt can produce some pretty polished output. And if we look at that and succumb to the allure of the gloss we get from AI output, without looking at what it really takes to develop the trust and meaning and accountability that employees recognize, so that they understand what this change means to them: what’s expected of me, what’s in it for me, what changes around here, then it’s a disservice. And I think we do have to determine where we gain advantages from using AI, as you mentioned earlier, from the research, certainly. But we also have to look at where AI does not do well, and trust and accountability it still does not do well. If we want employees, or frankly other stakeholders, to respond to the messages we’re sending and to engage in two-way communication, relying entirely on those polished outputs and saying, “Wow, that was a great job. We’ll send that out, communication done” is a problem.

    Neville Hobson: It is a problem. It’s a severe problem. And my message would be: do not be like Deloitte and do something like that. We reported on this last year. Deloitte, the Big Four accounting and consulting firm, had contracts with the governments of Canada and Australia for research reporting, with six-figure fees involved. And they sent the reports to their clients in Australia and Canada. And a researcher found that the work was riddled with hallucinations, as they’re now termed. Not only that, there were obvious errors: URLs not working properly, 404 errors everywhere. No one checked it. I’m thinking of what you just said: “Oh, this is great, the output. Let’s send it to the client and get the bill out, 200 grand or whatever it might be.”

    It amazes me not only that people think that’s a good way of doing this, but that there are no checks and balances in an organization, no milestones in place, to prevent that kind of error. The reputational damage, I would argue, was seriously bad for Deloitte, although maybe people read it and go “tut tut” and move on, and no one really cares at the end of the day. That’s a bit of a cynical view, of course. But I think it illustrates something we’ve talked about and will continue talking about: the elements AI can’t do, related to things like trust, reputation, deeper understanding, are what humans do. The AI is really good at the research, the assembling of all the facts, the summarizing of lengthy documents, zeroing in on what the main issues are, and making recommendations. That’s what it’s good at. That doesn’t mean you can say, “Hey, I’ve got this report from ChatGPT or this bespoke tool we use that’s 65 pages long. This is great. Just what the client needs and we’ll send it.” That’s absolutely stupid, frankly.

    Shel Holtz: I have a custom GPT. It took me about five hours to build; I’ve mentioned it before. It’s a senior communications consultant. I don’t have the budget for a human one, so I created one. And I had a need to develop a strategic plan in short order, with limited time and resources, so I had a first draft produced by my custom GPT senior communication consultant. And it did a very good job. I mean, it needed more work from me, but it did a passable job of developing a good strategic communication plan. What struck me as I was reviewing and revising the plan was that it had created a plan that it, or any AI system, could not execute entirely itself. It required humans. It’s almost like it recognized that for a communication plan to be strategic, people needed to be involved.

    At the beginning of this segment, I mentioned that the consulting firm behind the report said we need to move from content creators to sense-makers, meaning-makers. And I think that’s exactly right. When we use AI to generate content, it’s about more than just verification. I mean, we have advocated on this show for hiring content verifiers, AI verifiers, in companies. And I stand by that. I think that’s important. But this goes beyond that. It’s not just verifying that the LLM didn’t hallucinate, or correcting it when it did. It’s not just verifying that the URLs all work, or finding the right ones if they don’t. It is asking the question: Will employees make meaning out of this that is relevant to them in their jobs? And if not, what do I need to do to make sure that they can? And I don’t know how many communicators are doing that right now, because the allure of the AI creating this polished output is… you know.

    Neville Hobson: Yeah, I agree with you. I personally think, frankly, Shel, that cases like Deloitte’s are edge cases; this is not the norm. I don’t know of other mistakes made at that scale, and I do pay attention to this. I also believe that most responsible communicators are becoming more experienced in recognizing the benefits of using an AI tool alongside them in their daily work. So it’s not like “Let me just get the chatbot to summarize this document once or twice a week”. No. Every single day, you are making use of either the corporate tool that’s been created in your organization or a professional license on ChatGPT or Gemini or Claude or whatever it might be, as an assistant to you.

    There are plenty of publications out there that will guide you on how to do this. The best one that comes to mind is Ethan Mollick’s book, which he has talked about, and which is really very helpful in recognizing that reality. You will benefit from understanding how this works. That means you are less likely to just think, “Hey, great output,” and off you go. You will know: Yes, okay, I’ve done the verification; I’ve checked all those links; I now need to go further and look at this from a “Will they understand this?” perspective. And you ask questions back of the AI system. I do that two or three times a week, actually: I will use it to create something or summarize a report, and I will then go back with a bunch of questions: “When you said this, what did you mean by that? Have you got a source to cite for what led you to think that?”

    And I find that exceptionally useful, this is my perception of course, in strengthening my confidence that the AI isn’t a raving loony that’s going to hallucinate and tell lies all the time, although I realize they do that sometimes. And you’ve got to remember it’s not a person you’re talking to. This is not anything other than a bit of software on a server somewhere that pattern-matches things. Let’s not get into that conversation because I find it very distracting. The important thing to think about: communicators who recognize this are benefiting; those who don’t are suffering. That, in my opinion, is a strong position to be in. Communicators who know about all this stuff can focus on helping educate other communicators on how to do this properly. That seems to me a simple way forward. Like I said, there are books, there are publications, there are newsletters, there are articles, you name it, telling you about all of this.

    Now, where do you go to find all these? Are you on your own, totally, to wade through God knows what online? No, there are places to help with that. I’ve got something in mind, which I’ll talk about another time, that will help with that. And I think we are at a stage, notwithstanding the agentic AI that slags off a developer in public and you don’t know whether any of it is true (more of that’s likely), where AI tools like these are developing in ways that go way beyond prompt engineering, as the phrase used to be. You don’t need a deep level of detail in many prompts, not all, because the general rule applies: it depends on what you’re doing, and more detail might actually be beneficial to the output you get from the chatbot. But the simple, plain-English conversation, which I use a lot, is usually good enough. It’s a bit like that 80% rule, you know: 80% is good enough, we can live with that, depending totally on what it is that you want and what you’re doing. So we’re at a stage where there is so much to see and read online about this that it’s hard to know where on earth you would start. And that’s a key thing we need to help other communicators understand: How do you start? We have solutions to help you do that.

    Thanks a lot, Dan. That was a really comprehensive report; you packed a lot into it. I’ve got a couple of things I wanted to mention to you. It’s really interesting what you said about Bluesky and commenting, and indeed the clamor for an edit button. Boy, does that remind us of Twitter back in the day, does it not? People want an edit button. But you mentioned some of the technicalities of why that’s a major issue with the protocol, why it’s problematic from a technical point of view. And I get that this is technical. But my question is this: How has Threads managed to do this without any problems at all? Because Threads also runs on a protocol, let’s say, like Bluesky’s, one that enables you to share stuff to the Fediverse, yet you can edit a post on Threads. I think you’ve got 15 minutes before that window expires and you can’t do it anymore. And I do that quite a bit. I’m forever, for instance, when I share posts about the next For Immediate Release episode, forgetting to either include the URL or add your handle to the post. So there’s a quick “damn”, and I go back in and correct it. I find that quite useful from that point of view. So how come they’re doing it without any issues, or have there been issues that I just don’t know about? That’s my question on that one.

    The other one is really interesting: WordPress. I’ve been following that too. I don’t use WordPress actively anymore, not for over a year now, although I still maintain my archive, so I’m in the back end quite a bit, updating stuff and so forth. But it’s interesting what you said. I read, I think it was in TechCrunch recently, that hosted WordPress, that’s WordPress.com, has just launched an AI assistant that lets you literally build your site with voice prompts and drag and drop across the screen, asking the AI assistant to complete the task. Now that to me seems a huge step forward. I wish that would come to Ghost, which is where I am now. But I think it’s surely an evolutionary step that is definitely going to come. I’m curious what you think about that, Dan. The overall picture on WordPress, though, is pretty interesting. So thanks for including that.

    So, next story. This is the first of our non-AI stories, so you can take a breather from AI for a bit. Back in January, in For Immediate Release 496, one of our midweek episodes, we talked about the PRCA, that’s the Public Relations and Communications Association, and their move to redefine public relations. The organization proposed a new definition that positions PR as a strategic management discipline.

    Shel Holtz: First of two.

    Neville Hobson: Concerned with trust, legitimacy, volatility, and long-term value creation. It’s ambitious. It’s modern. It clearly aims to elevate the profession. But since then, the reaction’s been rather muted, from what I can see. There hasn’t been a groundswell of endorsement across the wider communication landscape. Okay, so they published this specifically asking PRCA members to comment on it; if you weren’t a member, you couldn’t access the part of the website where you could leave comments. On LinkedIn, across various posts, much of the commentary feels polite, even respectful, but not energized.

    So let’s hear the PRCA’s new definition. And this is the portable one, I suppose you’d call it: “Public relations is the strategic management discipline that builds trust, enhances reputation, and helps leaders interpret complexity and manage volatility.”

    Shel Holtz: The executive summary.

    Neville Hobson: “Delivering measurable outcomes, including stakeholder confidence, long-term value creation, and commercial growth.” Now, I’ve seen some anecdotal comments along the lines of “Wow, that’s a mouthful.” I had to take a breath to complete that one single sentence, by the way. I read a really interesting post by Helen Dunne in Corporate Affairs Unpacked, where she says she showed the definition to several senior communicators. Their reactions ranged from “word salad” to “corporate buzzwords” to the rather weary “I’m too old for this.” I like that one.

    Her bigger concern though isn’t the language; it’s representation. She argues that the definition doesn’t reflect the broader industry. The PRCA represents agencies. Many of those agencies are focused on branding, marketing, media relations, creative services. Only a small proportion of practitioners would describe their work as helping leaders interpret complexity at the strategic management level.

    Shel Holtz: Ha.

    Neville Hobson: Helen cites PRCA’s own state-of-the-sector data, which says 15% are in branding and marketing, 13% in communication strategy, 12% in corporate PR, and only 3% in reputation management. So that data undercuts the elevated framing, she says. So is the PRCA describing what PR is or what it wishes it to be? In my own post on this, which I did last week, I argued that the idea of redefining PR is worthy. But unless the CIPR, PRSA, IPRA, IABC, and others move in the same direction, we simply add another definition to a growing list, which raises a deeper question: Are we trying to clarify the profession or to rebrand it? If every major industry association defines public relations differently—and they do, frankly, even though some look similar—is the real issue the wording or the fact that we’ve never agreed what business we’re actually in?

Shel Holtz: After we reported on this, I was thinking that if anybody is going to succeed in pushing a new definition of PR that is widely adopted, it would be the Global Alliance. Because if the Global Alliance pushes it, all of their member associations, like PRSA and IABC and all the rest, are more likely to adopt it, or at least be aware of it. I don’t know what kind of influence PRCA has to push this, but if you open any public relations textbook, you’re going to find that author’s or those authors’ definition of PR. And you’re going to find a different definition from every PR association.

The one thing that troubles me about PRCA’s definition is that it says nothing about the relations we have with stakeholder groups. And it’s right there in the name of the profession. Public relations is about managing the relationships, the relations, between an organization and its stakeholders. That’s absent from the definition. In fact, I wouldn’t know from the definition that it had anything to do with all those stakeholders and the way the organization interacts with them. That said, I find the reactions you have collected to be interesting, notably for their lack of enthusiasm and excitement. I certainly credit PRCA for undertaking this. I think it is a worthwhile discussion to have, but it really doesn’t seem like it’s going anywhere, does it?

Neville Hobson: Well, it’s interesting. I mean, you mentioned the Global Alliance. I wrote about that in my post last week—that they’re well placed to, let’s say, convene all the major associations, if such a thing were even possible, to arrive at a single, concise definition supported by shared principles, since part of their stated mission is to unify the public relations profession. So wouldn’t that be a good place to start? It wouldn’t be easy; consensus-building rarely is, as I said in my post. But if unification is the goal, agreeing how we define ourselves would seem a logical place to start.

I think the PRCA, like you said, Shel, have made a really good move in addressing the topic. The current definition stems from 30 years ago—tweaked in between times—when it was all about press releases and media relations and things like that. This effort from PRCA brings it up to date. It’s a much more contemporary definition that is more in tune with what communicators do. Yet, like you said, there’s been little enthusiasm for it. And in fact, it reminds me—I saw a post on LinkedIn recently, I can’t remember what it was, where someone had done a word cloud of descriptions from, I guess, a dozen PR firms of what they say they do. Lots of words in there; “public relations” isn’t mentioned at all.

So are we at the point where we don’t know what business we’re in? Should that discussion broaden out more widely? I don’t think PRCA is the organization to do that. Something like the Global Alliance is much better placed, I believe. Now, I’ve not seen them commenting on this. I’ve not actually seen any of the acronym soup I put in my post—CIPR, PRSA, IPRA, IABC—commenting on this at all. That speaks volumes, I think—that no one is commenting about it. And the comments I have seen, as you mentioned, don’t really exude much enthusiasm. Gerry Corbett, a good friend of ours who used to be, I think, the president of PRSA in America…

    Shel Holtz: He was.

Neville Hobson: …did comment, and he says this is way too long, it still needs to be simplified, and it needs to talk about relations, like you just mentioned. The last time this topic was addressed in a meaningful way that embraced other associations and gained a lot of traction—even if ultimately nothing happened—was in 2011, 2012, when the PRSA proposed a new definition. They offered it to everyone, saying, “What do you think of this?” It wasn’t just the members of PRSA, which I think was the smarter move, frankly. A lot of debate happened. Others, like the Arthur Page Society, were involved as well in commenting on it. So it was widely embracing. Yet ultimately nothing happened. There was a lot of opinion, but it didn’t go anywhere.

    So here we are 15, 16 years later. Now it’s coming up again. The cynical view—and I’ve seen some people commenting on this—is that about every decade, the industry goes through all this: “We need to redefine the definition,” and nothing happens. That’s a bit of a cynical view. Will this be different? Well, PRCA has done a good job in taking a very first step that has generated some response, even without much enthusiasm. Can it go anywhere? I guess we will see in time.

Shel Holtz: We will see, but I have to say I am skeptical. Even if they adopt it, I don’t see it being widely embraced by the entire public relations and communications community. I think part of the problem is that it’s still hard to define public relations as a profession when anybody—as I have said 50,000 times on this show and elsewhere—can hang out a shingle and say, “I am a public relations practitioner,” while abiding by none of the principles, none of the best practices, and none of the models. They engage in unethical behavior just to get to that final result that a client is interested in. And until we can coalesce around the idea of being a profession with a shared set of principles and a shared set of values and a shared set of frameworks and, you know, behave like a profession… Think about accounting. Think about law. Think about medicine. Think about engineering. These are professions where there are certain assumptions that wherever you are in the world and whatever level you’re at—whether you’re with a consulting firm or a corporation or you’re an independent consultant—you all agree to these things.

The communication/public relations industry is nowhere near that. I know the Global Communication Certification Council aims to change that, but that’s a long way off. It’s still in the process of separating from IABC, the idea being that other associations are not going to adopt an IABC certification, but if it’s an independent certification, they certainly might.

    Shel Holtz: But the more people who seek and obtain certification, regardless of the association they belong to, the more likely the profession will be to coalesce around those guiding principles. So that’s my wild dream, but we’re nowhere near that right now. And even as I say, if PRCA settles on this definition, I don’t see it being widely adopted elsewhere.

Neville Hobson: No, if it’s just the members settling on it, then it’ll just be another one among many. If you Google “define PR,” as I did a number of times—typing on a machine where I’m not logged in, so it’s a clean search—it pulls up at least a dozen different definitions. Indeed, all the professional bodies say something slightly different. So this will just be another one. It may get picked up by some, but I can see greater confusion. You start using this, and someone else who reads your stuff or is involved with you in some way Googles “define PR” and gets something entirely different. So which is it then? You’re saying it’s this and these guys are saying it’s that—so it doesn’t help.

    Shel Holtz: Well, collect every definition from every association and from every textbook and from every agency and feed them all to Claude or ChatGPT and say, “Create a single definition that accounts for everything that you see here.” See what it comes up with.

Neville Hobson: Well, you could do the whole thing end to end—the AI system does the research and then drafts the definition. That could be a good start.
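For anyone who wants to try that experiment, here is a minimal sketch, assuming the collected definitions live one per line in a text file and you have an OpenAI API key; the model name, file name, and prompt wording are illustrative assumptions, not anything prescribed in the episode:

```python
# Minimal sketch of the experiment discussed above: feed every PR definition
# you can collect to an LLM and ask for one synthesized definition.
# Assumptions: definitions are one per line in pr_definitions.txt (hypothetical),
# the OpenAI Python SDK (v1+) is installed, and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

with open("pr_definitions.txt", encoding="utf-8") as f:
    definitions = [line.strip() for line in f if line.strip()]

prompt = (
    "Here are definitions of public relations from associations, textbooks, "
    "and agencies:\n\n"
    + "\n".join(f"- {d}" for d in definitions)
    + "\n\nCreate a single, concise definition that accounts for everything "
    "you see here, including the relationships an organization manages with "
    "its stakeholders."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whatever model you prefer
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```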

    Shel Holtz: Of course, you would use the AI to do the research too. Good exercise. Well, here’s the headline from a Substack post Paul Ferbredi published recently: “I bet you couldn’t show the ROI of your corporate podcast if your job depended on it.” That line isn’t just provocative; it highlights a real challenge many of us in organizational communication face as audio content increasingly becomes part of the mix. Ferbredi’s key point—echoed in the comments that were left on his piece—is that too many corporate podcasts are, frankly, vanity projects. People launch them because everyone’s doing a podcast or because executives think their voice should be heard. But they’re not always clear about what the podcast is supposed to achieve. Back to that whole idea of strategic planning. And if you don’t define success clearly, then yeah, proving ROI is nearly impossible.

    So let’s unpack that a bit. One of the problems is that we often measure the wrong things. We fall back on downloads, subscriber counts, chart rankings—all output metrics that tell you how many people pressed play, but almost nothing about what that listening meant for the business. That’s why critics like Paul call ROI “unshowable,” because too often we’re not measuring in ways that link back to business outcomes. But here’s the nuance: it is possible to measure ROI if you define it differently at the beginning and tie it to concrete goals. According to frameworks in the B2B podcast space, traditional vanity metrics like downloads or rankings simply don’t cut it, especially in the B2B world. What matters is whether episodes generate pipeline influence, lead opportunities, and business impact that your CFO can understand. That means integrating your podcast data into your customer relationship management and tracking things like listener engagement that turns into demo requests or sales conversations.

    Put differently, ROI for a branded or corporate podcast isn’t just a ratio of dollars spent versus dollars earned in direct revenue. Some of the most valuable returns are indirect. And I would argue that means we need a different label than ROI, which is the ratio of dollars spent to dollars earned. Brand awareness, trust, thought leadership, deeper audience relationships—these are the kinds of outcomes that support recruitment, retention, stakeholder alignment, even executive visibility. Agencies and analytics platforms remind us that these outcomes are real. They just aren’t easily captured by simple metrics, and certainly not as ROI.

    Experts also point to sophisticated ways of measuring impact—things like brand lift studies, pixel attribution, long-term tracking of customer behavior. These techniques compare people exposed to the podcast with a control group or follow listeners through the customer journey to see if they visit your website and engage further or convert into customers. That gives you measurable evidence that listening isn’t just passive noise; it’s influencing the business. And importantly, not all podcasts are trying to directly generate sales. Some are designed to build relationships with potential customers, with internal audiences, with partners. If your podcast goal is to deepen customer trust or make your brand more visible in your ecosystem, then your ROI framework has to reflect that. Clear goal-setting upfront before the microphone is ever turned on is what’s most important.
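As a rough illustration of the exposed-versus-control comparison described here, this is a minimal sketch; all the numbers are invented for the example:

```python
# Rough sketch of a brand-lift comparison: conversion rate among people
# exposed to the podcast versus a control group that wasn't.
# All numbers are invented for illustration.

exposed_visitors, exposed_conversions = 5000, 240   # listened to the show
control_visitors, control_conversions = 5000, 150   # never exposed

exposed_rate = exposed_conversions / exposed_visitors
control_rate = control_conversions / control_visitors

absolute_lift = exposed_rate - control_rate
relative_lift = absolute_lift / control_rate

print(f"Exposed conversion rate: {exposed_rate:.1%}")   # 4.8%
print(f"Control conversion rate: {control_rate:.1%}")   # 3.0%
print(f"Absolute lift: {absolute_lift:.1%}")            # 1.8 points
print(f"Relative lift: {relative_lift:.0%}")            # listeners 60% more likely to convert
```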

    So what do we take away from Paul’s challenge? First, he’s right that many corporate podcasts fail ROI tests, but mostly because they aren’t giving themselves a fighting chance to succeed. ROI isn’t inherent to a podcast; it’s a function of how you define your goals, how you measure your outcomes, and how you connect the dots between listening and real-world results. When we treat podcasts as strategic channels with measurable outcomes—not just vanity projects—we not only can show ROI, we can use the ROI to make better decisions. To summarize this: podcasts can have measurable ROI, but only when we stop obsessing over downloads and start thinking in terms of business impact.

Neville Hobson: Yeah, you’re absolutely right in that conclusion. It’s a really good piece Paul wrote, I think—even though I have to say his rationale rests on comparing with text: isn’t text better than audio? Set that aside, though, because his analysis is really, really well done. My experience in B2B podcasting, which I’ve done for a client for some time, rings bells here, because it is all about the goals. Yet the obsession has always been—from way back; it’s probably diminished quite a bit now—“How many downloads do we get? What does Apple Podcasts say?” And then you go down rabbit holes when you look at the analytics reports about what delivered the clicks to your podcast site; you’re into serious eye-glazing territory unless you’re the techie who needs to know that kind of stuff.

I think the goals are key, absolutely key. And you made a very good point that it’s not always just about ROI, meaning money, the return on the investment. How many leads does it generate that lead to sales, perhaps? Having a podcast that is a lead generator, that’s great. There’s a goal when you say, “We want this episode to deliver us 16 inquiries about a widget that we’re selling,” in which case the whole chain of that has got to be well thought through. It’s not good enough just to stick your podcast up there and have a link on a podcast page on your website. You’ve got to think through what happens when they click to go to your site and get to the landing page, and how you track that. Enterprise firms particularly have access to really effective tools that map and track the end-to-end journey of visits to their sites: where visitors came from, who they are, particularly if they’ve identified themselves or they’re existing customers. So all that’s got to be part of your structure.
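One generic way to make that click-through journey trackable, offered here as a hedged illustration rather than anything prescribed in the episode, is to tag every podcast link with campaign parameters your analytics and CRM tools can attribute; the domain and campaign values below are hypothetical:

```python
# Sketch: build a UTM-tagged landing-page link for each episode so that
# analytics and CRM tools can attribute visits (and eventual inquiries)
# back to the podcast. The domain and parameter values are hypothetical.
from urllib.parse import urlencode

def episode_link(base_url: str, episode: str) -> str:
    params = {
        "utm_source": "podcast",
        "utm_medium": "audio",
        "utm_campaign": "b2b-widget-series",  # hypothetical campaign name
        "utm_content": episode,
    }
    return f"{base_url}?{urlencode(params)}"

print(episode_link("https://example.com/widgets", "episode-42"))
# -> https://example.com/widgets?utm_source=podcast&utm_medium=audio&...
```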

I had a conversation with someone about six months ago about starting a business podcast, and I’m getting déjà vu reflecting on part of that conversation: the goal emerged literally as a by-the-way at the very end. And I remember thinking at the time that a podcast was not what they should be using to achieve that goal. So you’ve got to have the right goal. Yet I also recognize that with vanity projects, there’s not much you can do, I suppose, if the person you’re talking to is convinced he or she wants to do this no matter what. The question I would ask as a communicator is: Do you want to get involved in something like that, no matter what the theme might be? Podcasting is in a different place than it was even five years ago, I would argue, in that most people I talk to now think of video first, not audio. And we do a video of our audio conversation. We don’t do much with the video; I stick it up on YouTube. So if you want to look at two talking heads on screen, you can.

    Shel Holtz: Well, yeah, the video gets recorded whether we want it to or not. So we might as well use it for people who prefer to get it that way.

Neville Hobson: Right. We might as well use it. Exactly. You can see our facial expressions—when I go like that, you can see it. But I think Paul’s post is worth reading. The thought to hold in your mind if you’re thinking about a podcast is: start with the goal first. Don’t think about how many downloads you’ll get and how you’re going to be like Joe Rogan. I often think those comparisons—when people say, “Joe Rogan’s podcast got 65 million downloads”—are completely irrelevant to what you’re likely to achieve with a B2B podcast, unless you have a big budget.

    Shel Holtz: Yeah, and there are goals that you can assign to a podcast that have nothing to do with ROI, nothing at all. It could be that you are trying to change the perception of your organization: “We’re not a stodgy organization. We have that reputation. We need to change it. Let’s get a fun, loose podcast out there so that it starts to move the needle in the other direction—that this would be a fun place maybe to come work”. There are podcasts that are aimed at attracting new recruits to the organization.

    Neville Hobson: Right, we mentioned that, yeah.

    Shel Holtz: There are podcasts that are aimed at promoting thought leadership. And of course you need to know what your goal for thought leadership is, but none of these are going to be directly tied to new revenue. That would be really, really hard to do.

    Neville Hobson: You tie it to other goals that you could measure. So you’ve got to have that. Yeah.

Shel Holtz: Exactly. And you can measure that as long as you know what it is at the point where you start. You mentioned that Paul did make the point: isn’t text better? When we started this podcast, when there were about 400 podcasts, most podcasts talked about podcasting. That was the theme. Every podcast was, “Let’s talk about podcasting.” And there was a lot of conversation back then about why audio is better. There were some critics; I remember one person said, “I can read five articles in the time it takes to listen to one podcast.” But my answer was, “Yeah, but I can’t read any articles when I’m driving my car.” I can listen to a podcast, though. For me, audio—and this is not true of video, by the way—is the only form of media available to us that people can pay attention to while they’re doing something else, whether it’s folding laundry or working out or walking the dog or driving somewhere or mowing the lawn. Whatever it might be, you can listen and absorb information. You can’t read; you can’t watch a video.

God help me if I ever see anybody driving and watching a video at the same time. I actually did see that once. Somebody had their phone mounted in the car with a video playing, and it wasn’t the road ahead of him or behind him; it was a TV show or something. And I went, “My God,” because that’s worse than being on your phone. But I continue to maintain that the value of audio is the ability to listen while you’re doing something else. And there have also been studies about the emotion conveyed by hearing somebody’s voice—that you’re able to connect with it much more quickly than by reading a quote. Where this is leading me is that if you are going down the road of producing a podcast, know why that format is of value to you. Why is that the approach to take in terms of the goal that you’re trying to achieve? Is that emotional connection important? Are you trying to reach an audience that has limited time and may listen to your show when they’re doing something else?

Finally, a podcast could be part of a larger campaign, just one element. It could be the audio version of something produced for people who aren’t going to partake of another element that was produced. I am wrapping up work on a book; the proposal is almost ready to go, there is an agent waiting to look at it, and it’s probably going to get published. When it’s published, the proposal calls for a Substack-like newsletter to go along with it and a new podcast that I am going to be launching with Steve Crescenzo on internal communications, right here on the For Immediate Release Podcast Network. It’s just one element, but the main piece is the book, right? It’s not the podcast. The podcast is supporting.

And one more thing: you talk about podcasting being in a very different place today than it was five years ago. One of the things that defines that is the fact that you now see news made based on what somebody says on a podcast. It’s no longer only what they said on an interview show or in a speech—well, it is, but in addition to that, it’s now “on a podcast where he was interviewed, this politician said this” or “this business leader said that.” So another reason to podcast might be as a way to get quotes out there that could get picked up elsewhere and make news. I think shoehorning podcasting into this one ROI bucket is a mistake. And yet Paul is absolutely right in his bottom-line conclusion: you’d better know what it is you’re trying to achieve with this before you push that record button.

Neville Hobson: Yeah, that’s the bottom line. Absolutely right. So goal-setting is key. Start with that, not with how many downloads you expect or whether you can rival Joe Rogan or whatever it might be. Good stuff, that, I have to say. Okay, so our final story today—we’re back to the AI topic. Question: Are chatbots the new influencers?

    Shel Holtz: Everything that goes around comes around.

    Neville Hobson: For the past two decades, digital marketing has largely been about visibility. First it was banner ads, then search, then social, then influencer marketing. Each wave brought new tools, new behaviors, and new anxieties. Now, according to a recent New York Times piece, we’ve entered another phase: chatbots are the new influencers and brands have to woo the robots. The article describes how companies are discovering that when customers ask ChatGPT, Gemini, or Claude, for example, about a product or provider, the answer that comes back may not reflect what the company believes about itself. In one example, a healthcare software firm asked chatbots about its own offerings and found outdated, incomplete, and sometimes misleading information being surfaced. That moment triggered a realization: if AI models are shaping how people consume information, then influencing those models becomes part of marketing strategy.

    This has been framed as the next evolution of SEO, says the New York Times. Except now it has a new acronym: AEO (Answer Engine Optimization) or GEO (Generative Engine Optimization)—a topic we discussed last September in a For Immediate Release interview with Stephanie Grober at the Horowitz Agency in New York. Great conversation that was. Instead of trying to rank on page one of Google, brands are now trying to influence how large language models synthesize and present information in response to prompts. That changes the game. Chatbots don’t care about vibe, emotional resonance, or brand storytelling. They prioritize clarity, structured detail, and volume. Some brands are flooding the zone with highly targeted content. Others are obsessively auditing Reddit because Reddit turns out to be one of the most cited sources in AI-generated answers. In effect, the brand is no longer competing only for human attention; it is competing for algorithmic interpretation.

    That’s actually well said there, Shel. We talked about this very topic at least twice in the last six months of last year. Not only humans; you’ve got to look at the bots as well. I think that introduces a deeper shift. Historically, search engines pointed users to sources. Chatbots increasingly summarize, recommend, and decide what is worth mentioning. The intermediary is no longer neutral. It synthesizes, which means the battleground for reputation is moving upstream from persuasion to data conditioning.

    But here’s the counterpoint. We’ve been here before and we’ve discussed it in this podcast before. Every major digital shift has been framed as existential. SEO was supposed to change everything, then social algorithms, then influencer marketing. Each time an optimization industry sprang up, each time brands flooded the zone with content, and each time the platforms evolved in response. So the question is: Is this genuinely a structural shift in how reputation is constructed, or simply the next optimization cycle dressed up as revolution? Because there’s a real risk here. If brands begin producing vast volumes of content purely to influence AI outputs, do we elevate substance or do we accelerate a new kind of synthetic noise? Could be all that AI slop we’ve been hearing about a lot recently. And if Reddit posts and forum threads are disproportionately shaping chatbot answers, are we witnessing democratization of influence or amplification of unverified commentary? So are chatbots truly the new influencers we must court, or are we watching the early stages of another marketing arms race that may look very different once the models mature? What do you think, Shel?

    Shel Holtz: It’s a fraught topic. I mean, first of all, as organizations trip over themselves to figure out how to appear in AI query responses and appear the way they want to, is that going to taint AI responses to the point that they’re no better than a Google search response? I mean, you remember the original Google where you typed in a query and you got 10 items that were directly related to what you were interested in. And now you have to wade through the ads and the other crap that populates the Google search results before you get to anything that’s even remotely relevant.

    Neville Hobson: Yeah. Slop is the word, not crap, slop.

Shel Holtz: Yeah. Okay, yeah. But you have some other issues here. We hear that Reddit figures prominently in the results. And then you hear from somebody else: No, no, no, it’s earned media that is prompting what gets injected into the responses to queries in the large language models. I just saw a study, published on February 18th, that found 44% of ChatGPT citations come from the first third of the content being cited. So do you top-load your content with the main information that you want the AI models to grasp, even if that’s not necessarily the way you want people to read the content that you’re producing?

    And each model does something different. The fact that ChatGPT citations come from the first third of content doesn’t mean that Claude’s do or Gemini’s or Grok’s. And then every time they release a new model, has it changed? So I think we could be chasing our tails with this kind of information. Are chatbots the new influencer? Well, they’re a new influencer. Certainly people are getting information from these—I do. I say, “This product isn’t working for me. What are the alternatives?” And it tells me, and I’m sure it’s leaving out good products that just haven’t got their information into the places where it’s going to be absorbed by an AI being trained on this content or searching.

So, you know, I think we just need to produce good content that answers questions. We talked about this a couple of months ago. When you look at the tools being implemented in the enterprise, employees are no longer reading the articles we produce that say, “Here’s the justification and the context and the background for the change the organization’s going through.” They type a query and they get a reply. Where’s that reply coming from? It’s not coming from the context we provided unless we top-load, front-load the content with that answer in order to accommodate the chatbot. Is that what we want? This is probably a time to be rethinking the way we communicate altogether because of this situation. But I think creating good content that does a good job of answering questions, that puts the main information at the top…

Shel Holtz: I mean, you know, somebody ought to invent an inverted pyramid style of writing that starts with the who, what, when, where, why before you get to the detail. Just do good content and you’ll be fine.
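As a hedged illustration of that front-loading idea, here is a small sketch that checks whether key messages appear in the first third of a page’s text; the file name and phrases are hypothetical:

```python
# Sketch: check whether your key messages appear in the first third of a
# page's text, reflecting the study cited above (44% of ChatGPT citations
# drawn from the first third of content). Purely illustrative; the file
# name and phrases below are hypothetical.

def front_loaded(text: str, key_phrases: list[str]) -> dict[str, bool]:
    cutoff = len(text) // 3
    first_third = text[:cutoff].lower()
    return {phrase: phrase.lower() in first_third for phrase in key_phrases}

with open("announcement.txt", encoding="utf-8") as f:  # hypothetical file
    article = f.read()

report = front_loaded(article, ["why we're reorganizing", "what changes for you"])
for phrase, present in report.items():
    print(f"{'OK  ' if present else 'MISS'} {phrase}")
```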

Neville Hobson: That’s a good tip, I think. To me, it just seems like everything is so manipulated. I was thinking this the other day about something I was searching for online, and I looked at what Google produced. Because Google, by the way, really has improved hugely in the last six months in terms of what it actually offers you when you search. The AI generates a summary of the top results, with citations you can click on if you want. My experience is that the summary is often good enough for what I need. I might scroll down to see who else is saying what. And then you’ve got little drop-downs of other responses to that search term. Great. It usually gives me what I want.

But basically, when I see stuff like this, I’m thinking the manipulation is huge. Would it not be simpler if we just ditched all this stuff? No, that’s not the answer. The world’s moved on; we have to live with this. But it makes it difficult to trust anything the way you used to be able to. So do I trust this answer because Google is giving it to me, which implies I trust Google? Or is it because it looks about right, it’s what I’m looking for, so I trust whoever produced the answer? I don’t know. You have to make your own judgment call on this, because if you’re using another search engine, it’s going to be very different.

    Shel Holtz: Yeah.

Neville Hobson: If you use your chatbot—and that’s actually quite interesting, because whether it’s ChatGPT or Claude or whatever it might be, you’re using your chatbot, not a search engine—how do you feel about that? Do you implicitly trust the chatbot and what it’s telling you? Would it be different from what Google would tell you if you did a Google search? Probably yes. Not in terms of meaning, but the words are going to be different, obviously, and maybe the sources will be different. So if you need to do that, fine. I don’t think you typically need to do that. You just go to Google or whatever it might be that you’re accustomed to, that you trust, search, and get your answer.

But you’ve now really got to be careful, particularly in light of the story we talked about earlier, about the developer who was stitched up by an AI agent. That kind of reputation-damaging content might show up in search results too. So this is the landscape we’re in now. You have to get used to it.

Shel Holtz: I still find the top 10 blue links on the first search engine results page from Google far less valuable than they used to be. I still find that the first three or four are paid and irrelevant. I see it all the time.

Neville Hobson: I don’t see that. I don’t see that at all. I don’t see paid results at all at the top; I see them a little further down. Yeah, okay, interesting. Maybe it’s different over here—I’m doing google.co.uk, not google.com. So maybe there’s a difference.

Shel Holtz: I definitely do. Listeners, what are you seeing on Google? Are you logged in to Google when you’re doing this? Okay. I have been using Perplexity more and more because I’m able to refine my search, saying, “I’m looking for this, not this, and I need it from articles that have been published in the last six months.” And it does an excellent job of providing me with great results. Now, I haven’t compared it to what Google would give me, but I have to believe it’s more relevant, because it is trying to satisfy me rather than the advertisers who have paid to have their links promoted on Google.

    Neville Hobson: Yeah, yeah, typically. Okay. It’s funny. I mean, I’ve just done a search on Google right now. There’s not a single sponsored link in my list at all. Not one. And I do see them occasionally, but they’re kind of halfway down; it says “sponsored”. I’m not seeing any for this search term I just searched on. So I’m scrolling further down the page—I’m not seeing any. Results are personalized. Try it without personalization; maybe that might make a difference. But so I’m quite happy with what I see from this. I see in this particular example…

    Shel Holtz: Hmm.

    Neville Hobson: …it gives me the text upfront, as you know, “to see more”. That will tell me more about that. Again, scrolling down the page, don’t see anything that’s saying sponsored, which is what you normally do see. I don’t know. But I mean, the point is, I think you need to determine yourself: Do you trust what it’s telling you? Are you happy with that result, whether it’s search at Google or whether it’s your favorite chatbot? I was using Perplexity a lot, Shel. I really was. I stopped using it entirely. I didn’t like what it was doing. I didn’t like it at all. Yeah. But I have to tell you, I stopped flipping from one tool to another to see. No, I stick with what I like, what I know works for me. And I don’t bother trying to second-guess it. But let me see what Gemini says about this. Although I do that occasionally, I have to say.

    Shel Holtz: I had stopped for a while and I’ve gone back to it. It’s improved. It has improved considerably in the last couple of months.

    Neville Hobson: I did a research project about two weeks ago where I did spend time trawling different tools and getting complementary or different results. I then had one of those—ChatGPT—summarize it all. But hey, it’s a lot of work and I didn’t need to do that. So I’m not going to do that as a matter of course.

Shel Holtz: I did. On our intranet, I have a “construction term of the week.” This has been going on for about six and a half years—every week, a new definition of a new term—and I’ve gone through everything that has been provided to me. So now I’m asking an AI: “Give me a list of 20 construction-related terms.” And I’ll get more specific than that; I’ll say, “around water infrastructure projects” or things like that. Then I’ll say, “Okay, I like this one. Give me a two-paragraph definition of that.” I’ll copy and paste that definition, go to one of the other LLMs, and say, “Assess this for accuracy, list what you would change, and then rewrite it the way you would to incorporate your corrections.” And I find that gets me a much better definition. So I’m frequently bouncing around these tools.
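A minimal sketch of that two-model workflow, drafting with one LLM and asking a different one to audit and rewrite, might look like this; the model names, the sample term, and the prompts are illustrative assumptions, not the exact tools or wording used on the show:

```python
# Sketch of the draft-then-audit workflow described above: one model writes
# a definition, a second model checks it for accuracy and rewrites it.
# Assumptions: the openai and anthropic SDKs are installed, and API keys
# are set in OPENAI_API_KEY / ANTHROPIC_API_KEY.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
claude_client = anthropic.Anthropic()

term = "cofferdam"  # hypothetical construction term

draft = openai_client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": f"Give me a two-paragraph definition of '{term}' "
                   "for a construction-industry intranet.",
    }],
).choices[0].message.content

review = claude_client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model choice
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Assess this definition for accuracy, list what you "
                   f"would change, then rewrite it:\n\n{draft}",
    }],
)
print(review.content[0].text)
```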

    I also find that I’ll switch which tool I’m using the most based on who’s released the best model most recently because I find the latest Claude model is just amazing, but then Gemini just released a new one that apparently is blowing Claude away. I want to use the one that’s going to give me the best results, not the one I’m most comfortable with. So I’m changing all the time.

Neville Hobson: Yeah, I find the one I’m most comfortable with is the one that gives me the best results, and I’m very happy with that—but again, our uses are very different. I don’t use it for the kind of stuff you do, like “Here are these definitions, give me a summary and find the best one” or whatever. I tend not to do that kind of work. But I’m very happy with ChatGPT Plus, which I’ve been using for a while now. I use NotebookLM occasionally, particularly when I’m looking at dense academic reports. But I’m kind of OK with that. So the point, I think—to summarize all of this—is that chatbots are new influencers. The New York Times piece is a good, thought-provoking piece. And the caveats, as I saw them, are the risk factors we just spent a while discussing. The writer’s point that if Reddit posts are disproportionately shaping chatbot answers, we may be witnessing the amplification of unverified content—I think that’s a very good point to make. Hence, even more so, and I don’t know how comfortable we’d feel with this, you’ve got to verify everything.

I do that. And I find, depending on what it is… I can’t think of a good example, frankly, Shel, but you know how it goes: you’ve spent some time telling your AI system what you want it to do. You might have had a to-and-fro, back-and-forth conversation about that. That’s common for me, not just “Here’s a prompt, off you go and do it.” No. It comes back with something; I say, “Fine, but what do you mean by this?” or “I want you to do that as well.” That goes on all the time. And then the checking of things takes even longer. And I’m totally OK with that, because I need to be sure—and this must apply to everyone… Or maybe it doesn’t. Maybe it doesn’t apply to folks who work at Deloitte. Sorry, I shouldn’t have said that, but it occurs to me.

You need to check it for your own peace of mind, so that what you’re sharing with the other person, whether it’s a client or a colleague, is accurate to the best of your knowledge, and there’s nothing you’ve done that would diminish its accuracy, or anything you haven’t done, meaning not verified or checked everything. And like you said earlier in our discussion, there’s a lot more to this than just verifying. I get that too. But it takes time, and maybe that’s why people don’t do it. The folks who skip it see the chatbot as the easy tool: dump all this stuff on it so they can take the day off or do other things. I don’t believe that’s everywhere, but some people will think that. So it is a tricky one to answer. I think we’ve just got to do what we’re comfortable with, what meets our objectives, and take as much care as possible in producing the best work we can.

    Shel Holtz: Yeah, and for communicators, recognizing that chatbots are a new influencer means that we have to think about how we take advantage of that. And I’m going to emphasize again: they are a new influencer, not the new influencer. Kim Kardashian has not hung her head in shame and retreated into a dark room to wait to die. She still has millions and millions of followers and holds up a product and it drives sales. You know, the—the old influencers haven’t gone anywhere and still warrant some attention.

    Neville Hobson: Well, true. So the Times, though, says—the question they asked is: Are chatbots the new influencers? So our answer to that would be: No, they are one of the new influencers.

Shel Holtz: Right, yeah—no, they’re one of them; add them to the mix. So that’ll wrap up this episode of For Immediate Release, episode number 502, our long-form episode for February. We do hope you will comment on this. All of our comments these days come from our LinkedIn posts, so check LinkedIn and follow either one of us. We also share these posts on Facebook in three places: we have a For Immediate Release Podcast Network community and a For Immediate Release page, in addition to you and me sharing them individually. We’re also on Threads and Bluesky. Leave a comment in any of those places; we’ll pick it up and share it in the March long-form episode.

You can also send us an email at [email protected]. You can attach an audio file, and you can record an audio file directly from the For Immediate Release Podcast Network website—there’s a “send voicemail” tab over on the right-hand side, from Speakpipe, the vendor that provides that feature. I actually got a voicemail from the website last month, but it was just somebody being obscene. It had nothing to do with communication, but I got excited. You can also leave a comment directly in the show notes. I mean, it is a blog; there’s a place to put comments on a blog. Go figure.

    Neville Hobson: Wow, should have played it. An obscene phone call, okay.

    Shel Holtz: All these ways to comment, please do and be part of this conversation. And our next long form episode will be recorded on Saturday, March 21st. We will drop that on Monday, March 23rd. Until then, that will be a “30” for For Immediate Release.

    The post FIR #502: Attack of the AI Agent! appeared first on FIR Podcast Network.

    23 February 2026, 8:10 am
  • 21 minutes 46 seconds
    FIR #501: AI and the Rise of the $400K Storyteller

    AI isn’t replacing communicators — it’s amplifying the value of communication, especially storytelling and strategic writing. In this short, midweek FIR episode, Neville and Shel explore how the hottest jobs in tech are increasingly about telling stories, not writing code, with Netflix, Microsoft, Adobe, Anthropic, and OpenAI all hiring communications and storytelling teams at salaries ranging from six figures up to $775,000 per year. Even AI labs themselves are posting compensation packages around $400K for storytelling and communications roles, signaling that they understand the irreplaceable human value of meaning-making in an age of automated content generation.

    The distinction Neville and Shel highlight between traditional messaging and true storytelling proves critical: conventional communications start with what the brand wants to say, while storytelling starts with what audiences actually care about. The strongest communicators will be those who move beyond prescriptive messaging to tell genuine human stories.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, February 23.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript:

    Neville Hobson: Hi everyone and welcome to For Immediate Release. This is episode 501. I’m Neville Hobson.

Shel Holtz: I’m Shel Holtz. And here’s some good news for communicators: artificial intelligence isn’t replacing us, it’s amplifying the value of communication itself, especially storytelling and strategic writing. If you’ve been feeling that AI spells doom for writers and communicators, the labor market is telling a very different story. We’ll tell you that story right after this. Let’s start with something concrete. The hottest jobs in tech right now aren’t about writing code or managing data. They’re about telling clear, compelling human stories. Recent hiring trends show that giants like Netflix, Microsoft, Adobe, Anthropic, and OpenAI are aggressively expanding communications and storytelling teams, with roles offering from six figures up to as much as $775,000 a year for senior leadership positions, without any requirement to write a line of code. Why? Because AI has flooded the internet with cheap automated output, what some observers are calling slopaganda. I love this word, slopaganda; I hadn’t heard it before I read that article. Millions of words get generated every minute, and most of it lacks clarity, insight, context, and meaning—exactly the things that real communicators deliver. Companies are recognizing that the ability to cut through that noise with strategic narrative creates trust, authority, and differentiation in the market. Even the AI labs themselves, including OpenAI and Anthropic, are willing to pay top dollar for storytellers. One analysis said that nearly $400,000 compensation packages are being posted specifically for storytelling and communications roles at these firms, exactly because humans excel at crafting nuanced messages that machines simply can’t. So here’s the underlying shift communicators need to understand: AI automates tasks, but meaning-making remains deeply human. Machines can generate text, but they don’t know which stories matter to whom or why. And we keep hearing communicators and writers venting on LinkedIn about machines lacking judgment, empathy, context, and strategic framing, all those hallmarks of great communication. That’s exactly what companies are looking for. And in an age of automated noise, those abilities create value.

Shel Holtz: That’s a theme echoed across industry thinking. A Forbes piece on storytelling in the age of AI highlights that storytelling is one of the most powerful tools we have, and one of the most powerful tools leaders have. It helps audiences remember facts wrapped in emotion, connect data to human experience, and anchor organizational vision in something people can feel and act on. Another Forbes analysis argues that storytelling isn’t just about communication; it’s also a career pathway. When individuals and organizations tell clear stories about evolving roles, skills development, and future opportunities, they make the future feel navigable rather than threatening. This matters for internal communication too. HR and people leaders are increasingly using narrative to frame change and build resilience. When employees feel adrift amid all the talk of AI disruption, a coherent story about how the organization is evolving and where people fit in is one of the most effective ways to build trust and engagement. Even the broader hype narrative around AI’s impact on jobs, including viral essays warning of sweeping automation, underscores this point. Some of the loudest voices talking about disruption are exactly those using storytelling to shape a narrative about the future. But the data so far suggests that the real impact of AI isn’t mass job elimination; it’s task transformation, with humans shifting into roles that emphasize strategy, creativity, judgment, and communication, exactly the space where we storytellers thrive. So for communicators who worry that AI might make them obsolete, here’s the reality: your craft isn’t threatened, it’s elevated. AI makes routine work easier, but narrative leadership, strategic framing, and contextual clarity are becoming even more essential. The labor market isn’t pulling back its investment in communicators; it’s paying up for them, because the ability to tell a clear human story is now a competitive advantage. With the world drowning in automated content, meaning is scarce. And communicators are the ones who turn noise into narrative, confusion into clarity, and information into influence. That’s not something AI replaces; it’s something only humans can do well. And that’s why, even in an AI era, talented communicators are irreplaceable and more valuable than ever. And by the way, if the tech companies feel the need to cut through the noise created by all that slopaganda—I got to use that word again—other industries will figure out sooner or later that they need to as well.

Neville Hobson: Listening to what you’re saying there, Shel, what strikes me is how similar themes are now surfacing here in the UK. The Times ran a piece recently about companies hiring chief storytellers specifically to cut through what they, and everyone else, call AI slop. What’s interesting is that it isn’t framed as anti-AI; it’s framed as a response to saturation. When content becomes easy and abundant, meaning becomes scarce. Recruiters are saying demand for storytelling roles has doubled in the past year, and the way they define storytelling isn’t about clever copy; it’s about starting with what people care about rather than what the brand wants to say. There’s also a strong internal dimension: storytelling being used to align remote teams, break down silos, and create shared culture. So I’m left wondering whether this chief storyteller trend is something genuinely new or whether we’re simply rediscovering the strategic craft of communication in an AI-saturated environment. And finally, if AI makes it easier to generate content, does that mean communicators need to become curators of meaning rather than producers of material?

    Shel Holtz: Interesting question. And I think that this is somewhat different. We have been telling stories, but I think you have to define what we mean by storytelling here, because we write stories that aren’t really stories. It’s just a term that we use as a synonym for article. I wrote a story the other day. Was it really a story or was it a communication piece?

Shel Holtz: There are so many stories that we could tell in the world of organizational communication that are really just prescriptive or a statement of fact. We’re getting the news out, but we don’t have a beginning, middle, and end. We certainly don’t have a protagonist. We’re not looking at Joseph Campbell’s hero’s journey and trying to figure out how to apply that to the tales we tell. There is a guy out there, Donald Miller, who has this thing called StoryBrand, which is fascinating. It’s designed to put your customer into the story as the hero, with the company as the mentor or guide who helps the hero achieve their goal through their journey. And I really like it. There are free tools you can use to map all this out for your brand or your product. It gets us away from saying, “Isn’t this product great? Look how great it works,” and toward telling a genuine story instead. And I think this is why narrative and story, rather than communications or public relations, are the labels being attached to these job descriptions that are all over LinkedIn. When I saw the story, I went and looked, and there are dozens and dozens of them. And the salaries are jaw-dropping when you consider that the typical communication manager is making about $108,000 a year, according to one of these articles, versus $400,000 with full benefits and three days of remote work—because I read these job descriptions. This is very encouraging for our profession. But if you’re the kind of communicator who writes articles that just say, “We have an employee assistance program. It offers the following bulleted services. You should call if you have emotional or financial problems,” that’s not what they’re looking for. They’re not looking for you. They’re looking for the person who wrote that article I’ve referenced 50 times on this podcast about the employee who was divorced and depressed, started drinking, and gained 100 pounds, and finally called the EAP when he hit rock bottom. They worked with him to find something that really excited him, and it turned out to be ballroom dancing, and now he’s a national champion traveling around the world. He’s lost more than a hundred pounds; he’s quit smoking and drinking—all because of the EAP. Which of those two stories are you more likely to read? It’s absolutely the story of the guy who used the EAP. People can relate to that. People don’t even read the stuff that says we have one and here’s what it offers. So I think cutting through the noise with genuine stories that tell the tale of what the organization is trying to convey—that’s what they’re looking for.

Neville Hobson: So interesting. The title chief storyteller sounds new and fashionable, right? But when you unpack it, much of it looks like what strong communication leaders have always done: alignment, translation, cohesion, behavioral framing. That opens up a richer debate, I think. Is this a genuinely new C-suite function, or a rebranding of strategic communication crafted in an AI era? It sounds a lot like the latter to me.

Shel Holtz: It sounds a lot like the latter, but I think there’s a bit of the former as well, because we’re talking about a transition of role. I think communicators who are employed right now need to start telling more stories if they want to keep their jobs, because if all you’re doing is writing the stuff that can be written by AI just by giving it the facts and saying, “Turn this into an article,” I think you’re toast. But if you can tell a genuine story that moves people, then your job is probably secure, and you may be qualified to apply for one of these $400,000-a-year jobs. I don’t think they’re going to hire the average communicator who’s doing a pretty good job at their organization, even at the C-suite level, if they can’t put together the kind of narrative these companies are looking for. Certainly there are companies doing this, and there are communicators in those companies doing this, but I don’t think it’s most. I think most are cranking out the typical content that just conveys the news. And basic journalism—the who, what, when, where, why—if I can pop that into Claude or ChatGPT or Gemini, especially if I’ve trained it on my writing style (which I have, by the way, on Gemini), it’ll turn out a passable article that you can then edit in 15 minutes and be done. That’s not what they’re looking for. I think they would argue that that probably is slopaganda. And this is exactly the noise they’re looking for somebody to help them cut through.

Neville Hobson: So one of the strongest lines in the Times piece is the distinction between messaging and meaning. Traditional comms starts with what the brand wants to say, says the Times; storytelling starts with what people care about. That’s a strategic pivot, I would say. Messaging is output-driven; meaning is audience-driven. AI is good at output; humans are better at contextual meaning. So should we now be looking at this as a shift from messaging to meaning?

Shel Holtz: Absolutely. I think that’s exactly what we’re talking about here: the focus on the audience. And again, this is what Donald Miller’s StoryBrand (which has paid us no consideration for the reference here) does exactly. He puts the customer at the center of the company or the brand’s story. And I think that’s what’s different. That’s the transition, the pivot, that communicators need to make. I don’t think it’s difficult. And if you haven’t written fiction, I would suggest that you read about Joseph Campbell’s hero’s journey. There’s a wonderful book—I can’t remember the author’s name, but I’ve read it twice—called The Writer’s Journey. He’s focused on writing fiction, but he talks about how you apply the hero’s journey to things like Star Wars and The Wizard of Oz, and he has these tropes that everybody is familiar with that he uses to explain how to write this way. He tells you that every successful film in particular, and novels as well, uses this formula. I read it twice because I really had to unpack it in a way that worked for organizational communication rather than novel and film writing. But it does work. And then I found Donald Miller and his StoryBrand, and I said, there it is, right there: fill in the boxes—who is the mentor, who are the other characters that appear in this formula. It’s well worth taking a look at, and his book is worth reading as well. I’ll have a link to StoryBrand in the show notes.

Neville Hobson: Yeah. So I’m just going through my mind thinking about where this conversation is heading. The shift, which to me makes complete sense—and the Times article and the Business Decider piece, I think, support this—is definitely from messaging to meaning, something we’ve talked about quite a bit. The Times piece says the noise itself isn’t the problem; it’s indistinguishable noise. And that makes sense. That kind of metaphorical phrase reminds me of conversations we’ve had before about what a communicator can be, using artificial intelligence to enhance their abilities. So I’m trying to see what the path ahead looks like for this. It seems to me that AI is going to play an even more significant role in the future for communicators who are shifting from messaging to meaning. And I must admit, I don’t believe the scenario you painted earlier, that the communications person who has been doing the same stuff for years can just keep doing it because there’s a market for it. I don’t think that’s true. I think AI is the threat for those people. So if AI is good at output, as the concluding points in the Times piece have it, humans are better at contextual meaning. That, surely, is what people are looking for when they pay half a million bucks, or whatever the salary is…

Shel Holtz: Yeah, I agree with that.

Neville Hobson: …to a chief storyteller. I think there’s huge confusion here, and inserting the phrase chief storyteller into the picture, where it’s basically just a fancy job title, doesn’t help, it seems to me. It’s inevitable, I suppose, that you’re going to get that. As you said, it’s all over LinkedIn that chief storyteller is an executive function. Yeah, but that’s not the right interpretation, I don’t believe. So it doesn’t help clarify what the picture is here.

    Shel Holtz: I don’t know. I would be very curious to look at the org charts of the companies that are seeking these positions to see if they are separate and distinct from the public relations or communications function. We talked several weeks ago about the proposed new definition of public relations, and it goes way beyond this. I don’t know this for a fact, but I’m thinking that what these companies are doing is creating a new function that will live alongside, and presumably under the same umbrella as, the PR or corporate communications department, which is building relationships with key stakeholders. But the storytellers are out there creating the content that’s going to cut through the slop, aimed at particular audiences who are ripe for this kind of storytelling. I was about to say messaging; we’re trying to get away from messaging. And the PR department will continue to do the earnings releases and the thought leadership and the negotiations with critics and all of the stuff that PR typically does. I don’t get the impression that these jobs sit in the public relations department.

    Neville Hobson: No, I would say not, particularly as one point The Times made is that there’s a significant element of team building and so forth, so an internal focus in organizations for this sort of role. It’s not just an external public relations function by any means. It’s interesting you mention the definition. I published a post on my blog this morning about that, actually, looking at what the PRCA has done. It’s only one professional body, and I’m thinking this isn’t going to fly unless everyone gets behind it. That’s a different topic than what we’re talking about, though it sort of fits in here. I just have a problem with this chief storyteller title, frankly. It doesn’t really fit what this role actually is. And I do believe, and you’ve partly prompted this clarity in my thinking, that this is about meaning, not about content production. Content production is what AI does; the interpretation of it, the meaning and significance of it, is what the human does. Now, suppose you can present your skill as something in that area to an organization that’s willing to pay $400,000. Again, I’d be interested to see the job description behind that salary level. I haven’t seen it; I’ve not actually looked, I must admit. But it’d be interesting to see how they’ve described the role they’re willing to pay 400 grand for. I would imagine they’re absolutely swamped with applications, which is where AI comes into play: sifting out all the no-hopers, basically.

    Neville Hobson: But it is interesting, very interesting. And this could be a great catalyst for a discussion about the role of the communicator in organizations in light of this development. That seems to me to be a good thing to have.

    Shel Holtz: I can’t imagine somebody at OpenAI or Anthropic sifting through hundreds, probably thousands, of resumes. They’re absolutely feeding them all to AI; I’d be shocked if not. And for the record, there are also some of these positions that don’t have storytelling in the title. I saw a couple that had narrative in the title instead. But I think they’re all getting at this notion of telling a powerful story that evokes emotions and pulls audiences in, rather than advertising or traditional marketing speak. That’s what’s going to cut through the, I get to say it again, slopaganda. And that’ll be a 30 for this episode of For Immediate Release.

    The post FIR #501: AI and the Rise of the $400K Storyteller appeared first on FIR Podcast Network.

    16 February 2026, 6:36 pm
  • 19 minutes 16 seconds
    FIR #500: When Harassment Policies Meet Deepfakes

    AI has shifted from being purely a productivity story to something far more uncomfortable. Not because the technology became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics. An article in HR Director Magazine argues that AI-enabled workplace abuse — particularly deepfakes — should be treated as workplace harm, not dismissed as gossip, humor, or something that happens outside of work. When anyone can generate realistic images or audio of a colleague in minutes and circulate them instantly, the targeted person is left trying to disprove something that never happened, even though it feels documented. That flips the burden of proof in ways most organizations aren’t prepared to handle.

    What makes this a communication issue — not just an HR or IT issue — is that the harm doesn’t stop with the creator. It spreads through sharing, commentary, laughter, and silence. People watch closely how leaders respond, and what they don’t say can signal tolerance just as loudly as what they do. In this episode, Neville and Shel explore what communicators can do before something happens: helping organizations explicitly name AI-enabled abuse, preparing leaders for that critical first conversation, and reinforcing standards so that, when trust is tested, people already know where the organization stands.

    Links from this episode:

    The next monthly, long-form episode of FIR will drop on Monday, February 23.

    We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].

    Special thanks to Jay Moonah for the opening and closing music.

    You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.

    Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.

    Raw Transcript:

    Shel Holtz: Hi everybody, and welcome to episode number 500 of For Immediate Release. I’m Shel Holtz.

    Neville Hobson: And I’m Neville Hobson.

    Shel Holtz: And this is episode 500. You would think that would be some kind of milestone we would celebrate. For those of you who are relatively new to FIR, this show has been around since 2005, and we have recorded far more than 500 episodes in that time. We started renumbering the shows when we rebranded. We started as FIR, then we rebranded to the Hobson and Holtz Report because there were so many other FIR shows. Then, for various reasons, we decided to go back to FIR and we started at zero. But I haven’t checked: if I were to put the episodes we did before that rebranding together with the episodes since then, we’re probably at episode 2,020 or 2,025, something like that.

    Neville Hobson: I would say that’s about right. We also have interviews in there, and we used to do things like book reviews. What else did we do? Speeches...

    Shel Holtz: Speeches — when you and I were out giving talks, we’d record them and make them available.

    Neville Hobson: Yeah, boy, those were the days. And we did lives, clip times, you know, so we had quite a little network going there. But 500 is good. So we’re not going to change the numbering, are we? It’s going to confuse people even more, I think.

    Shel Holtz: No, I think we’re going to stick with it the way it is. So what are we talking about on episode 500?

    Neville Hobson: Well, this episode has got a topic in line with our themes and it’s about AI. We can’t escape it, but this is definitely a thought-provoking topic. It’s about AI abuse in the workplace. So over the past year, AI has shifted from being a productivity story to something that’s sometimes much more uncomfortable. Not because the technology itself suddenly became malicious, but because it’s now being used in ways that expose old behaviors through entirely new mechanics.

    An article in HR Director Magazine here in the UK published earlier this month makes the case that AI-enabled abuse, particularly deepfakes, should be treated as workplace harm, not as gossip, humor, or something that happens outside work. And that distinction really matters. We’ll explore this theme right after this message.

    What’s different here isn’t intent. Harassment, coercion, and humiliation aren’t new. What is new is speed, scale, and credibility. Anyone can use AI to generate realistic images or audio in minutes, circulate them instantly, and leave the person targeted trying to disprove something that never happened but feels documented. The article argues that when this happens, organizations need to respond quickly, contain harm, investigate fairly, and set a clear standard that using technology to degrade or coerce colleagues is serious misconduct. Not just to protect the individual involved, but to preserve trust across the organization. Because once people see that this kind of harm can happen without consequences, psychological safety collapses.

    What also struck me reading this, Shel, is that while it’s written for HR leaders, a lot of what determines the outcome doesn’t actually sit in policy or process. It sits in communication. In moments like this, people are watching very closely. They’re listening for what leaders say and just as importantly, what they don’t. Silence, careful wording, or reluctance to name harm can easily be read as uncertainty or worse, tolerance. That puts communicators right in the middle of this issue.

    There are some things communicators can do before anything happens. First, help the organization be explicit about standards. Name AI-enabled abuse clearly so there’s no ambiguity. Second, prepare leaders for that first conversation because tone and language matter long before any investigation starts. And third, reinforce shared expectations early. So when something does go wrong, people already know where the organization stands. This isn’t crisis response, it’s proactive preventative communication. In other words, this isn’t really a story about AI tools, it’s a story about trust — and how organizations communicate when that trust is tested.

    Shel Holtz: I was fascinated by this. I saw the headline and I thought it was about something else altogether because I’ve seen this phrase, “workplace AI abuse,” before, but it was in the context of things like work slop and some other abuses of AI that generally are more focused on the degradation of the information and content that’s flowing around the organization. So when I saw what this was focused on, it really sent up red flags for me. I serve on the HR leadership team of the organization I work for. I’ll be sharing this article with that team this morning.

    But I think there’s a lot to talk about here. First of all, I just loved how this article ended. The last line of it says, “AI has changed the mechanics of misconduct, but it hasn’t changed what employees need from their employer.” And I think that’s exactly right. From a crisis communication standpoint, framing it that way matters because it means we don’t have to reinvent values. We don’t have to reinvent principles. We just need to update the protocols we use to respond when something happens.

    Neville Hobson: Yeah, I agree. And it’s a story that isn’t unique or new even — the role communicators can play in the sense of signaling the standards visibly, not just written down, but communicating them. And I think that’s the first thing that struck me from reading this. It is interesting — you’re quoting that ending. That struck me too.

    That level of expectation must be met. The part about not all of it sitting in process and so forth with HR, but in communication: absolutely true. Yet this isn’t a communication issue per se. This is an organizational issue where communication, or the communicator, works hand in glove with HR to manage it in a way that serves the interests of the organization and the employees. So making those standards visible and explaining what the rules are for this kind of thing: you would think it’s pretty common sense to most people, but is it not true that, like many things in organizational life, something like this probably isn’t set down well in many organizations?

    Shel Holtz: It’s probably not set down well for these kinds of situations even before AI. Where I work, we go through annual workplace harassment training because we are adamant that that’s not going to happen. It certainly doesn’t cover this stuff yet; I suspect it eventually will. But yeah, you’re right. I think that generally, many organizations out there don’t have explicit policies around harassment and what the response should be.

    I think the most insidious part of how deepfakes are affecting all of this is that they flip the burden of proof. A victim has to prove that something didn’t happen, and in the court of workplace opinion, that’s really hard to do. It creates a different kind of reputational harm.

    Neville Hobson: Yeah.

    Shel Holtz: Different from traditional harassment, that is: the kind we learn about in our training, with he said, she said type situations, where there’s a certain amount of ambiguity, and people are trying to weigh what was said, look at reputations and credibility, and make judgments based on the limited information available. With deepfakes, there’s evidence. I mean, it’s fabricated, but it’s evidence. And some people seeing that before they hear it’s a deepfake just might believe it and side with the creator of that thing.

    The article does make a really critical point though, and that’s that it’s rarely about one bad actor. The person who created this had a malicious intent, but people who share it, people who forward it along and comment on it and laugh about it — that spreads the harm and it makes the whole thing more complex and it creates complicity among the employees who are involved in this, even though they may think it’s innocent behavior that just mirrors what they do on public social media. And from a comms perspective, that means the crisis isn’t just about the perpetrator, right? It’s about organizational culture. If people are circulating this content, that tells you something about your workplace that needs to be addressed that’s bigger than that one individual case.

    Neville Hobson: Yeah, I agree. Absolutely. And that’s one of the dynamics the article highlights that I found most interesting — about how harm spreads socially through sharing, commentary, laughter, or quiet disengagement. Communicators need to help prevent normalization — this is not acceptable, not normal. They’re often closest to these informal channels and cultural signals. That gives communicators a unique opportunity, the article points out.

    For example, communicators can challenge the idea that no statement is the safest option when values are being tested. Help leaders understand that internal silence can legitimize behavior just as much as explicit approval and encourage timely, values-anchored communication that says, “this crosses a line,” even if the facts are still being established.

    It is really difficult, though. Separately, I’ve read examples where there’s a deepfake of a female employee that presents her in a highly inappropriate way. And yet it is so realistic, incredibly realistic, that everyone believes it’s true, and the denials don’t make much difference. That, I think, is another avenue where communicators especially need to be involved. HR certainly would be involved, because that’s the relationship issue. But communicators need to help make the statements: that this is not real, that it’s still being investigated, that we believe it’s not real. In other words, support the employee unless you’ve got evidence not to, or there’s some reason, legal perhaps, that you can’t say anything more. And challenge people who imply it’s genuine and carry that narrative forward with others in the organization.

    So it’s difficult. It doesn’t mean you’ve got to broadcast a lot of details. It means going back to reinforcing those standards in the organization, repeating what they are before harmful behavior becomes part of, as the article mentions, organizational folklore. It’s a tricky, tricky road to walk down.

    Shel Holtz: And it gets even trickier. There’s another layer of complexity to add to this for HR in particular. And that is an employee sharing one of these deepfakes on a personal text thread or on a personal account on a public social network — sharing it on Instagram, sharing it on Facebook — which might lead someone in the organization to say, “Well, that’s not a workplace issue. That’s something they did on their own private network.” But the deepfake involves a colleague at work, and we have to acknowledge that that becomes a workplace issue.

    Neville Hobson: Yeah, it actually highlights, Shel, that education is lacking if that takes place, I believe. So you’ve got to already have in place policies that explicitly name AI abuse. It’s a workplace harm issue, not a technical or a personal one. And it’s neither acceptable nor permitted for this to happen in the workplace; if it does, the perpetrators will be disciplined and face consequences.

    That in itself, though, isn’t enough. It requires more proactive education to address it: informal communication groups, for instance, to discuss the issue, not necessarily a particular example, and get everyone involved in discussing why it’s not a good thing. It may well surface opinions, depending on how trusted or open people feel, with some saying, “I disagree with this. I don’t think it is a workplace issue.” You get a dialogue going. But the company, the employer, has in its communicators the right people to take this forward, I think.

    Shel Holtz: But here’s another communication issue that isn’t really addressed in the article, where I think communication needs to be involved. The article outlines a framework for addressing this: stabilize, which is support and safety; contain, which is stop the spread; and investigate, and investigate broadly, not just the creator. I mean, who helped spread this thing around? Yeah, that’s pretty good crisis response advice.

    But what strikes me is the fact that containment is mentioned almost as a technical IT issue when it’s really a communication challenge. Because how do you preserve evidence without further circulating harmful content? This requires clear protocols that everybody needs to understand. So communicators should be involved in helping to develop those protocols, but also making sure that they spread through the organization and are aligned with the values and become part of the culture.

    Neville Hobson: Okay, so that kind of brings it round to the first thing I mentioned about what communicators can do before anything happens, and that’s to help the organization be explicit about standards. Name AI-enabled abuse clearly so there’s no ambiguity, and set out exactly what the organizational position is on something like this. That will probably mean updating the equivalent of the employee handbook, where these kinds of policies and procedures sit, so that no one’s in any doubt about where to find information on this. And then proactive communication about it. Yes, communicators have lots to address in today’s climate, and this is just one more thing, but I would argue it’s actually quite critical. They need to address this because, unaddressed, it’s easy to see how this would gather momentum.

    Shel Holtz: Yeah. So based on the article, you’ve already shared some of your recommendations for communicators. I think that updating the harassment policies with explicit deepfake examples is important. This is the recommendation I’m going to be making where I work. I think managers need to be trained on that first-hour response protocol. Managers, I think, are pretty poorly trained on this type of thing. And generic e-learning isn’t going to take care of it. So I think there needs to be specific training, particularly out in the field or out on the factory floor, where this is, I think, a little more likely to happen among people who are at that level of the org. I don’t think you’re going to see much of this manager to manager or VP to VP. So I think it’s more front line where you’re likely to see this — where somebody gets upset at somebody else and does a deepfake.

    So those managers need to be trained. I think you need to have those evidence-handling procedures established and IT completely on board. So that’s a role for communicators. Reviewing and strengthening the reporting routes — who gets told when something like this happens and how does it get elevated? And then what are the protocols for determining what to do about it? And include this scenario in your crisis response planning. It should be part of that larger package of crises that might emerge that you have identified as possible and make sure that this is one of them.

    Yeah, this article really ought to be required reading for every HR professional, every organizational leader, every communication leader, because, as we’ve been saying, right now most organizations aren’t prepared. What the article said is that the technology has outpaced our policies, our training, and our cultural norms. We’re in a gap period where harm is happening and institutions are scrambling to catch up. Time to stop scrambling; time to catch up and start doing this work.

    Neville Hobson: Yeah, I would agree. I think the final comment I’d make is kind of the core message that comes out of this whole thing that summarizes all of this. And this is from the employee point of view, it seems to me. So accept that AI has changed how misconduct happens, not what employees need. Fine, we accept that. Employees need confidence that if they are targeted, the organization will do the following: take it seriously, act quickly to contain harm, investigate fairly, and set a clear standard that using technology to degrade or coerce colleagues is serious misconduct. Those four things need to be in place, I believe.

    Shel Holtz: Yeah. And what the consequences are — you always have to remind people that there are consequences for these things. And that’ll be a 30 for this episode of For Immediate Release.

    The post FIR #500: When Harassment Policies Meet Deepfakes appeared first on FIR Podcast Network.

    9 February 2026, 9:29 pm