FIR #513: Why Communications Must Build the Narrative Code for the Agentic Age (34 minutes 10 seconds)
Neville and Shel dig into a provocative Harvard Business Review article that argues most marketing teams are structurally unprepared for the speed and scale that agentic AI now enables. The bottleneck, the authors contend, isn’t the technology; it’s the operating model. Neville and Shel connect the piece to conversations FIR has been having for the past year: AI as orchestration rather than automation, professionals shifting from supervisors of tasks to directors of systems, and 2026 increasingly framed as “the year of the agent.”
At the center of the Harvard piece is the idea of a “brand code” — a machine-readable knowledge system that lets specialized AI agents continuously create, adapt, test, and optimize marketing in real time. Shel argues that communications urgently needs its own equivalent: a “narrative code” containing executive voice profiles, message hierarchies, sensitive-topic guardrails, and escalation rules. Whoever builds it first, he warns, will own the agentic stack, and if marketing gets there first, comms will be stuck with a system never designed for crisis, controversy, or stakeholder complexity. The episode also includes some concrete examples and early thoughts on Hermes, Wispr Flow, and where human judgment still has to win.
Links from this episode:
- Redesigning Your Marketing Organization for the Agentic Age
- The Year of the Agent: What it means for the future of communications
- Google Summary: The Year of the Agent: What it means for the future of communications
- If you work in PR and you’re unsure how AI agents will help you, this should help.
The next monthly, long-form episode of FIR will drop on Monday, May 25.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel: Hi, everybody, and welcome to episode number 513 of For Immediate Release. I’m Shel Holtz.
Neville: I’m Neville Hobson. Over the past couple of years, we’ve heard countless conversations about how AI is changing marketing and communication. Most of those discussions tend to focus on tools — faster content creation, better personalization, workflow automation, synthetic media, analytics — all the things AI can supposedly do more quickly and at greater scale than humans. A new article in Harvard Business Review published last week takes the discussion somewhere much bigger.
Its argument is not simply that AI will improve marketing productivity. Its argument is that AI may fundamentally redesign how marketing organizations themselves operate. The article is called “Redesigning Your Marketing Organization for the Agentic Age,” and the authors argue that most marketing teams are structurally unprepared for the speed and scale AI now enables. The reasoning is interesting; we’ll look into this in a minute.
AI has already accelerated software engineering and product development dramatically. Products, updates, campaigns, and features are being developed and shipped much faster than before. But marketing organizations, they argue, are still largely built around sequential workflows, siloed teams, approval chains, meetings, handoffs, and coordination-heavy processes. So even when AI speeds up individual tasks, the organization itself still moves slowly.
In other words, the bottleneck isn’t necessarily the technology, it’s the operating model. What struck me reading this article is that in many ways it feels like the continuation of conversations we’ve already been having on FIR over the past year. About a year ago, Shel demonstrated some of the early agentic AI capabilities we were beginning to see emerge — systems that could move beyond simple chatbot interactions and actually take actions across workflows, tools, and platforms.
At the time, it felt experimental, slightly futuristic, and maybe just a glimpse of where things might be heading. Since then, we’ve repeatedly returned to related themes on the podcast: AI as orchestration rather than just automation, and managers becoming directors of systems rather than supervisors of tasks, to name but two. Recently, the wider communications industry has been framing 2026 as the year of the agent, a fundamental shift from generative AI, which creates content based on prompts, to agentic AI, which acts autonomously to achieve long-term goals. The rise of such autonomous agents requires a focus on agentic orchestration, with professionals acting as AI engineers who guide, manage, and audit these digital employees. As we discussed on this podcast last year, communication departments will adopt a hybrid structure where humans focus on high-level strategy and creativity while AI agents handle high-volume procedural communication tasks at machine speed.
We’re already seeing a marked impact on marketing and public relations. The Harvard piece explains how companies such as HubSpot and AWS have begun putting this model into practice. They say organizations are achieving measurable gains, with marketing materials adapted up to 98 times faster, unit costs reduced by 80%, and click-through rates increased up to 17 times. Research from BCG has demonstrated these benefits at scale.
Organizations embedding agentic AI into marketing workflows, the research has found, can achieve up to a threefold increase in ROI, campaign speed, and content volume. That’s why this Harvard article feels so interesting to me. It doesn’t contradict any earlier conversations; it complements them. It takes many of the ideas we’ve been discussing conceptually and places them inside a concrete organizational model. The authors propose something they call an agentic marketing organization — essentially a system where humans and AI agents work together continuously across multiple layers of activity.
At the center of this idea is what they describe as a brand code: a machine-readable knowledge system containing brand strategy, customer insights, messaging frameworks, business rules, governance structures, and operational guidance that both people and AI systems can understand and act upon. Once that foundation exists, specialized AI agents can continuously create, adapt, test, distribute, optimize, and report on marketing activity in real time. It’s a vision of marketing that starts to look less like a department and more like an operating system.
But what really caught my attention wasn’t the technology itself so much; it was the shift in the role of the marketer. Because beneath all the platform architecture and workflow diagrams is a much deeper question: if AI increasingly handles execution, what becomes the real value of marketers and communicators?
The article argues that value shifts away from production and toward judgment — setting intent, evaluating outputs, interpreting signals, shaping governance, and guiding how the system evolves. And that raises some fascinating questions for communicators. But first, Shel, your demo of those early agentic capabilities was about a year ago now. As I mentioned earlier, it felt experimental and slightly futuristic then. So what’s changed since then?
Shel: It feels like ancient history now. If I were to look at that demo today, I’d probably shake my head and say, “my God, that’s pretty primitive.” The way it worked was, it took a screenshot of every site it visited and then acted on the screenshot, so it was a very slow and tedious process. In the video that I shared, I edited out all of the waiting time, because otherwise it showed you every step. And those days are long gone.
That was clearly a demo. I don’t remember which of the AI models offered that — I think it was Anthropic — but it was just tedious and not all that functional. It did what it was supposed to do in the end, which was to create a spreadsheet with the information I’d asked for. It was some open-source spreadsheet that it used.
I ran a similar exercise just last week using Claude Cowork. And this was for a piece somebody in our sustainability department wrote. It was about two projects that had achieved world-first certifications for zero waste, which is kind of a big deal in the construction industry. It’s one of the biggest contributors to landfills and the like, the industry is.
So I’m looking to place this article. What I did was tell Claude Cowork that I wanted four subagents working: one to look at construction and AEC publications — that’s architecture, engineering, and construction; AEC is the category for the industry. Another was going to look at sustainability publications, a third at mainstream media, and a fourth at podcasts where the authors of this report might be invited for an interview.
I said, what I want you to do is find the publications and podcasts that, based on their previous content, are most likely to be interested in something like this, and then create a spreadsheet with the name of the outlet, divided, of course, into those categories: AEC, sustainability-focused publications, mainstream media, and podcasts. But I also wanted the URL, and I wanted the name of the appropriate person to pitch the article to. And then, based on what that person has written — that particular reporter or editor — I wanted a pitch that was personalized to that person.
And I came back in about half an hour, and there was a spreadsheet ready to go. And I had started acting on it. I don’t copy and paste the pitches; I go and take a look at that reporter’s writing and review the pitch and then make some tweaks to it. But my God, can you imagine how much time that would have taken for me to go out and do this on my own by way of research? That would have been hours and hours.
And instead the agents went out and did it, and then Cowork assembled all that information into a spreadsheet. I was doing other stuff while it was doing that. I wasn’t sitting and watching, because there frankly wasn’t that much to watch. I mean, you could watch the agent tell you, “now I’m going to go look at this.” But, you know, that’s kind of boring. Let it do its thing.
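For listeners who want to see the shape of this fan-out pattern, here is a minimal sketch in Python. To be clear, this is not Claude Cowork’s actual API: run_subagent() is a stand-in stub for whatever agent framework you use, and every function and field name is illustrative.

```python
# Hypothetical sketch of the subagent fan-out Shel describes: four research
# agents run in parallel, and their findings are merged into one spreadsheet.
# run_subagent() is a stand-in for a real agent-framework call (this is NOT
# Claude Cowork's API); replace its body with your framework's invocation.
import csv
from concurrent.futures import ThreadPoolExecutor

CATEGORIES = {
    "AEC publications": "architecture, engineering, and construction trade outlets",
    "Sustainability publications": "outlets covering zero-waste and green building",
    "Mainstream media": "general business and news outlets",
    "Podcasts": "shows that interview authors on construction sustainability",
}

BRIEF = (
    "Find outlets likely to be interested in a story about two construction "
    "projects earning world-first zero-waste certifications. For each outlet, "
    "return: name, URL, the right editor or host to pitch, and a pitch "
    "personalized to that person's recent work."
)

def run_subagent(category: str, scope: str) -> list[dict]:
    """Stand-in for one research subagent. A real version would call your
    agent framework / LLM with BRIEF plus this category's scope and return
    structured rows."""
    # Placeholder row so the sketch runs end to end.
    return [{"category": category, "outlet": "TBD", "url": "TBD",
             "contact": "TBD", "pitch": "TBD"}]

def build_media_list(path: str = "media_list.csv") -> None:
    # Fan out: one subagent per category, all running concurrently.
    with ThreadPoolExecutor(max_workers=len(CATEGORIES)) as pool:
        results = list(pool.map(lambda kv: run_subagent(*kv), CATEGORIES.items()))
    rows = [row for agent_rows in results for row in agent_rows]
    # Merge everything into a single spreadsheet, grouped by category.
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["category", "outlet", "url",
                                               "contact", "pitch"])
        writer.writeheader()
        writer.writerows(sorted(rows, key=lambda r: r["category"]))

if __name__ == "__main__":
    build_media_list()
```

The design point is the one Shel makes in the conversation: the human reviews and tweaks every pitch before it goes anywhere; the agents only assemble the raw research.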
Neville: Yeah. A question I have related to this, to put it into one practical area: people might think of this in the context of the interaction you have with prompts, the old-fashioned way of doing things that is still prevalent. The agents went off and did their thing, you came back to what they produced, and it saved tons of time. So how did you gain confidence that it was accurate, that there were no hallucinations, no errors? Or is that no longer the issue with this kind of development?
Shel: I believe that hallucinations would still be an issue. It’s still a model at some level doing this work. I mean, it’s Claude with Claude Cowork. I did install Hermes over the weekend. We’ll talk about that in a bit, but it’s an agent platform, an agent framework, and you create the agents to do things.
For example, I created one over the weekend that I set up to be a weekly job, and it’s going to go out and look at construction industry news to find things based on our areas of expertise where I work, where we have subject matter experts and thought leaders, to find the top three articles that are ripe for newsjacking. If you remember David Meerman Scott’s newsjacking — things where we can get some stuff out there quickly.
Neville: Yeah.
Shel: And take advantage of the fact that this is something that people are looking at and gain some traction over it. So every Monday at eight, it’s going to run this job, and by 8:30, 8:45, it’s going to give me the results. And all of this is through Telegram, or WhatsApp, or whatever app you choose to use to interact with the bot. It still starts with a prompt. The difference is that you’re not prompting a question in order to get an answer; you are telling it what task to perform.
And in the case of the one that I set up on Hermes, it’s now a weekly task. And the interesting thing about Hermes is that it learns as it goes. It continually self-improves based on the more it knows about you and the kinds of tasks that you’re asking it to perform. So I’m looking forward to seeing how that goes. But so far, I just have the one agent running there. But it’s still a prompt at the end of the day.
And in fact, I used — I think it was Gemini — to help me craft the prompt to get the best results I could. I said, here’s the list of requirements, turn it into the best prompt that Hermes will understand and act on most effectively. And it did that. It did a great job. And I’m very satisfied with the results so far. I ran one test of it, and I liked it.
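The recurring-job pattern Shel describes can be sketched generically. This is not Hermes’s actual API: the agent call and the delivery function below are illustrative stubs, and the scheduling loop simply shows the Monday-at-eight shape of the task.

```python
# Hypothetical sketch of the recurring agent task Shel set up on Hermes:
# a weekly job that scans industry news for newsjacking candidates and sends
# the top three to a chat app. find_newsjacking_candidates() and
# send_to_chat_app() are illustrative stubs, not Hermes APIs.
import time
from datetime import datetime

TASK_PROMPT = (
    "Scan this week's construction industry news. Against our areas of "
    "expertise, return the top three articles ripe for newsjacking, with a "
    "one-line rationale for each."
)

def find_newsjacking_candidates(prompt: str) -> list[str]:
    """Stand-in for the agent call; a real version would invoke your agent
    framework with the prompt and return its ranked findings."""
    return ["Article 1: rationale", "Article 2: rationale", "Article 3: rationale"]

def send_to_chat_app(lines: list[str]) -> None:
    """Stand-in for delivery via a bot (Telegram, WhatsApp, Slack...)."""
    print("\n".join(lines))

def run_weekly(day: int = 0, hour: int = 8) -> None:
    """Fire the job every Monday at 8:00 (weekday 0 = Monday), then wait."""
    while True:
        now = datetime.now()
        if now.weekday() == day and now.hour == hour:
            send_to_chat_app(find_newsjacking_candidates(TASK_PROMPT))
            time.sleep(3600)  # skip past the trigger hour so it fires once
        time.sleep(60)
```

On a real agent platform the scheduling and chat delivery are handled for you; the sketch just shows that the prompt defines a task, not a question.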
Neville: Yeah. So Claude Cowork is kind of at the heart of this. I’m experimenting with Claude Cowork myself — with Claude generally, and Cowork somewhat. Nothing like what you’re doing, I hasten to add. But one of the things that impresses me about Claude is the way you tell it things about you — who you are, what you’re doing, your preferences for how it conducts what you’re asking it to do. Unlike ChatGPT, for instance, where you have to include in a prompt stuff you’ve already told it previously, because it doesn’t remember that in the same way, Claude is different.
So, about your setup: when you set this up, did it require that level of preparation, which is probably desirable? Or was there anything special you had to do that was outside what you would normally do with Claude Cowork?
Shel: Well, for the byline piece that I was looking to pitch, that I set the subagents out to do their thing in Cowork, I did in the prompt explain what my goal was and what the organization was. I had it look at our company website to get a good sense of who we are and what our areas of specialization are. I gave it some additional information.
But then something I do with all of these now — not every prompt, if I’m just in Claude or ChatGPT, but especially with the agents, with deep research projects and things like that — I’ll say, “ask me questions before you go out and do this.” And it usually asks some very salient questions. It’s very good at deducing what it doesn’t know. And the answers factor into the results you get, which is really interesting to me — that it can, if you ask it to, understand where there are gaps in the prompt that it could use this information in order to deliver really excellent and pertinent results.
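That “ask me questions first” technique is easy to bake into any task prompt. A tiny sketch, with entirely illustrative wording:

```python
# The "ask me questions before you act" pattern as a reusable prompt preamble.
# The brief passed in is whatever task you're delegating; wording is illustrative.
PREFLIGHT = (
    "Before you start, list the questions whose answers would most improve "
    "your results: gaps in my brief, missing context about the organization, "
    "or constraints you need. Wait for my answers before doing anything else."
)

def build_task_prompt(project_brief: str) -> str:
    """Prepend the pre-flight step so the agent surfaces unknowns first."""
    return f"{project_brief}\n\n{PREFLIGHT}"

print(build_task_prompt(
    "Research outlets likely to cover our zero-waste certification story."
))
```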
Neville: Got it. So, thinking about our listeners hearing how you’ve explained all of this — is it credible and within the reach of anyone who wants to do this? Or do you need some kind of mental preparedness or technical knowledge? Could anyone just dive in and start something?
Shel: Well, I don’t know about diving in. With Hermes, for example, I watched a couple of YouTube videos. I watched one that actually walked me step by step through the installation process and then had a whole section on use cases. I’ve watched more. There’s one on 99 use cases for Hermes that I watched, which was pretty good. So it helps you get in that mindset. But in terms of, can anybody do this?
In the world of communications, anybody better be able to do this, because you’re not going to be sent out to look for these sites and assemble a spreadsheet anymore. You need to be able to orchestrate these agents. And that means knowing how to prompt it to get the results that you want. And that’s different, again, from prompting ChatGPT for an answer to a question, right? You are giving it a task, and it could be a recurring task that somebody on your team does.
Now, in communications, I still don’t see this replacing a communicator, because every communicator is going to have the human-only or human-required elements of the job. I cannot see one of these conducting, say, an employee focus group. There’s so much that we do. I mean, you know, in public relations, the word “relations” always stands out to me, and maintaining those relations is not something a bot can do.
But in terms of what that Harvard Business Review article was talking about, you can swap marketing for communications. I think it’s even more true in comms. Comms workflows are more coordination-heavy than marketing’s. We have legal, we have HR, we have the C-suite. We have to make sure everything’s consistent with the brand and maybe get approvals from the brand’s representatives — they’re the owners of the channels we have to deal with.
If marketing needs a brand code — and this was a concept I really liked in that article — communications needs a narrative code. You know, a machine-readable positioning, machine-readable executive voice profiles, message hierarchies, sensitive-topic guardrails, rules for escalating things that emerge that need to be taken up a step in the hierarchy or maybe up to the C-suite or the CEO. I don’t know anybody who’s built a narrative code.
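As a thought experiment, here is a minimal sketch of what a machine-readable narrative code might look like, expressed as Python data structures. Every field name is an assumption drawn from the elements Shel lists; as he says, nobody has built one, so no such schema exists.

```python
# Hypothetical sketch of a "narrative code": the comms counterpart to the HBR
# "brand code". All field names are illustrative; no such standard exists yet.
from dataclasses import dataclass, field

@dataclass
class VoiceProfile:
    executive: str
    tone: str                      # e.g. "plainspoken, optimistic, concrete"
    banned_phrases: list[str] = field(default_factory=list)

@dataclass
class EscalationRule:
    trigger: str                   # condition an agent can evaluate
    route_to: str                  # who gets pulled in when it fires

@dataclass
class NarrativeCode:
    positioning: str                     # machine-readable positioning
    message_hierarchy: list[str]         # primary message first
    voice_profiles: list[VoiceProfile]
    sensitive_topics: list[str]          # agents must not improvise here
    escalation_rules: list[EscalationRule]

    def requires_human(self, topic: str) -> bool:
        """Guardrail check an agent would run before drafting anything."""
        return any(t.lower() in topic.lower() for t in self.sensitive_topics)

# Example instance an agent platform could load at startup.
code = NarrativeCode(
    positioning="We build the industry's most sustainable projects.",
    message_hierarchy=["Safety first", "Sustainability leadership", "Innovation"],
    voice_profiles=[VoiceProfile("CEO", "direct, warm", ["synergy"])],
    sensitive_topics=["layoffs", "litigation", "safety incidents"],
    escalation_rules=[EscalationRule("negative press mention of CEO",
                                     "VP Communications, then CEO")],
)
assert code.requires_human("Questions about pending litigation")
```

The crisis-readiness point is why requires_human() matters: the schema encodes not just what agents may say, but where they must stop and hand off.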
Whoever builds this first in your organization, by the way, is going to end up owning the agentic stack. If marketing builds it first, we in communications are going to inherit a system that wasn’t designed for crisis communication, wasn’t designed for controversy or reputation damage or stakeholder complexity — it was built for marketing. And that’s the one we’re going to end up having to work with. You probably remember, Neville, in the early days of social media, Richard Edelman was out there beating the drum that PR needed to own social media before marketing and advertising got their hands on it, because they would turn it into something inauthentic, right? It’s the same thing here.
Neville: Yeah. Yeah.
Shel: I think we in comms are going to have to build out the narrative code and let marketing take advantage of the agentic stack that we’ve built. But we need to be in the room when those decisions are being made.
Neville: So that’s another challenge for communicators, and I can see it. The overall focus of the Harvard piece, as I mentioned in the introduction, is on the organization as a whole. And there are examples where that’s underway — I quoted a couple, and then there’s the BCG research, which I found quite interesting. But restructuring at an organizational level is still a way off for most companies, I would say. The individual actions, though, such as the experimentation you’re doing, are right in front of us, literally right now.
And it prompted a thought, looking at this overall picture, about some assumptions in the Harvard piece that are worth examining for a minute. The article assumes that strategic judgment remains human while execution becomes agentic. Okay, then; though history suggests automation rarely stops neatly where people would like it to, or where they would expect it to.
So perhaps a relevant question to address in this context is: if AI systems, agentic ones included, increasingly assist with strategy too, which is what they’ll be doing, where exactly does human value migrate? That’s a broad question, but for communicators specifically, how would we address it?
Shel: I think, first of all, if you’re going to look to the agentic system to assist with the development of the strategy, I would sit down and map out a game plan for that. I wouldn’t just say, “hey, you know the company I work for, come up with a strategy for us.” I would say, first of all, what is this strategy…
Neville: Ha ha ha.
Shel: …going to be designed to achieve? What do we know about the direction the company’s going and decisions that have been made? I would certainly use it to go out and say, research the marketplace and research our competitors and identify, to the extent that you can, what their strategies are. I would develop the strategy myself, but I would give it to the AI to stress-test.
And by the way, some of this is agentic and some of it is just querying a chatbot. Let’s take crisis communication as an example. No leader in a boardroom is going to accept an answer from an AI system as the sole reason they’re being told something they don’t want to hear, and the agentic stack only amplifies that. If we go in as the crisis counselor and say, “look, I know you’re not going to like this. Here’s my judgment, and I’ve got this information that came from the weekly analysis of sentiment in the marketplace,” then AI can bolster your argument. It can’t replace your argument. You’re going to walk into that boardroom as a human and make a case.
Same thing, maybe, with focus groups. When passive signals in social media and message boards get gamed, sitting in a room with 10 employees becomes the ground truth that the dashboards — the agents out there analyzing sentiment — get checked against. So when a dashboard says morale is great and the focus group says it isn’t, I’m going to pay attention to the focus group. I’m going to pay attention to those 10 people in the room before I listen to an agent that says, “well, we’ve been analyzing all the sentiment in Slack and email, and everything is just dandy.”
So I think it’s the same with strategy. I think I would never abdicate strategy…
Neville: Mm.
Shel: …but I could certainly develop it faster and be more confident in its viability by using agents and chatbots.
Neville: Yeah, I agree. And it makes me think of what’s coming — which is already here, in ways that lead to even greater integration, I suppose. I’m thinking of what you said at the beginning of this segment: you don’t hand the whole thing over to the AI and say, “hey, go and develop a strategy.” You would do…
Shel: And you know there are people who are, right?
Neville: Yeah, they will. They will. But it seems to me that this is really, in a sense, the fulfillment of an expectation — a promise — from artificial intelligence tools like this, that you would have a conversation with it in the same way you would with a human being who might be an external consultant or a colleague who’s a subject matter expert or whoever it might be, that you would explore with that individual: we’re developing a strategy for next year, let’s look at how we’re going to do this.
You set the framework for how you might start that conversation with your AI assistant. And as you said, this is not specifically agentic; it’s the whole spectrum of what the tools are. And you set it on course to go and research this, which is probably what an agentic tool will do. That, to me, is the excitement of where this is going — that you can get to that stage, which I think would address some of the skepticism, and indeed the alarm bells rung by some in organizations when they see unfettered technology going all over the place or being asked to do stuff. This approach, though, makes it credible and gives it legs.
Which leads me, I guess, to the final question here. As you’ve explained, this is light years ahead of the demo you gave a year ago, which gave a strong signal of what’s possible and where this could go. We’ve seen that fulfilled. It is eminently possible, and you don’t need to be a rocket scientist, as you might have expected you would a year ago. This is doable. And the more people experiment with it in simple ways, like the real-world example you’ve outlined, the more they’ll want to do it.
So the question, then, is: okay, a year on, you’ve explained something you’re doing that delivers value quite readily every Monday morning, let’s say. What’s next, in terms of the developing technology and the developing value people will get from it, that would accelerate its uptake? How do you see it?
Shel: I think that the next thing we’re going to see is an evaluation of every role and where an agent will fit. This is something we went through a couple of years ago. Ethan Mollick was talking about it in his book, Co-Intelligence, before we were even talking about agents — talking about inviting AI to the table and figuring out where you could work it into your workflows. But it was still the chatbot. It was still the, “I’m going to ask you a question and you’re going to deliver some kind of answer.”
I think we need to do that again and look at agents. What tasks are we performing, and which ones can we hand off to agents? And I think there are probably roles where this is going to be even easier to do, where you’re going to see more opportunities than in communications. I mean, you know, engineering, for example, I think is wide open for this sort of thing.
So I think that’s what’s next. As we hand off certain tasks — I’m going to call them mundane tasks, because this is not the high-level strategy and the human-touch work that is so important in so many jobs — and something now takes an hour instead of a week, what does that do to the rest of our workflows? What does that do to our organizational structure?
One of the things that I was reading over the weekend was the expectation that middle managers are going to be a thing of the past, because what do they do? They handle the flow of information up and down between the people who report to them and the people they report to. They handle a lot of mundane tasks that might now be handed off to an agent. And according to someone noteworthy — I don’t remember who for sure; it might’ve been Dario Amodei at Anthropic — middle managers can, by and large, be replaced by agents.
So what does that do to organizational structure? It certainly flattens it. But now, for those executives who have a lot of people reporting to them, what part of that reporting structure can be handed off to an agent? So I think this is a cascading situation: everything we do leads to a reconsideration of something, which leads to asking what else we can do with the agents, which leads to further reconfiguration.
I think that’s what we’re looking at. And I don’t think it’s going to happen overnight, because, as you alluded to, the technology may be moving fast, but organizations tend not to, particularly when it comes to issues of structure and governance.
Neville: I think this is so exciting, to be frank — the idea of the changes we can see coming, which will be painful for many. But isn’t structural change a constant in our lives? It’s something we should embrace, emotionally and logically, knowing that we can control this. And I don’t mean control the tech — we can’t do that. But we can control the risks and the benefits of something like this by not merely reacting to what’s coming — by, in a sense, embracing it, experimenting with it, and learning it. And as you said, if we don’t do this, the marketing guys will. And we can’t have that. I think…
Shel: And then we’re stuck with theirs.
Neville: I think it’s something to really pay attention to. So this has been a useful, interesting discussion, Shel, getting your thoughts on this in particular. So yeah, I think we’ll come back to this conversation unquestionably at some point in the future.
Shel: No doubt, as we see developments. In fact, as I say, I just started working with Hermes over the weekend, and it was an eye-opener, and I expect, as I work with it more, I’ll have more thoughts about it and my thinking will evolve. I should point out that I did install this on a personal virtual server, not on a company computer. I’m not taking that kind of risk. And it’s my personal account.
One other thing I thought I’d mention — you talked about the idea of having a conversation with the AI, and I think that’s becoming more of a focus. And I’ll give you two quick examples. One I already mentioned is with Hermes: you don’t go to a terminal and engage with it or go to its website. You do this through WhatsApp or Slack or, in my case, I’m using Telegram — just like I’d be having a conversation with a person in that same app.
But on, I think it was, Thursday, I did a half-day webinar offered by the Marketing AI Institute, Paul Roetzer’s organization, on AI for writing. It was very interesting. Chris Penn was among the speakers; he did a great job, as always. But one of the folks there talked about having the conversation with AI for real — doing it with your voice, not with your keyboard. And she talked about a tool called Wispr Flow, which I haven’t used yet, though I have installed it on my personal computer, my laptop, and my phone. It’s pretty cool. In any tool you’re using, you just click it and talk. And it doesn’t go directly into the chat box; it interprets it…
Neville: Yeah, I’ve been using it. Yeah. Yeah.
Shel: …and then puts the best prompt based on what you just said into the box. And that’s what you use to prompt the model. And I’m looking forward to giving that a try. And it’s called Wispr Flow, by the way, because if you’re in the office in an open-space format and you don’t want to disturb the people next to you, it understands what you’re saying when you whisper to it.
Neville: Yeah, it is interesting. I’ve got a hurdle to jump with it, though, which is getting accustomed to speaking what I want done and how, rather than typing it. I haven’t got over that hurdle yet, and that’s limiting my use of it. I keep reverting to typing because I’m more comfortable with it, I can type fast, and all that kind of stuff. But in reality, this is faster than typing. And it is…
Shel: Yeah, same.
Neville: I recognize the benefits of it. I can see this. Not everyone will be used to it. This is not dissimilar to the argument we could have about voice notes. I know people who love voice notes; I don’t. And I know more people who don’t like them. It could be a generational thing, I think to myself. But it’s part of the communication landscape, so you need to get accustomed to these developments.
Shel: Yeah. And I hear about voice notes being preferred by some reporters who are being pitched, because it’s evidence that it wasn’t AI slop that’s pitching them.
Neville: Yeah, yeah, yeah. Yep, yep, yep.
Shel: And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #513: Why Communications Must Build the Narrative Code for the Agentic Age appeared first on FIR Podcast Network.
FIR #512: The AI Shift in Executive Decision-Making (11 May 2026, 7:41 pm; 31 minutes 28 seconds)
While there’s no evidence that business leaders are outsourcing the most important decisions to AI, there are reports that many executives are relying on AI to make many — in fact, most — of their decisions. The implications for communications could be huge.
Links from this episode:
- AI Is Changing More Than Work, It’s Rewiring Executive Decision-Making
- Inside the C-suite: How AI is quietly reshaping executive decisions
- AI and the future of human decision making
- C-Suite Executives Dominate AI Decision-Making as Strategy Becomes Priority
- Decision-Making by Consensus Doesn’t Work in the AI Era
- How AI Is Transforming the Way Executives Lead
- Leadership at a Turning Point: How AI Is Shaping Executive Decision-Making
- Can AI Make Executive Decisions?
The next monthly, long-form episode of FIR will drop on Monday, May 25.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville: Hi everybody, and welcome to episode 512 of For Immediate Release. I’m Neville Hobson.
Shel: And I’m Shel Holtz. The inspiration for this week’s report came from a post Brian Solis wrote recently. In it, he argued that AI isn’t just changing work — it’s rewiring how executives make decisions. Once Brian put that in my head, the trend started standing out in other things I was seeing. I’ll summarize the numbers and what they mean for communicators right after this.
The numbers Brian pulled together are honestly alarming. A Confluent study of UK private sector leaders found that 62% of executives now use AI to make the majority of their decisions. That’s not some — it’s the majority. 70% say they second-guess themselves when AI disagrees with them, and 46% say they rely on AI more than their own colleagues.
On the U.S. side, SAP’s research found that 44% of C-suite executives would reverse a decision they had already planned to make based on AI input. 74% place more confidence in AI advice than in the advice they get from family and friends. Meanwhile, McKinsey reports that 92% of companies plan to increase their AI investment over the next three years, but only 1% — 1 percent — describe themselves as mature in deployment. The money to pay for AI and a sort of blind trust in its abilities are racing ahead of the internal competence to use it. Now, I want to be clear before I go on. I’m not anti-AI, Neville — you know this. Anyone who listens to the show knows I’ve been beating the drum for AI as a tool for communicators and for business in general for a long time.
AI as a thinking partner, a research assistant, a stress-tester for ideas — that’s enormously valuable. But there’s a meaningful difference between using AI to inform a decision and using AI to make the decision. And Brian puts this well: AI is becoming the new executive influencer. The problem is that it hasn’t earned that role, at least not yet.
So let’s talk about what this means for those of us in communication, because the implications are everywhere. Start with employee trust. The implicit deal between an organization and its workforce is that the people at the top got there because they have judgment and experience and pattern recognition that the rest of us don’t have — or at least they’ve been able to employ it really well and get noticed by the people who promote you into those leadership positions.
That’s the story leadership tells, and it’s the story employees buy into. Now imagine the all-hands where the CEO announces a major restructuring, and somewhere in the Q&A, or worse, on Blind or Reddit a week later, it comes out that the decision was essentially handed to a chatbot. What happens to confidence in leadership? What happens to engagement? What happens to the social contract that says, follow me because I know where we’re going?
You can’t credibly ask people to bring their full selves to work, as they say, while you’re outsourcing your own judgment to a language model. Now extend that to external stakeholders — investors, customers, regulators, the board. They’re paying, and in a lot of cases they’re paying a lot, for executive judgment. If a strategic call goes sideways — and you know that happens — the explanation that the AI suggested it isn’t going to land well.
It’s going to sound like an abdication, because it is an abdication. And from a crisis communication standpoint, “we trusted the algorithm” is one of the worst defenses I can imagine. I don’t expect that anybody’s going to say that, but it doesn’t mean it’s not going to come out. Just ask anyone who’s worked an aviation incident, a financial services failure, or a healthcare AI misfire. Imagine the reaction when the afflicted stakeholder hears, whether from the leader directly or through a third party, “Well, that’s the decision the AI told me to make.”
And there’s a third implication that I think communicators need to surface inside our organizations: the erosion of dissent. I find this particularly interesting and disturbing.
Confluent found that 65% of leaders say decision-making has become less collaborative since adopting AI. The Harvard Business Review just ran a piece arguing that consensus is dead in the AI era. That may be — but debate isn’t consensus. Debate is the friction that exposes bad assumptions. It’s what didn’t happen at that auto manufacturer — I think it was Volkswagen with their emissions standards: people didn’t have the psychological safety to dissent from the decisions being made. And in this case, in some organizations the pushback isn’t even surviving at the leadership level. If AI is pushing aside the colleague who would have pushed back, whatever process your organization had for dissent just stops functioning. And when dissent dies, so does the early warning system communicators rely on to spot reputational risks before they get out of control.
So what do we do? A few things. We push for governance — and if you already have a governance model, push to revisit it. Your governance needs clear declarations of which decisions AI informs versus which ones it actually makes. We coach our executives to talk publicly about how they actually use AI, with appropriate humility, before the question gets asked for them.
We build the internal narrative that human accountability is non-negotiable, no matter how good the model gets. And we keep reminding leadership that machine confidence isn’t the same as strategic clarity. Brian’s right: AI is a test of leadership. It’s also, increasingly, a test of communication. Neville?
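One way to make that governance declaration concrete is a simple decision-rights register. This is a hypothetical sketch; the decision types, defaults, and names are invented for illustration, not drawn from any published governance model.

```python
# Hypothetical decision-rights register: declares, per decision type, whether
# AI may only inform the decision or may make it outright. All decision types
# and rules here are illustrative.
from enum import Enum

class AIRole(Enum):
    INFORM = "AI may research, summarize, and stress-test options"
    DECIDE = "AI may make and execute the decision"

DECISION_RIGHTS = {
    "media list research":        AIRole.DECIDE,
    "meeting scheduling":         AIRole.DECIDE,
    "campaign message selection": AIRole.INFORM,
    "crisis response":            AIRole.INFORM,
    "restructuring":              AIRole.INFORM,
}

def ai_may_decide(decision_type: str) -> bool:
    """Default to INFORM: anything not explicitly declared stays human-owned."""
    return DECISION_RIGHTS.get(decision_type, AIRole.INFORM) is AIRole.DECIDE

assert ai_may_decide("media list research")
assert not ai_may_decide("restructuring")
assert not ai_may_decide("an undeclared decision")
```

The design choice worth noting is the default: an undeclared decision falls back to INFORM, so human accountability is the rule and delegation the documented exception.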
Neville: Well, just to make my position clear on this, too — I’ve been a drum-beater for AI as a research assistant, as a useful tool, since GPT first came out. The initial, almost hysterical enthusiasm was tempered over time, but I use the tool every single day in what I do, whether for work or for pleasure. So it’s something I believe in strongly. But I always have this thought in the back of my mind: I don’t blindly accept anything the AI assistant tells me. If I’m researching something — say I’m going to make a recommendation, or I’m writing a report, or even something relatively simple like an article for the blog — and I want to say this but it’s telling me that, that’s a simple decision: I’m either going to follow it or not. Typically when that happens, I’ll ask it questions to pursue that angle further.
But this is something else, what Brian writes about. And The Register — I’ve read their piece — tempers it with a bit of hysteria, it seems. It’s a very alarmist piece, or argument, you could say. The survey The Register reports on says 62% of leaders of private sector companies — and according to The Register, that’s owners, founders, CEOs, managing directors, C-level leaders of various types of companies; they didn’t say what sizes — use AI to make the majority of their decisions, which leads to some of the alarm bells you outlined. What if it gets out that the AI made a decision when something goes south? You could flip that: what happens if it gets out that an amazing decision that led to the company being massively successful was actually made by an AI?
I think it’s inevitable you’d have that sort of focus on it, alongside more sane arguments, perhaps. You could argue, well, that CEO is pretty smart to have used an AI to help him do that — as opposed to the other side, which is, gee, we’ve got to fire this guy, he used an AI and it went wrong. So you’ve got to put some balance there. Also, I think you mentioned this earlier, and I agree, that there are two angles to every question we might ask about this: one is internal, within an organization, and the other is external.
And one pragmatic question came to mind: if a leader changes a decision he or she has made because the AI assistant suggests something different, who actually owns that decision in the end? In fact, whether the leader changes their mind or not — if the AI said, “I recommend you should do this, and here are the 10 reasons to support that idea,” different from what the leader was going to do, and he or she made the changed decision based on that — who actually owns that decision? Or, as I asked myself, is that really the most important question to be answered? It’s still a natural one to arise. And yes, we could run through a long list of the implications in this scenario for the employees of the organization, other stakeholders, and the external audiences. But I have to say Brian’s arguments are well made. He sets the scene — executives are relying heavily on AI — and from there it goes more into the alarm function.
Judgment being reshaped — the implication is that the judgment exercised by a leader is so flaky it can be reshaped by the AI assistant; in other words, that individual is willing to let that happen. I wonder whether this is all part of the speed with which people now expect decisions to be made. Indeed, something I was doing this weekend — we’re on a holiday weekend here, by the way, so I had time — had nothing to do with work. It was a personal matter that required analyzing a document with a lot of financial information in it. I asked my AI assistant, in this case Claude, as part of my experiment with Claude, to summarize it and pinpoint the key aspects. It did that in about 20 seconds. And that was enough for me to know what questions to ask it next, to develop it the way I wanted, rather than starting from scratch. So there’s the benefit.
I think treating AI like a trusted advisor makes a lot of sense, and I’m trying to balance that thought with the alarmist approach — you know, this is a bad thing, all these terrible things are going to happen, and it will all come out. How does that gel with treating AI like a trusted advisor? Although I agree with your point that it hasn’t earned the trust, in the context of this conversation. So does it mean leaders are willing to override their own decisions or instincts based on AI input? Well, according to The Register, 62% have said they are, I suppose. If that’s true, I think we’re in trouble already, before this gets any further.
So the real challenge — I think you’ll agree with this, Shel — is not the tech at all. It’s the leadership aspect, the human behavioral aspect, as is so often the case. When people talk about the relationship between the human and the AI, they just talk about the tech — but it’s not that; it’s a human issue. Cut through all the alarm bells and pluck out something that to me is extremely important, and that doesn’t get much airtime in Brian’s report at least: isn’t this really about the whole point of judgment? Someone in a leadership position in an organization is in that position partly because he or she is very good at exercising judgment in the work they do and the decisions they make. Are we saying that judgment is so fragile that an AI could overturn all of that in an instant?
I guess my point is that I’m noting this. I listened to what you said. I haven’t read all the surveys you mentioned, or the other reports — the Harvard Business Review piece, for instance — but I will. I find this literally a worst-case scenario being pitched as already upon us, based on The Register, which, by the way, has a — let’s call it interesting — reputation over the years for some of its reporting, although this particular report is very factual and actually quite well written. So what do we make of this? Should we be worried? I don’t think we should, if we see this as simply something to note and watch as communicators — in the role you’ve got, ensuring that the CEO isn’t going to have his or her judgment completely overwhelmed by an AI. I frankly find the idea of that ridiculous — not implying or even saying that this is the norm. It’s a result of surveys, and there’s other research also supporting some of this, I think.
But we should put it in perspective: this is, I guess, an inevitable discussion point emerging at this stage in the development of AI in organizations. We’ve reported recently on this podcast how leaders are taking ownership of AI deployments in their organizations. That doesn’t mean every company is doing this, because they aren’t. And we’re seeing other reporting we’ve commented on — that employees and other stakeholders are unhappy with what’s happening with AI rollouts in their organizations. So you’ve got all these mixed messages coming left, right and center, and now this. It doesn’t mean we should — oh my goodness — stop doing this, or have a meeting with the CEO and say, “What are you doing?” No, I don’t think so. But we need to note it nevertheless. I don’t believe this is something we should get terribly alarmed about, to be honest, as long as we apply our own common sense to observing what’s going on and making sure, with the CEOs and leadership teams we support as communicators, that this isn’t happening.
Shel: Well, I don’t think this is the most important issue we’re facing with AI, but I do think it’s a time to worry. Now, I don’t imagine that the CEOs leading the world’s biggest companies — the Jamie Dimons, the Josh Domaros, the Tim Cooks of the world — are using AI to make important decisions. And you have to wonder, because I don’t think the survey asked what types of decisions these CEOs are making. Are they the game-changing decisions, the most important decisions they have to make, or are they lower-level decisions? We talk about AI taking all that drudge work off the table. Are they letting the AI make decisions associated with that kind of work?
But as people — and CEOs are people — get accustomed to letting AI make decisions, it might get easier and easier to turn bigger and bigger decisions over to AI as time goes by. With any luck, AI is going to get better and better and may earn that trust. But this would cause the decision-making instinct that leaders have — based on their experience, their judgment, and the other things that got them to that level — to atrophy. Atrophy is already happening elsewhere as a result of AI among some groups of people: the ability to write your own thoughts down, to craft your own email, to conduct your own research.
As for CEOs making good decisions with support from AI, I think support from AI is going to become table stakes. CEOs who don’t know how to use it are going to become dinosaurs in fairly short order — not necessarily the ones who have the job now, but I don’t think you’re going to see people promoted or hired into that position if they don’t know how to use AI for decision support and the other things we see AI being used for very effectively at leadership levels. And leaders are using AI, according to most of the research I see.
I wonder, though, if they start turning more and more decisions over to AI, what will the board or the owner see as the value of the CEO? If most of this work — or much of it, the majority according to that Confluent study — is being done by AI, do the enormous salaries paid to the people at the top start to decline? Or does the role change altogether, or maybe even cease to exist in favor of some other model? And by the way, I’d love to see the same question posed to people at other levels of the organization, because this turning of decisions over to AI probably isn’t confined to the C-suite. I wonder how much it’s happening in middle management and among frontline workers. If it’s at the same level, then it’s a company-wide issue that needs to be addressed, because problems are going to emerge if we don’t — along the Volkswagen lines, with their emissions scandal.
Neville: That was Dieselgate, as it was dubbed. It’s a good point you make; I agree. And the point you made earlier is actually a critical question: what kind of decisions are we talking about here? Is it on the scale of, let’s proceed with the merger with this company rather than that one? Or is it something like, should I fit in a stopover in this city on my way to that city to meet with these people and achieve these things? Or is it even something more prosaic — what do I get my wife for her birthday next week? I’d have my secretary do it, but the AI could tell me. That’s ridiculous, actually. But it’s significant to know what kinds of decisions we’re talking about, because I’ve not seen it referenced. The implication — and people are obviously jumping on this — is that these are the organization-affecting major decisions that are suddenly at risk because an AI is making them. I find that ridiculous, to be honest. So we need to know what kind of decisions.
Shel: Yeah. I mean, in my industry, there’s a go/no-go decision on pursuing a project. I cannot imagine, in my wildest dreams, anybody in my organization turning that decision over to an AI. But what if somewhere in the industry they do, and end up pursuing a project that turns out to be more trouble than it was worth? Somebody at that leadership level who was involved in the previous discussions would have known better, for various reasons, but the AI wouldn’t have had the experience and insight that individual had. That could be a financial problem for the organization.
Neville: So, the role of the communicator in all of this — and this is not to say the communicator who works closely with the leadership teams, including the CEO and others in the C-suite, is involved in every single thing they’re doing; that’s not realistic, because they’re not. But the communicator’s role in preserving human judgment is the right question to ask. What is it in this context? Where do communicators fit in helping leaders balance AI insights with human insight, judgment, and experience?
Two angles stand out to me. Internally, communicators act as sense-makers, ensuring context, ethics, and human impact remain part of decision-making. Externally, they help articulate how AI is used responsibly in the organization, which is increasingly central to trust and reputation. That addresses the point you made about when it leaks out that AI did something. I think we’re increasingly going to see that point — articulating how AI is used responsibly in an organization — because the impact can be huge if rumor builds, which it would: “the AI is making all the decisions in this company, so why do we need the CEO?” That’s a good role for a communicator to take on — to be seen as the “yes, but” person and the key advisor to leadership in these things, which strengthens the communicator’s role, in my view.
So there are things we can do to address this. Even if this is as big a problem as these articles make out, I don’t believe it’s something we should lose sleep over right now, in the context of everything else going on in the organization. But nevertheless, we’ve got respected sources talking about it — Harvard Business Review, Deloitte, and others we pay attention to because they’re credible publications.
Shel: Well, yeah.
Neville: Brian seeded an interesting discussion point, it seems to me.
Shel: Yeah. And let’s look at a very plausible scenario. Let’s say somebody sues the organization over a decision that the CEO made, or that leadership made, that affected them badly, and they feel they deserve compensation for that. In the U.S., anybody can sue anybody for anything. And we have seen some recent lawsuits. Look at the lawsuit that we’re seeing play out right now between OpenAI and Elon Musk.
Neville: Yeah.
Shel: And look at the records, the emails that have surfaced in discovery. Look at the trials over lawsuits brought by the parents of children who killed themselves because they got encouragement or assistance from ChatGPT, and who sued OpenAI over it. What the plaintiffs got in discovery was access to the kids’ entire ChatGPT history. So imagine a shareholder or a customer sues the company, and in discovery all of these things come to light — that’s how it gets out.
So I think even decision support has to be balanced with other input that you can demonstrate, in a courtroom, influenced the decision that was made, so it doesn’t look like the decision was completely outsourced to the AI. That’s an entirely plausible scenario in a lawsuit. So yes, it’s something we need to consider. And as you say, and as I said, there are things communicators can do about this. One is making sure people are aware of the potential for this situation. Another is influencing the governance model so it incorporates decision-making — if it doesn’t already cover decision-making and decision support, that needs to be added. And then making sure leaders talk openly about how they’re using AI, so it never comes out as a surprise that they used it to make a decision of importance in the organization — the story instead is that they’re focused on using it in very effective ways.
Neville: Yeah. I mean, the picture you painted — lawsuits and the like — is very possible, particularly in America, where, as you said, anyone can sue anyone for anything, usually for amazing sums of money, in the billions. So maybe what needs to happen in organizations, among other things, is keeping records. In an organization that has deployed AI tools such as chatbots — say, their own version of something based on ChatGPT, whatever it might be — it needs to be known that those tools record everything you do with the AI. Whatever level you are in the organization, there’s a record kept, along with everything else: emails, internal reports, you name it — they’re monitored and tracked in most organizations. And you can add to that picture some of these automated note-takers, like Otter and others, that are commonly used in intrusive ways in Zoom meetings — and you hear stories of private Zoom meetings —
Shel: AI transcripts of Zoom meetings in which the decisions were made.
Neville: — where the outcomes are disclosed or leak out publicly because someone used one of these tools to summarize things, including any recommendations or suggestions that were made. If that gets into a law case, the plaintiffs will show it out of context — you can be sure of it.
Shel: Yeah. And that’s why a lot of organizations are saying to their employees, you can’t record these kinds of meetings.
Neville: Right. But someone will, and it’ll happen. So you need to head that off at the pass, as it were, and have your own structure in place and your communication surrounding it. For instance, you have to have very clear narratives around decision ownership, which would help you in crisis situations. That’s the internal focus. Externally, you’ve got to communicate the structure you have for human accountability — not “the algorithm said we should do this.” We can laugh about it, as I am at the moment, but imagine the reality of something like that happening. These are all plausible scenarios, I do believe, particularly in the U.S., but they could happen anywhere. It isn’t complicated to work out a plan for how you would prepare for things like this. But I’d rather look at it not just as preparing for worst cases, although you need to — flip it over a bit and look at the benefits of all of this. And again, it’s not solely the communicator: the individual leader has to be willing to go along with this, to share some of the thinking he or she is doing and the discussions with the assistant, whether it’s an AI or anyone else, and to realize that you can’t do this without full transparency, at least to your advisors, including the communicator.
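The record-keeping Neville describes could start as something as simple as an append-only decision log. A hypothetical sketch follows, with illustrative field names chosen to show human ownership alongside AI input, so the paper trail exists before discovery forces the question.

```python
# Hypothetical decision record: logs AI input alongside the accountable human
# owner and rationale. Field names are illustrative; adapt them to your own
# governance model. The point is an auditable trail showing the decision was
# human-owned and merely informed by AI.
import json
from datetime import datetime, timezone

LOG_PATH = "decision_log.jsonl"

def record_decision(decision: str, owner: str, rationale: str,
                    ai_inputs: list[str], other_inputs: list[str]) -> dict:
    """Append one decision record; ai_inputs and other_inputs together show
    the AI informed, but did not make, the decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "accountable_owner": owner,       # a named human, never a model
        "rationale": rationale,
        "ai_inputs": ai_inputs,           # e.g. summaries, sentiment analyses
        "other_inputs": other_inputs,     # e.g. focus groups, counsel, board
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example entry; every value here is invented for illustration.
record_decision(
    decision="Delay product launch by one quarter",
    owner="CEO",
    rationale="Quality risks outweigh first-mover advantage",
    ai_inputs=["AI market-sentiment summary, week 18"],
    other_inputs=["Employee focus group, 10 participants", "Outside counsel memo"],
)
```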
Shel: Yeah, absolutely. And we will be back with a follow-up episode when the inevitable headline surfaces of a company that gets in trouble because it’s revealed that the CEO abdicated a decision to AI. Until then — actually until next week — that’ll be a 30 for For Immediate Release.
The post FIR #512: The AI Shift in Executive Decision-Making appeared first on FIR Podcast Network.
4 May 2026, 7:01 am - 1 hour 33 minutes
FIR #511: Doing AI Governance Right and Still Getting It Wrong
The policies are clear and well communicated. The guardrails are firmly established. Every last employee has been trained. And someone in your organization still releases a public document riddled with AI-generated errors. What went wrong has nothing to do with technology and everything to do with internal culture and accountability. In this long-form April episode, Neville and Shel examine a company that seemingly took all the right steps yet still had to apologize publicly for a court filing riddled with hallucinated citations. Also in this episode:
- Gartner predicts that, by 2028, 75% of employees will rely on an internal chatbot to get the news that matters to them. How will internal communicators need to rethink their role to ensure everyone knows and understands what they should in order to achieve strategic alignment?
- One of the promises AI executives have made is a leveling of the playing field, giving lower-level employees the opportunity to excel and rise through the ranks. According to one new study, exactly the opposite has been happening.
- PR hacks have been accelerating the pace at which they churn out press releases and pitches. That has raised the bar for what it takes to earn a journalist’s trust (and journalists do still rely on press releases, according to a survey of reporters).
- Apple’s announcement of its CEO transition offers communicators a clinic on how to announce a new top executive.
- “Slopaganda” from Iran has proven remarkably effective, which means it is undoubtedly coming for your company or clients soon.
In his Tech Report, Dan York outlines big changes coming with WordPress’s next update.
Links from this episode:
- Elite law firm Sullivan & Cromwell admits to AI ‘hallucinations’
- Sullivan & Cromwell law firm apologizes for AI ‘hallucinations’ in court filing
- Letter re: In re Prince Global Holdings Limited, et al., No. 26-10769
- Sullivan & Cromwell Just Put Every Firm on Notice. And S&C Advises OpenAI on Safe AI Use.
- An AI Screw-Up By… Sullivan & Cromwell?
- LinkedIn search results for Sullivan & Cromwell AI
- AI, Trust, and the Reinvention of Corporate Communications: Inside Gartner’s 2026 Playbook
- Does your intranet still matter in an AI-first workplace?
- Chatbots in Internal Communications: Game-Changing Wins
- How AI Chatbots Are Redefining Internal Communications?
- The future of internal communication: How AI is changing the workplace
- High earners race ahead on AI as workplace divide widens
- Sarah O’Connor: One early view about AI was that it would share…
- How AI is forcing journalists and PR to work smarter, not louder
- What journalists want from AI-assisted PR pitches
- Journalists Trust Human-Written Pitches Over AI
- Journalists Reject AI-Generated Press Releases As Untrustworthy
- What communicators can learn from Apple’s CEO transition announcement
- Tim Cook to become Apple Executive Chairman; John Ternus to become Apple CEO
- Iran’s Meme War Against Trump Ushers In a Future of ‘Slopaganda’
- Iran’s ‘slopaganda’ team uses AI Legos to flood social media
- Slopaganda wars: how and why the US and Iran are flooding the zone with viral AI-generated noise
- Slopaganda Comes of Age
- Alberta separatist leader unconcerned about influence of YouTube ‘slopaganda’ videos
Links from Dan York’s Tech Report
- WordPress 7.0 Source of Truth – Gutenberg Times
- WordPress 7.0: Real-Time Collaboration Arrives in Core
- WordPress 7.0 Release Party Updated Schedule
The next monthly, long-form episode of FIR will drop on Monday, May 25.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel: Hi everybody and welcome to episode number 511 of For Immediate Release. This is our long-form episode for April 2026. I’m Shel Holtz in Concord, California.
Neville: And I’m Neville Hobson, in Somerset in England. We have six great stories to discuss and share with you this month, and to delight and entertain you, we hope. Topics range from the consequences of not following company guidance on AI use, chatbots, employee use and the workplace divide, using AI to work smarter, what we learned from Apple’s CEO transition announcement, and the future of slopaganda. Lovely word, that one, Shel. Plus, Dan York’s tech report.
But first, let’s begin with a recap of the episodes we’ve published over the past month and some listener comments. In the long-form episode 506 for March, published on the 23rd of March, our lead story was on Anthropic’s view that AI will destroy the billable hour, a topic we’ve talked about before on FIR. We also explored digital monitoring of employee work, Gartner’s prediction that PR budgets will double next year, the escalating misinformation crisis, and Cloudflare’s prediction that
bot traffic will exceed human traffic by 2027. That’s next year, by the way. On LinkedIn, you’ll find no shortage of posts stridently deriding the notion that anyone should ever use AI to write for them. In FIR 507 on the 30th of March, we roundly rejected that idea and looked at the actual trends in using AI for writing. And that prompted some comments from listeners, right?
Shel: Yes, it did. Starting with Susan Gosselin, who’s actually with a client of mine back in my consulting days. She writes, there are many types of writing that I think AI is great for: interpersonal communications, summaries, et cetera. But for marketing writing, that’s another thing. There are issues of copyright to consider, and what you’re feeding into the channel.
This article from Jane Friedman, and she’s linked to it, and we’ll include that link in the show notes, is aimed at authors, but it does have implications for marketing writers too. For instance, I work for an American IT MSP, that’s a managed service provider. Let’s say that an MSP in Spain that does our line of work sees our website and our authoritative blogs and e-books and likes it. They decide to run our whole English website through an AI translator into Spanish,
then make a few tweaks and publish. There’s not a lot to stop them. There’s also the issue of being able to defend your copyright overall. The law is not yet fixed, and the risks are real. Then Steve Lubetkin writes, I find AI particularly helpful for rote tasks like organizing lists, transforming Excel spreadsheet columns, and summarizing interview transcripts. It’s also great for brainstorming ideas when it suggests perspectives I hadn’t thought of,
but ultimately it comes down to using it as a tool for further human intervention, not less. Neville, you responded to that, saying, that’s a great way of putting it, Steve. Those rote tasks are exactly where AI seems to shine, the kind of work that takes time but doesn’t really benefit from deep human creativity. And I agree on brainstorming too. It can be surprisingly good at surfacing angles you might not have considered. I do this a lot.
Your last point really nails it, though. It’s not about removing human input. It’s about focusing it where it matters most. Used that way, AI doesn’t diminish the work. It can actually elevate it. And finally, we have a comment from Yorma Mananan, who writes, AI can help people escape from writer’s block, so why not use it to get started?
However, writers must own all content created with or without AI. If the content doesn’t sound like you, you shouldn’t publish it. The challenge is to learn to speak machine English with AI. Define clearly why you’re writing, what you want to say, and what you want your readers to do after reading your content. Without your strategy, AI can’t produce quality content that sounds like you. Strategy first, AI second.
And Neville, you responded to Yorma. You said, I like how you framed this: using AI to get past the blank page is a very practical use case. That starting friction is real for a lot of people, and AI can lower the barrier quite effectively. Your point about ownership is key too. If it doesn’t sound like you, it isn’t really yours, regardless of how it was produced. Where I’d add a layer is around your machine English idea. I see it slightly differently. Rather than learning to speak machine,
I think the real shift is learning how to think with the machine, using it to clarify intent, test structure, and challenge assumptions. But I agree with your conclusion: strategy first, AI second. Without that, you’re just generating words, not communicating. And Yorma responded to you, saying, agree, machine thinking is a better way of describing the conversational relationship with AI.
Neville: Good comment!
Great. It’s excellent to have that. It’s interesting, Shel, that it illustrates something to me. It’s not a trend at all, but I’ve noticed recently, in other posts I see on LinkedIn that address this kind of topic, that increasingly there are people leaving comments basically saying you own it, not the AI, and AI assists you in communicating rather than creating the final stuff, essentially, which is what some of these comments are alluding to.
Maybe people are waking up to that more than they have been in the past. It won’t silence the big critics; we’ve already seen that, because, you know, it’s going to be criticized no matter what. But the more people who talk up the reality of what we all talk about, the better: this is an assistant. It’s a tool to help you communicate more effectively. It enhances your ability in that context. And then you’ve got Steve talking about, you know, doing stuff with Excel and all that kind of thing.
I’m in the middle of an experiment myself. I’m still at the 10% start of experimenting with Claude Pro, which I know you’ve been using for a long time, but I’m taking this very much one step at a time, and my focus is very non-techie. One thing I have noticed, comparing as I have been doing, is, let’s say, a prompt in the simplest form: a chat prompt to Claude compared to one to ChatGPT.
The differences are truly startling in many cases. Claude typically is richer and deeper in its content with the same prompts. Now, of course, there are variables at play here. ChatGPT knows a huge amount about me. Claude does too, because there’s a nifty tool that imports everything from ChatGPT, and I’ve also added stuff. So it’s done that. It’s missing on some levels, though, and that’s probably because it doesn’t yet know enough about me.
This is something you notice when you do this kind of thing with different tools. That’s not the main thing about Claude that wows me, I must admit; Cowork and some of the other tools I’ve been touching on are, and Cowork I’ve spent quite a bit of time on. So I’m sure we’ll have lots more conversations about this as we talk topics. Let’s see what comes out of today’s menu of topics. So thanks for those comments, everyone.
Let’s see... no, that’s not the right one... this one: when workers lose their jobs, many turn to gig work to earn income while waiting for new opportunities. Increasingly, companies are hiring gig workers to create content and train AI systems. This raises various communication and ethical issues, and in FIR 508 on the 8th of April, we explained what’s happening and discussed the implications.
Then when bad actors use AI tools to clone a musician’s voice and upload synthetic versions of their songs, they can then file copyright claims against the original artist’s content. In FIR 509 on the 14th of April, we break down how this scam works, why it matters to communicators, and what you should be doing right now before an incident forces your hand. And you have some comments here.
Shel: We do, two of them. One from Eric Redicop, who identifies himself on LinkedIn as an entertainer and artist. He wrote that AI cannot use my work because it’s not posted online anywhere. I have to do it this way because YouTube allowed bogus copyright claims on my work and shut down my channel five times. Then Ray Baron-Wolford, who is a CEO at a charity organization, said, this is why it’s so important that every artist signs up
to all copyright protection services.
Neville: Yeah, that’s a good point. I think the first commenter, though, talked about a genuine issue, a genuine issue. And I wonder, you know, if he’s saying that this can’t happen to me because none of my content’s online, I wouldn’t rely on that 100%. Actually, I wouldn’t. No, I wouldn’t. And that second comment.
Shel: Mm-hmm.
No. I mean, you have to be producing
the kind of content where you can have some success as an artist or an entertainer without having your content online.
Neville: Yeah.
Yeah, exactly. So the second commenter, about signing up for every copyright protection service you can find: it’s probably, well, not probably, it is a good idea, although I’m not sure that everyone would want to do that. And therein lies one of the issues about copyright: it depends on the jurisdiction. It’s a geographically based protection. Creative Commons is a good thing to have established as a way of
reserving your rights, or some of your rights, if you want to enable others to use your work. And that’s an international thing. So that’s peace of mind, I would say. There haven’t really been many, I’ve certainly not seen any, legal court case tests since Adam Curry back in 2009 when, I think it was in the Netherlands, he sued somebody who’d used a photo of his daughter
and won the case. It was not a Pyrrhic victory, but he didn’t get any money out of it. He did get the legal ruling that these people had infringed on his copyright. But I’ve not seen any since. So nevertheless, it’s worth doing. So yeah.
Shel: Yeah, it goes back to Susan Gosselin’s
comment, too, about any organization that does the same thing you do can take your content from your website, translate it into their language and publish it. And what do you do if your content is not copyrighted? There’s nothing that you can do.
Neville: Yeah, that’s incredible.
That reminds me, Shel: back in the 2000s, website scraping was a huge deal, when blogs suddenly came to the fore and you found that people were stealing all your content. I remember being in a lengthy email exchange with someone, I think based in Romania or somewhere; not a hope in hell he was going to desist from doing that. Eventually it stopped, though. But I mean, that wasn’t from a copyright perspective; it was theft of content, which is related, isn’t it?
So yeah, lots to learn from that. And finally, in FIR 510 on the 20th of April, we revisited the topic of shadow AI, the situation where employees ignore company-approved AI tools, use their own preferred tools, and don’t tell anyone. We discussed how one company approaches the problem and how communicators might advocate for a version of this approach to aid AI adoption and speed up productivity gains. And now you’re up to date on FIR episodes.
Shel: I also want to let you know about Circle of Fellows. We had a fascinating discussion just this past Thursday on Circle of Fellows. It’s part one of a two-part discussion, and it’s all based on a new book by Diane Chase, former chair of IABC and a great communicator, called The Seven Cs of the New Communication Compass; if you’re watching the video, you can see it. And what Diane did here was
outline these seven points and find words that started with C to label them. She wrote one chapter and then basically had IABC fellows write the rest of the chapters. So Diane is the first non-fellow to appear on Circle of Fellows, but it’s her book, so it made sense. In the first installment, which we recorded on Thursday, Diane was joined
by the panel: me (I moderated the session), Jane Mitchell, Ginger Holman, and Brad Whitworth. Next month, on the May episode of Circle of Fellows, Brad Whitworth will be the moderator, and I’ll be a panelist talking about the chapter I wrote about community. I’ll be joined by Zora Artis and Cindy Schmieg, IABC fellows who wrote the other chapters.
It’s a really good book; I recommend it for communicators. We talk about some of the issues around these chapters, and Diane explains why she chose these topics. In the new communication environment, these are, you know, your North Stars, as it were. So it’s definitely worth giving a listen to this month’s Circle of Fellows, which you will find on the FIR Podcast Network
at firpodcastnetwork.com. And we are going to take a short break now for a sponsor message. We will be back to dive into our six topics right after this.
Neville: Here’s a story that on the surface looks just like another example of AI going wrong. In mid-April, one of the world’s most prestigious law firms, Sullivan & Cromwell, had to apologize to a US bankruptcy court after submitting a filing that contained multiple AI hallucinations: fabricated case citations, misquoted legal authorities, even references to cases that simply don’t exist. The errors weren’t minor.
They were significant enough that the firm had to send a formal letter to the judge, acknowledge what had happened, and submit a corrected version of the filing. And just to make it more uncomfortable, these mistakes weren’t caught internally. They were identified by the opposing legal team. Now, if you stop there, it’s easy to frame this as just another cautionary tale about AI, unreliable tools, hallucinations, the risk of automation in high stakes work. But that’s not the story here.
This firm didn’t lack guidance, quite the opposite in fact. They have formal policies governing the use of AI. They require lawyers to complete training before they can even access these tools. Their internal guidance explicitly warns about hallucinations and tells lawyers to verify everything before it goes anywhere near a client or a court. In fact, their own language is very clear, trust nothing and verify everything. And yet, in this case, those policies were not followed.
A document that should have been scrutinized at multiple levels made its way into a courtroom with fundamental inaccuracies baked into it. The failure here wasn’t the technology. It was a failure of process, behavior, and accountability. Human-in-the-loop only works if there is an actual human who is clearly responsible for checking the work, not in theory, nor in a policy document, but in practice, at the point where a decision is made to send something out into the world.
And what this case suggests is that in many organizations, that loop is more notional than real. If AI is being used to accelerate work, where are the safeguards that ensure quality isn’t being compromised in the process? And are those safeguards actually being followed or just assumed? Having a policy is one thing, embedding it into how people actually behave, especially under time pressure, is something else entirely. And I think that’s where this story really matters beyond the legal profession.
Behavior is moving faster than governance. People are experimenting, they’re finding shortcuts, they’re integrating these tools into their daily workflows, often quietly and informally. The risk isn’t only that AI gets something wrong, it’s also that humans stop checking as rigorously as they should, or assume that someone else had, or trust output that feels authoritative, even when it hasn’t been properly verified. So when we talk about responsible AI or human-centered AI or governance frameworks,
This is what it comes down to in practice, not whether you have a policy, but whether at the moment that matters, someone takes responsibility for asking a very simple question, is this actually correct? And if this case tells us anything, it’s that answering that question consistently is still much harder than many organizations seem to think.
Shel: Boy, isn’t that true. And the first thing I thought when I read this story, because it seems like the organization did everything right, is the question of what people are rewarded for in this organization. I wrote a post about this on LinkedIn probably a couple of months ago: that process speaks louder than any message that you send
through communication channels. And I included a story that I’ve probably relayed on this podcast 20 times, just because it is such a good analogy to make this clear. It’s about a logistics company that was experiencing a lot of breakage of its packages in one of its distribution centers, and
the company just kept sending messages about how important it was to be careful and take the time when you’re loading these packages, so that you’re not just throwing them around and breaking things that customers are expecting to receive unbroken. The breakage continued, and they brought a consultant in, a communications consultant I might add, who looked at all of this and found that the reason this was happening was that people were actually being rewarded for productivity
and not for quality. That meant they were getting money for doing this quickly. So as long as you were going to pay them more to get this stuff out quickly, the breakage was going to continue. You had to shift the rewards mechanism so it was rewarding quality. Then they were going to slow down and make sure everything was unbroken. Of course, you were going to lose some of the speed there.
So when I hear the story about this law firm, the first thing I wonder is, yeah, we have all of these policies and we’ve been through all of this training and the governance is in place, but I’m being rewarded for getting this done quickly. And therefore I’m not going to take the time to review the citations that the AI cranked out. I don’t know that this is the case in this organization, but it was the first question I asked.
Neville: Well, that’s an interesting point, Shel, because that was in my mind too, though it didn’t make it into my notes: they are, I think, the second biggest law firm globally. They’ve been around 150 years, long and well established, highly credible, super reputation, all that. But they have some of their lawyers charging out at around two and a half thousand dollars an hour for their services. That’s serious money.
And that kind of adds to your point that speed is the focus here, not quality. Now, to repeat what you said, we do not know if that’s the case in this law firm. But it could be. And your other point, that an individual might be saying, I don’t have time to check all this stuff because I’m being rewarded for getting stuff out fast: they have to address that if that’s true. They really have to address it, because that perpetuates this, if it turns out to be true.
But it brings to my mind, I suppose, the reality of all these policies. There’s a lot of reporting on this around, if you look for it, talking about some of the training courses, the fact that no lawyer can even use one of their AI tools unless they are certified as having done this, this, and this training program, or watched that video, all that. And yet this happened. So there’s something out of the loop here. There’s something
not working properly. Could it be as simple as the person who signs off, i.e., on whether this piece of work goes to that client or this filing is submitted in this bankruptcy case in New York? This was a major bankruptcy case, not an individual one; I believe it was a financial company in the Virgin Islands, the British Virgin Islands for that matter, in the Caribbean. It was high profile. But could it really be as simple as that? All this
work going on at speed, and somebody probably thought, that’s fine, because we’re going to check it all. And yet nobody did. So it signals something we’ve encountered before, and I’m reminded of a case we reported last year about Deloitte. The issue they had was something similar, but it was not a legal case in a court. It was a report they prepared for a client, which happened to be the government of Australia, and another one for the government of Canada, with six-figure fees,
riddled with hallucinations and other things. So somebody didn’t check it in that case. I have no knowledge of what training they had in place. In this case, we do have knowledge of what training they have in place. Could it really be as simple as there being no individual who is the known responsible partner in the law firm, the authoritative voice on whether this is okay to send to that client or to that court or whatever?
Even if 15 other people have been involved in checking stuff before, that one person has that responsibility. They obviously don’t have that, I suspect. Maybe that’s the solution to this kind of thing. You know, I know you have some strong views on having a verifier in place in organizations. Do you want to talk a bit more about that?
Shel: Well, yeah,
I mean, I’ve said this before. I think that one of the jobs AI is actually going to lead to the creation of is a verification specialist: somebody who is accountable and knows they’re accountable. They are baked into the process. It gets passed to them, and they verify the entire document. I don’t care if it’s an 80-page filing. You know, there was another law firm
that found itself in this trouble recently. It was in Oregon, and the court of appeals there sanctioned the lawyer involved for the AI errors that were in the law firm’s filing. And the court, in its finding,
emphasized that AI isn’t a lawyer and it can’t replace professional judgment or accountability. And that principle travels pretty well. AI is not a communicator. It’s not a strategist. It’s not a lawyer. It’s not an HR expert. It’s not a subject matter expert. It’s a tool. The professional has to be accountable. So for communicators, that means we can’t outsource accuracy. We can’t outsource context. We can’t outsource tone or ethical judgment to a machine.
We can use AI as aggressively as we can find that it helps us do our job, but we still have to verify ruthlessly and we have to make sure that other people in the organization know that that’s part of their remit too.
Neville: Yeah, it’s a very tricky one, I think, Shel, given what we know currently about the developments happening with artificial intelligence, particularly in generative AI, particularly with tools like Claude and ChatGPT and Gemini. Steve Lubetkin, in his comment on one of the episodes from last month, alluded to that when he talked about how great it is for, you know, deciphering columns in Excel spreadsheets.
Here you’ve got a tool that can actually generate the spreadsheet itself and perform everything in analytics, pivot tables, the works, literally in 20 seconds. And so you suddenly find that you have a tool able to generate content that the traditional way of prompting would have produced only with considerable to-and-fro:
you know, changes here, editing there, telling the AI no, not this, come back with that, all that sort of stuff, and then someone checking it. And here you’ve got a situation where this is accelerating. It can do these things, arguably, and again, depending on what it is, it can do things that until now people would say AI can’t do. And I’m thinking that, you know, AI is not a communicator. No, it itself is not, at the moment.
So I mean, this will take us down a rabbit hole if we get into it, which we’re not going to do. But it’s a point worth noting that sooner or later we’re going to have an AI tool of some type doing something that before only a human could do. And then where are we? So again, that’s all a bit in the future, and maybe sooner than we think. I don’t worry about that, in a sense, because there’s no point, Shel. It’s not happened yet. But I do worry about things like this, because
this is an easy one to get right, it seems to me. You’ve got all these policies, et cetera; you’ve got to, not so much enforce them, that’s not really the right word to use, but ensure that people follow those policies. Therefore, it’s a communication issue. It’s an educational issue. Not a training issue, but education, awareness raising, and getting people to buy into why they should do this. In which case, you’re likely going to have to change your model of rewarding people.
That’s a big deal. So this isn’t something you can do idly, except on the surface, i.e., you do all this stuff, you’ve got one person who’s got the responsibility, and the consequences will fall on that person if it turns out no one followed the stuff. So that’s probably what would help here.
Shel: Yeah, and I think it’s also worth noting that it’s going to get easier to assume that AI got it right. I mentioned that AI currently isn’t a subject matter expert, but it’s becoming one. OpenAI is creating a version just for doctors, and Anthropic just signed a deal with a law firm to create a legal-specific version of Claude. So, you know, I think when you
look at what happened here with this law firm, we should look at it as sort of a dress rehearsal for AI-related crisis response. The law firm did the right thing, right? They acknowledged the problem, they apologized to the court, they filed a corrected version. But at that point, the reputational damage had already been done, because that narrative
had found its way into Reuters, The Guardian, Business Insider, Above the Law, LinkedIn, and all the legal newsletters. And that’s how AI failures will unfold for other organizations, whether out of the legal department or elsewhere. You’re going to have the operational error, then the public narrative, and then people are going to pile on. Communicators should already have holding statements, internal FAQs, and escalation protocols for AI-generated errors, especially
in high-stakes content like a legal filing.
Neville: Yeah, plenty to think about on this, although the kind of advice I would give is: yes, you’ve got all your policies and so forth, as we discussed at the beginning, but have you got a human genuinely in the loop to take responsibility for what you’re giving to a client or to a court?
Shel: Well, let’s stick with the AI theme. Hey, that should be no surprise. Gartner is predicting that by 2028, 75% of employees will rely on chatbots to get relevant internal communications. That’s not the distant future, folks. It’s the year after next, and that should stop every internal communicator in their tracks. Not because chatbots are coming for the intranet or the newsletter or the manager cascade. That’s just
too simplistic. The bigger shift is that employees are moving from browsing to asking. They’re not going to hunt through the intranet and a stack of emails to get an answer to a simple question. They’re just going to go to the chatbot and ask: what has this changed for me? Do I need to do anything by Friday? Why is my department being reorganized? And they’ll expect an answer in seconds, and probably get one. The Gartner prediction is based on a very real problem:
information overload. According to Gartner’s report, employees who report high information overload are 52% less likely to report high intent to stay with their organization, so it’s a retention issue, and they’re 30% less likely to report high strategic alignment with the organization. Gartner also says chatbots will provide personalized, curated answers for pull communication and customized alerts for push communication.
That’s a major shift in the employee communication model. Now, there are real benefits here. A well-designed internal chatbot can give employees faster answers, reduce HR and IT ticket volume, provide 24-7 support, support multiple languages, and cite authoritative sources so employees know where an answer came from. It can also deliver information within the flow of work rather than forcing people to go somewhere else to find it.
But here’s the part communicators are going to need to wrestle with: an AI answer is not the same thing as communication.
An answer can tell an employee what changed. It may even summarize why it changed. Will it preserve the intent, the nuance, the context, and the emotional intelligence of the original communication? There’s no guarantee it will. Take change communication, for instance. We frequently write detailed articles explaining the rationale for a change because employees need more than the transaction. They need to understand the business context. They need to know what
options leaders considered and which options they discarded and why. They need to hear what’s not changing. They need some sense that the decision was made thoughtfully and not arbitrarily. But what happens when no one reads the article? What happens when the employee asks the chatbot, what’s changing in our benefits plan, and gets a clean, accurate, three-sentence answer that strips out the rationale completely?
This is where internal communicators have to evolve from being message producers to knowledge architects. The intranet still matters. It may be less of a destination and more of the trusted knowledge layer that feeds AI. Frank Wolf made this point really well in PR Daily: AI doesn’t eliminate the intranet’s jobs. It changes how pull, push, and people-centered communication work.
The intranet becomes the foundation that makes chatbot answers reliable. If the knowledge layer is messy, if it’s outdated or written in a way that AI can’t interpret, or can’t interpret well, the chatbot is going to sound confident and still be wrong. This means we have to consider an expansion of the internal communicator’s job. Yes, we still need to write, but now we also need to structure. We need clear source-of-truth pages and metadata.
We need FAQs that anticipate employee questions, and we need version control, expiration dates, and more. We need to decide which information can be answered directly by a bot and which questions should trigger a human response. And we need to design for narrative preservation. That means writing source content with AI retrieval in mind. If the rationale for a change matters, don’t bury it in paragraph eight. Make it explicit. Label it.
Repeat it in a concise why-this-matters section. Smart Brevity writing would be a great approach to adopt here. Create approved answer blocks that the chatbot can draw from, and test the bot by asking questions employees are likely to ask, then check whether the answers reflect not just the facts but the intended meaning. This also has implications for measurement, by the way. Page views and open rates become less useful
if employees are getting answers without opening an article. We’ll need to measure the questions employees ask, the quality of the answers they receive, the content gaps the bot reveals, and whether employees understand the strategy, the change, or the policy after interacting with the system. It’s a lazy conclusion to say employees won’t read anymore, so let’s just give them chatbots. The better conclusion is that employees are changing how they access information,
so we need to make sure the organization’s knowledge, context, and narrative survive that shift.
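To make that concrete, here is a minimal sketch of what one of those approved answer blocks might look like, written as a Python structure. The field names, URL, and expiration check are hypothetical illustrations of the metadata, rationale, and version-control ideas Shel describes, not any vendor’s schema.

    # Hypothetical "approved answer block": source-of-truth content written
    # with AI retrieval in mind. All field names here are illustrative.
    answer_block = {
        "id": "benefits-plan-change-2026",
        "question": "What's changing in our benefits plan?",
        "approved_answer": "Starting in July, dental coverage moves to a new "
                           "provider. Premiums are unchanged.",
        "why_this_matters": "Leaders compared three providers and chose this "
                            "one to keep premiums flat while widening the "
                            "dentist network.",
        "whats_not_changing": "Medical and vision coverage are unaffected.",
        "source_url": "https://intranet.example.com/benefits/2026-change",
        "owner": "internal-comms",
        "expires": "2026-12-31",  # version control: stale blocks get flagged
    }

    def needs_review(block: dict, today: str) -> bool:
        """Flag an expired block so the chatbot never serves stale guidance.
        ISO date strings compare correctly as plain strings."""
        return today > block["expires"]

    print(needs_review(answer_block, "2027-01-15"))  # True: time to re-verify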
Neville: Hmm. Yeah, this is a huge topic, Shel, because what struck me listening to you, in a sense a continuation of what we just talked about in the previous topic, is the verification of content that an AI produces for you. How are we going to deal with that? We talk about putting in place, you know, trusted sources for all this information. So, you know, let’s say I’m an employee, I’ve asked a question on something, and it’s given me an answer.
I need to check that. So where do I check it? And how do I know that it’s accurate? Project that out to the kinds of stuff people deal with daily, and this is a huge undertaking, I would say, because looking at that article talking about this, it has an interesting piece in there about the safeguards that CCOs specifically
will need to put in place to mitigate the risks of hallucinations, misinformation, and the fragmented landscape that comes with AI, they say. CCOs will need to place a greater emphasis on information quality, as well as on optimizing intranet content for AI searchability. You mentioned that point. They must also partner with IT, HR, and legal to establish robust governance to ensure that chatbot responses are accurate. That’s the bit: how are they going to do that? Because something internal
surely isn’t going to be producing answers based only on what it finds on your internal networks. It must be looking out onto the wider landscape. How do you verify and check all that? That’s a major debating point for taking this further, it seems to me. So it’s a huge undertaking.
Shel: Yeah, I think one of the things, and I sort of breezed through it pretty quickly, that we’re going to need to figure out is how we monitor and assess the questions employees are asking that produce an answer drawn from internal communications content, whether that’s in an email that went out or something that was posted to the intranet. How do we monitor the questions being asked and the effectiveness of the responses so that we can make adjustments?
So that we can report, yes, we can determine that there is alignment on why this change was made. Or we can say, gee, people are just getting an answer that tells them what the change is, and they don’t have any understanding that we looked at alternatives and tried to find a better solution, and this was the best we settled on, and here’s why it’s good for employees, or here’s how to cope with this in your department, or whatever it may be. And we have to do this without
necessarily surveilling employees, right? We don’t want to know who asked the question. I think it would be great if we could say, wow, look at this: 70% of the questions illuminating this particular point of confusion are coming from people in our operations division and not other divisions of the organization. That would be useful. But we don’t want to be able to say, John Doe asked this question, what an idiot.
It’s a serious issue. And I think the guidance is that we need to have the information in multiple places where the AI can see it, so that it realizes this is an important topic because it appears in several places, and that we have it in several formats: the FAQs, the answer blocks. This is repurposing the original content in ways that will help ensure
that the AI inside your organization is delivering information with context, with those other elements that are so important for employees to understand to create that alignment. And by the way, I mentioned The Seven Cs of the New Communication Compass. One of them is congruence, which we’re arguing goes beyond alignment: that there is congruence in the organization. So
if we want that, and it is important, it’s one of the reasons there is an internal communications function, we really need to start rethinking what we’re doing and how we’re doing it.
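A minimal sketch of the kind of de-identified question analytics Shel is describing, assuming a hypothetical log format that records a question topic and the asker’s division but no user identifier:

    from collections import Counter

    # Hypothetical chatbot query log: topics and divisions only, no user IDs,
    # so the analysis stays aggregate rather than surveilling individuals.
    query_log = [
        {"topic": "benefits-change-rationale", "division": "operations"},
        {"topic": "benefits-change-rationale", "division": "operations"},
        {"topic": "benefits-change-rationale", "division": "finance"},
        {"topic": "pto-policy", "division": "operations"},
    ]

    def confusion_by_division(log: list, topic: str) -> dict:
        """Share of questions on a topic coming from each division."""
        counts = Counter(e["division"] for e in log if e["topic"] == topic)
        total = sum(counts.values())
        return {div: n / total for div, n in counts.items()}

    print(confusion_by_division(query_log, "benefits-change-rationale"))
    # {'operations': 0.666..., 'finance': 0.333...}: most of the confusion
    # sits in operations, and no individual employee is identified.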
Neville: Yeah, agreed. I think surveillance is a very, very slippery topic, and a slippery slope. Because you’re going to have to have some kind of process in place, and surveillance is probably the correct label for it. Otherwise, you’ll really struggle to find the answers you’ll need if you roll out something like this. So I think,
you know, we’ve reported recently on keystroke logging and other measures organizations are now putting in place to monitor whether employees are working or not. And it’s still making news headlines in the tabloids here: a case recently about someone who had this wheeze of having something that touched his keyboard every now and again to show he was working. The trouble is, the employer’s software was savvy enough to tell which key it was, and it was the same key all the time.
So with things like that, we’re probably going to have to rebalance this privacy-versus-visibility algorithm, let’s say: privacy versus being able to see what people are doing. And that’s going to be difficult, given the history, I suppose, of some organizations not respecting employee privacy. Look at the China model, and that’s not what we want to have here:
state surveillance on everyone’s daily lives is pervasive in urban areas, not necessarily throughout the whole country. So do we want that? We may not actually have the ability to say no to that, given what they need to do. So that’s part of the issue to include, I think.
Shel: Yeah, and I think one other thing we’re going to have to do is more asking. We’re going to have to survey after a change and ask employees if they understood the reason for it. And part of the problem with increasing the number of surveys, and I’ve made this argument for years, is that people will take surveys all day long if they see the results of the surveys and see that things are going to change.
If you’re asking people, did you understand this? Did you understand the rationale for it? Do you agree with it? It’s hard. I mean, you can report the results, but what’s going to change? You’re going to change maybe the way you’re producing content. That’s not going to be visible to employees. So it’s going to be a challenge to ask those questions frequently without producing that kind of survey fatigue that we hear so much about.
Neville: Big topic. OK. So there’s a widely held idea about AI that’s been around almost since the beginning: that it would be a great leveler in the workplace. It’s kind of a continuation of what we’re talking about here. The thinking was that if you give everyone access to powerful tools that can write, analyze, summarize, code, and generate ideas, then people with less experience or fewer formal skills
should be able to close the gap with those at the top. But what we’re starting to see in the real world looks quite different. In fact, it may be doing the opposite. The Financial Times has just published new research based on a survey of 4,000 workers in the US and the UK. And the findings are pretty stark. More than 60%, a 6-0, of higher earners say they use AI every day in their work. Among low earners, that number drops to just 16%, 1-6. That’s a pretty big gap.
So instead of leveling the playing field, AI adoption is heavily skewed towards the people who are already ahead, better paid, more experienced, often in more knowledge intensive roles. I think it makes sense because using AI effectively isn’t just about having access. Most people have access. It’s about knowing what to do with the tools. It’s about having the confidence to experiment, the context to apply them to real work, and the judgment to assess whether the output is actually useful.
And those are things that tend to come with experience, with education and with the kind of roles where you have a bit more autonomy over how you work. There’s a line in the research from one economist that really captures this shift. The more intelligent the technology becomes, the more your own intelligence matters. If you already have expertise, AI can make you faster, more productive, maybe even better at what you do. But if you don’t yet have that foundation, it’s much harder to extract real value from it.
There are other factors at play too. The research points to corporate training as one of the biggest drivers of AI use at work. So organizations that actively support and encourage adoption are seeing much higher uptake. And interestingly, the heaviest users of AI aren’t the youngest workers, as you might expect, but people in their 30s with more experience behind them. So again, this isn’t a generational story so much. It’s about how AI fits into the structure of work itself.
If AI is boosting the productivity of higher earners more than lower earners, then over time you’d expect the gap to widen in output, in value, and potentially in pay. And there’s a second order effect that’s a bit more subtle, but potentially more significant. If AI starts to take on some of the routine or entry level tasks that junior staff would traditionally do, then where do people build the skills? How do you develop expertise if the work that teaches you the fundamentals is increasingly being handled by a machine?
So instead of AI acting as a ladder helping people climb, there’s a risk it starts to pull away some of the rungs. And this is where it connects directly to leadership and to communication. This isn’t just about who has access to AI tools. It’s about who feels able to use them, who is encouraged to use them, who is trained to use them well, and who is supported in making sense of what they produce. So this is about culture, not technology.
If organizations simply roll out AI and assume the benefits will spread evenly, they may find the opposite happens, that they’ve unintentionally widened the gap inside their own workforce. So perhaps the real question here isn’t whether AI will level the playing field. It’s whether leaders and communicators advising them are actively shaping how that playing field is changing or just watching it tilt.
Shel: Yeah, that training point is, I think, really critical. A project manager, an accountant, a field supervisor, an HR business partner, and a communication specialist don’t need the same training. They also don’t need the same examples delivered by communications. So the communicator’s contribution here is translation: here’s what this means for your role, for your tasks, for your team, and for your day.
That includes, you know, surfacing success stories from unexpected parts of the organization. I would love to find an example of a foreman on a construction site using AI. I don’t want to just report on what the IT department and the other, you know, tech-forward departments are doing. The goal shouldn’t be that everyone becomes an AI expert. The goal should be that nobody’s quietly excluded from
the next operating model because they don’t see how AI fits in their work.
Neville: Yeah, we’ve talked about this multiple times, Shel, in various episodes: who feels able to use such tools. And that stems from leadership communication, in my opinion, which has to encourage people to do this, so they feel they’re being empowered, they feel they’ve been given permission, and they know they can count on help when they get stuck with something. That is
hardly uniform in any organization, frankly. And this isn’t about creating a special department to do this; this needs to permeate across an organization. So you’ve got leadership at the very highest level filtering down: your local manager, your line manager, or whoever you report to has got to encourage you as well. And I’m sure that happens in many organizations, but you have to make this really work so that it doesn’t result in the gap widening between those who are
naturally excited about this and have the experience and the knowledge and the expertise to know how to get value out of these tools. You’ve got to have something in place that helps everyone else who isn’t like that. And there’s a challenge for communicators without any question.
Shel: Well, for the whole organization, I mean. As I talk to people in other companies, it seems that we’re still in that experimentation phase that I think most organizations should be beyond by now. But the way it’s working right now in a lot of companies is: the curious employees can try the tools, the cautious employees can wait, and everyone else will eventually catch up. That’s not going to work. I mean, if this is becoming a material productivity and capability layer,
Neville: As well.
Shel: We need to implement intentional adoption strategies. That means role-specific examples, approved tools, safe-use guidance, and peer demonstrations. What we’re trying to do where I work is get peers showing other employees what they’re doing. Psychological safety, plain-language explanations of what employees are supposed to be doing with this: it all needs to be put in place, and communications has a role to play here. But
if we don’t do it, adoption is going to follow the path of least resistance, and that’s toward the people who have the power, the time, and the digital fluency, and then you’re going to end up with that gap.
Neville: Work to do here.
Shel: Yeah, and by the way, there was another part of the FT’s reporting that I found really interesting that you didn’t mention, and that’s that men are more likely to use AI tools than women across a number of sectors. I think that should concern leaders, because AI fluency is becoming part of professional competence. And if men, along with higher earners and more experienced workers, are building fluency faster,
what’s going to happen? You know, performance evaluations, promotion decisions, the visibility of the employees who are getting that kind of attention, and informal influence may start reflecting AI access rather than raw ability. And here again there’s a role for communicators: pushing AI enablement into, say, manager toolkits, into your onboarding processes, into your training and team-level norms is important,
as opposed to just letting it sit as an informal advantage for people who are already competent.
Neville: Yeah, like I said, work to do here.
Shel: Thanks, Dan. I am looking forward to seeing this WordPress release. I have to say, I really like the idea of collaborative editing. As you know, the FIR Podcast Network website is on WordPress, and Neville, you and I both use it, and the ability for both of us to go in and deal with a post in more of a Google Docs setting, rather than logging in and just pulling up the post,
makes sense to me. I definitely do see the issues with this as well, though, but it’ll be interesting to see this and the other changes. So thanks for the report, Dan. Really, really interesting. Well, we’re going to stick with the AI theme again, probably not surprising given the impact it’s having. And by the way, I have to say that when I scroll through LinkedIn, it’s got to be 80% of the posts I see now are
AI-related, and that’s not hyperbole. Well, it is a guess, I haven’t measured, but man, it is all AI all the time on LinkedIn. That’s what people are talking about. And it’s changing the relationship between PR professionals and journalists, just not in the way a lot of people expected. The fear was that AI would automate the work: we’d have a lot of AI-written press releases and AI-written pitches and articles.
And yeah, there’s definitely a lot of that happening, and people are calling it out. But the more interesting shift is not that AI makes it easier to produce more content; it’s that AI makes bad media relations more obvious and more damaging. Pete Pachal, who was a guest on FIR Interviews, what was that, Neville, about a year ago? Yeah, he makes this point in an article in Fast Company: AI is becoming a new interface
Neville: A year ago,
Shel: For how information is found, prioritized, and interpreted. Journalists and PR people are both affected, because AI systems more and more shape which stories surface, which ones get cited, and which narratives get visibility. Pete’s argument is that the advantage doesn’t go to the people who can generate the most material. It goes to the people who produce original reporting, useful expertise, clear narratives, and trusted relationships. That’s an important distinction for people who
operate in the media relations world. AI can help you write faster, but speed was already part of the problem. Journalists were already drowning in irrelevant pitches before generative AI showed up. AI just gives every mediocre PR practitioner a way to send even more mediocre pitches even faster. The result isn’t greater efficiency; it’s more noise. And journalists are noticing.
PR Daily reported on a Global Results Communications survey of nearly 1,700 reporters across print, digital, and broadcast: 81% said pitches and relationships with PR professionals are vital to their work. So journalists aren’t saying we don’t need PR. But 43% expressed negative views about AI-generated pitches, saying they read like a bot wrote them and that they lack perspective and erode editorial trust.
So here’s the conflict: journalists still need PR. They need access and sources and data and context and story ideas, but they’re getting a lot less tolerant of anything that feels mass-produced, poorly targeted, or synthetic. Medianet’s 2026 Media Landscape Report, based on feedback from 800 journalists, makes the same point more sharply. The report says three quarters of journalists have received pitches that appeared to be AI-generated,
and about half said they could always detect machine-written copy. I would argue with that, but let’s not go down that rabbit hole. The same report says 86% of journalists now cite press releases as a key news source, which means the press release isn’t dead, but the stakes for credibility are higher. There’s also a widely circulated LinkedIn post citing the Medianet research saying 78% of journalists report that receiving an AI-written pitch
decreases their trust in the PR person who sent it. That’s consistent with the other findings. Journalists aren’t rejecting AI assistance, they’re rejecting lazy use of AI. So what should PR practitioners be doing differently? I’ve got five things. First, stop using AI as a pitch factory. This is the most obvious trap. If the output is a generic email with a personalized opening line,
and a weak story angle, AI hasn’t made you better, it’s made you faster at getting ignored. Second, use AI before the pitch, not as a replacement for your judgment. Use it to analyze everything the journalist has written recently, summarize themes, identify gaps, pressure test whether the angle is timely, and prepare sharper source material.
PR Daily’s piece makes this point well. AI can help with research, angle testing, drafting, editing, personalization, and follow-up prep, but the human edit is where you add that credibility. Third, bring journalists something they can’t get from a model. That means original data, direct access to informed sources, a useful, articulate expert, a local angle, a contrarian but defensible point of view, or a story that fits the reporter’s audience.
Fourth, be transparent internally about what AI can and can’t do. PR leaders should have rules. AI can help research, structure, brainstorm, and edit, but it should not invent relevance, fake familiarity, fabricate personalization, or send anything without human review. And fifth, think beyond the pitch. In an AI-mediated media environment, you’re not just trying to get a reporter to open an email.
you should be trying to build a public record of expertise and credibility. That includes owned content, executive visibility, contributed thinking, data assets, analyst material, podcasts, newsletters, earned media, anything that reinforces a coherent narrative that AI systems will recognize and retrieve. So the future of media relations isn’t more automated pitching.
The future is more precise, more evidence-based, more relationship-driven, and more strategic. AI will handle more of the mechanics, but judgment, relevance, trust, and access become more valuable, not less. In other words, AI doesn’t eliminate the relationship between PR and journalism. It raises the penalty for abusing it.
Neville: Yeah, it’s an interesting topic, without doubt. I was actually pretty impressed with the five points mentioned by Courtney Blackan in the PR Daily report. It mirrors, frankly, almost everything we’ve talked about in this episode so far, and indeed in recent episodes. We have to keep repeating this, really, Shel, and you’ve done a good job, I think, of outlining what you’ve got to do.
It’s about this, but it also relates to these other ten things we’ve talked about. A couple of things struck me here that really do resonate well. I mean, research smarter, that makes complete sense; that’s got to be your starting point. But things like draft faster and edit harder, I like that one, I must admit. So you use an AI tool to organize your ideas
into a structured draft, or just simply improve the overall language of what you’ve done and rewrite some of it. To anticipate criticism from those who don’t think AI should be involved in any of this, I liken it to what you’d be asking a colleague to do, or that freelancer you’ve hired to help you work on this. You’d be giving them the same request as you would the AI tool.
So what’s the difference? One’s not a human; that’s probably the biggest difference. But I don’t get swayed by any of those arguments that you can’t use AI to do this. Of course you can use it. The caveat is, for God’s sake, don’t just copy and paste that into your document and send it. No, this is your assistant, not your creator. You’re the creator, and this helps you create very well, typically, all other things being equal. But I like that: draft faster, edit harder.
It’s kind of like A-B testing, or A-B-C testing possibly, with the AI assistant to help you do this quite quickly. And it’s great. Personalize with precision is another one she mentions: don’t blast out the same email to 50 journalists, which is what many people do, it seems to me. You’ve got the ability to personalize those emails. And again, you know,
your AI can help you with drafting them. It will need to know quite a bit about the journalists and your relationship with them, if you’re the PR person, so there’s quite a lot of prep work you’d need to do first. But the output from the AI will be pretty good if you do it right. These are things that take the method you might currently be using, which is simply prompting the AI, to a totally different level.
And that’s what you’ve got to be thinking about now, because this is where it’s all going. This is way beyond just a simple chatbot. So it’s a really good topic, and these reports that you’ve highlighted, Shel, are great. Pete Pachal’s post is excellent. We’ve got to have him back for another interview, I think, because we interviewed him when he was just starting his business, and he’s gone places with that business now. So it’s worth reading.
Shel: Yeah, I think so.
Neville: That and the PR Daily report. I do like those five points.
Shel: Yeah, remember we interviewed Aaron Kwittken from PRophet? That’s PRophet with a capital PR. One of the things that system did was identify reporters who have written about a topic. It reviews the content they’ve written over the recent past and crafts a personalized pitch for each of them, which you can then go in and edit. I don’t think you…
Neville: Yeah, I do. Yeah.
Quite a while ago.
Shel: Sorry, Aaron, if you’re listening. I think you provide a great service, but people don’t need this anymore, because you can create an agent that does that: identify the reporters who have written about this topic, review their most recent articles, and craft a pitch for this press release. That can be done now internally, with an agent that would probably take about an hour to create. I mean, agents can go out and do amazing things now. Chris Penn just wrote a post. He found somebody’s wallet on the street, and it had enough stuff in it that he could have spent a few hours tracking down who owned it. There wasn’t a driver’s license with an address. There was some cash and a couple of debit cards, but he was able to give an agent all of that information and go do his work on something else. After a couple of hours, it said, I’ve narrowed it down to these three people. Chris was able to look at those three people, figure out which one it was, someone who lived really nearby, and get the guy’s wallet back. We can do this kind of thing now in pursuit of PR objectives.
The other thing I want to say is that I’ve gotten into the habit of recording interviews and giving the transcript to AI, saying, organize this into a first draft of a press release, an article, a change notice, whatever it might be. And I don’t copy and paste that in. That’s a first draft. It’s absolutely a case of draft quickly and edit hard. I hadn’t heard that framing before, but it’s absolutely what I do these days, because it saves a lot of time and gets me into the nuts and bolts of making the piece relevant, without having to spend half that time just reviewing the transcript and organizing it into a logical flow.
I think it’s a great use of AI and it’s one that I’ve been using for, geez, a couple of years now.
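For anyone who wants to try the reporter-research agent Shel describes, here is a minimal Python sketch of what that loop might look like. Everything in it is illustrative rather than from the episode: llm_complete() is a stand-in for whichever model API you use, fetch_recent_articles() is a hypothetical helper you would wire to a media database or RSS search, and the reporter name is invented. The design point, per the episode, is that the agent prepares a draft; a human still edits and sends it.

    # Minimal sketch of a pitch-research agent loop. Hypothetical helpers:
    # llm_complete() stands in for a real model API call, and
    # fetch_recent_articles() would be wired to a media database or RSS search.

    from dataclasses import dataclass, field

    @dataclass
    class Reporter:
        name: str
        outlet: str
        recent_articles: list = field(default_factory=list)

    def llm_complete(prompt: str) -> str:
        # Replace with a call to your model provider; this stub just echoes.
        return f"[model output for a {len(prompt)}-character prompt]"

    def fetch_recent_articles(name: str) -> list:
        # Hypothetical: query recent coverage by this reporter.
        return [f"Placeholder text of a recent article by {name}"]

    def draft_pitch(reporter: Reporter, press_release: str) -> str:
        """Summarize the reporter's recent themes, then draft a pitch for human editing."""
        themes = llm_complete(
            "Summarize the recurring themes and open questions in these articles:\n"
            + "\n---\n".join(reporter.recent_articles)
        )
        return llm_complete(
            f"Reporter: {reporter.name} ({reporter.outlet})\n"
            f"Recent themes: {themes}\n"
            f"Press release: {press_release}\n"
            "Draft a short, specific pitch tying this release to the reporter's beat. "
            "If the fit is weak, say so instead of inventing relevance."
        )

    if __name__ == "__main__":
        rep = Reporter("Jane Doe", "Example Daily", fetch_recent_articles("Jane Doe"))
        print(draft_pitch(rep, "Acme Corp. launches a new widget."))  # a draft, not a send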
Neville: Yes, I agree. So there are things that you’re accustomed to that work for you. But pay attention to this kind of thing, because this is taking it to another level that will benefit you. You need to clearly understand what this is, and Pete’s article and the PR Daily piece are two sources that will help you do that. It’s definitely worth a look.
Our next story is a very different one. It’s not about something going wrong, and it’s not about AI; it’s about something going exactly to plan. Apple announced that Tim Cook will step down as CEO later this year and become executive chairman, with John Ternus, currently head of hardware engineering, taking over the role. On the face of it, this is a major moment: a CEO transition at a company of this scale, worth what, trillions? Currently the second most valuable company in the world; it was number one not long ago, so it might regain that spot. A transition like this often creates uncertainty internally, in the markets, and across the wider ecosystem. What’s striking here is how little disruption there seems to be. There were no leaks. The announcement landed cleanly. The market reaction was muted, and the tone throughout was calm, controlled, and focused on continuity.
And that’s the real story, I think, because this isn’t just a leadership transition. It’s a masterclass in how to communicate one. If you look at the messaging, everything reinforces stability. Cook isn’t disappearing; he’s staying involved as executive chairman. Ternus isn’t positioned as a bold new direction; he’s presented as a long-standing insider, deeply embedded in Apple’s culture and products. There’s no sense of rupture, just a steady handoff.
The most important part of this story, though, isn’t the announcement itself. It’s what happened before the announcement. Ternus didn’t appear overnight. He’s been gradually made visible over several years, fronting product launches, appearing in keynotes, becoming a familiar presence. So by the time this announcement arrives, it doesn’t feel like a surprise. It feels like confirmation. And that’s the key insight. This transition didn’t start with a press release. It started years ago.
What Apple has done is build familiarity, credibility and trust in the successor long before the moment of change. So when the change comes, the narrative is already understood. And that changes everything because most organizations treat moments like this as announcements, whereas Apple treats them as outcomes, the result of a story that has been deliberately shaped over time. That has practical implications because when transitions feel chaotic or disruptive, it’s often not because the change itself is unexpected.
It’s because the story hasn’t been prepared. The successor isn’t known, the narrative isn’t clear, the organization is reacting in real time. Apple avoids that entirely, not by communicating more in the moment, but by communicating earlier, by building trust before it’s needed. And that’s where this becomes relevant for leaders and for the communicators advising them. The real question isn’t how you announce a change; it’s how early you start preparing people to understand it.
Shel: Yeah, they didn’t treat this as a sudden disclosure. This was more continuity, without pretending that nothing was changing, right? It does a lot of reassuring work, not just about Cook staying in the current role through the summer and then remaining as executive chairman, where he’s going to work closely with Ternus during the transition. It also talked about Ternus’s ties to Steve Jobs and to Apple’s mission and its values. And that language isn’t an accident. I think the lesson for communicators is that a leadership transition needs both facts and emotional reassurance, right? Employees don’t just want to know who reports to whom. They want to know whether the company they believe in is still the company they believe in. I do like, in PR Daily’s report, the discussion of the different audiences.
They didn’t send one announcement everywhere. They had public messaging and employee-facing messaging, and the two serve different purposes, right? The public version celebrated the legacy and the confidence they had in this transition. The employee version was warmer, more grounded. And I mean, this is communication 101 in a lot of senses, but it’s still something we should emphasize. Consistency doesn’t mean identical language. Employees
deserve a message written for employees, not a copy of the press release with Dear Team pasted on top.
Neville: I agree with you, Shel. This is an excellent example of how to do that. And yes, there wasn’t a single message; that’s very true. It was tailored messaging that showed a clear understanding of those different audiences, internally and externally. So there’s a lot you can learn from that. And indeed, Ragan’s article by Allison Carter has some good insight you can take learnings from. It’s worth reading that article too.
So I called it a masterclass, and it probably is one of the best examples I’ve seen. Not so much the press release itself, but what led up to it and all the other communication that then occurred, the buildup. And I realize, too, of course, that some organizations won’t know until nearer the time of the announcement that there’s going to be a change, so this isn’t necessarily a blueprint you can apply to everything. But in the case of Apple, news about what the company is doing has a big effect on people. Steve Jobs was a magnetic, mercurial personality who famously inspired that great phrase, the reality distortion field, one I’d often apply to Trump. It was his trademark, in a sense; he was mercurial without doubt in how he led. One thing that is notable, although it’s certainly not emphasized in any way, is that Ternus is a hardware guy, whereas Cook is a management guy. Cook took over from Steve Jobs and transitioned Apple, over that decade-and-more period, to where we are now. With the changes going on in the world generally, and in the tech industry in particular, leadership probably does require more of a hard-nosed technological approach than pure business management.
And of course, if Cook’s going to be the executive chairman, he’ll be there to assist here and there. It’s an interesting time, looking at a company like Apple, to see this happen.
Shel: Yeah, and the press release sends some messages without explicitly saying anything. First of all, the fact that they picked a hardware guy says a lot about where Apple is heading. They’ve faced a lot of criticism for their failures around artificial intelligence, which isn’t even mentioned in the press release. That in itself sends a
Neville: Yeah, it’s not mentioned.
Shel: message. What have been Apple’s wins under Cook’s leadership? I mean, the Apple Watch was a big one, and a lot of people thought it wasn’t going to be; they kind of laughed when it was introduced. But there are a lot of people wearing Apple Watches out there now. Big success. Mainly, though, he consolidated manufacturing in China, which may not end up being a great thing, but it’s made them a ton of money. What did he do, triple their revenues? And, as you said, they’re the number two most valuable company in the world. Now they’re going to refocus on hardware, on product, the stuff that has made Apple from the get-go. Software? I mean, you can talk about iOS and the computer software platforms they produce, but you never hear a lot of discussion of those at their big events. It’s, you know, we’re coming out with a watch; we’re coming out with a Vision Pro, which has been something of a failure. So this is a re-emphasis on hardware. They’ve made that point. Casey Newton came right out on Hard Fork and said that Ternus’s first act should be doing the deal with Google to integrate Gemini into Siri and being done with that whole thing, because Siri was supposed to get that AI update.
It’s been a couple of years now and it just hasn’t happened. So they have said a lot in this press release and not all of it was necessarily explicit.
Neville: No, you’re absolutely right there. So, an interesting time in the tech industry generally; we’ll see what happens with Apple in the coming year or so.
Shel: Well, my favorite new word is slopaganda, which refers to AI-generated propaganda that’s cheap, fast, emotionally loaded, and designed less to strategically persuade anybody about anything than to just flood the zone with images, memes, fake scenes, shareable outrage. The most visible example of slopaganda right now is Iran’s use of AI-generated, Lego-style videos aimed at Donald Trump, Israel, and the U.S.
They’re far from subtle. They show caricatured Lego versions of Trump, Benjamin Netanyahu, missiles, burning ships, collapsing American power, and they use rap tracks, absurdist humor, conspiracy references, and the visual grammar of social media, not the language of state diplomacy. The New Yorker reported on Explosive Media, an Iranian digital media enterprise that got started posting pretty routine anti-Western content that didn’t get a lot of uptake. Then it discovered that AI-generated, Lego-style propaganda cartoons were its breakout format. The clips accumulated millions of views. They were reshared by Iranian government accounts, promoted by Russian state media, and even picked up by anti-Trump protesters because the imagery was so flamboyantly anti-Trump.
The group told the New Yorker that it could produce a two-minute video in about 24 hours. Le Monde adds an interesting scale point. According to Cyabra, a company that analyzes content to distinguish authentic activity from coordinated manipulation (that’s right off their website), pro-regime videos received more than 145 million views across X, Facebook, Instagram, and TikTok during the second half of March.
Explosive Media eventually acknowledged to the BBC that the Iranian state was one of its clients; it had initially claimed it was all independent. And this has captured a lot of attention, first because it’s visually disarming. Lego is familiar, playful, global. It turns geopolitical violence into something that looks like entertainment. Analysts say the Lego format serves as a kind of Trojan horse,
reaching people who wouldn’t otherwise engage with war-related content. It also works because it’s emotionally true to people who’ve always wanted to believe the underlying message. Viewers may not literally believe Iran is winning the war in the way videos depict, but they can choose to believe the emotional premise that the U.S. is weak, Trump is ridiculous, and Iran is standing up to a global oppressor.
And it works because it speaks the language of the target audience. This isn’t old school propaganda. It’s fast, caustic, meme-literate, and platform-native. In information warfare terms, this gives Iran something it used to lack, cultural reach into Western audiences. It lets Iran fight asymmetrically using ridicule and narrative disruption where it can’t match the U.S. militarily. But this is not only a geopolitical story.
The same tactics are going to show up in business. Maybe not tomorrow in Lego form, but the pattern’s just too useful to stay confined to politics. An activist shareholder can use an AI-generated video to ridicule a CEO, to dramatize a company’s alleged mismanagement, or turn a dry governance dispute into a viral morality play.
A disgruntled customer could generate convincing scenes of product failure, employee misconduct, or customer mistreatment. A labor dispute could be amplified with synthetic stories that blur the line between real worker grievances and invented incidents. An unscrupulous competitor could seed “just asking questions” content that implies safety failures, financial instability, executive hypocrisy, or environmental misconduct. An example from Canada matters here. The Canadian Digital Media Research Network identified a coordinated network of 20 inauthentic YouTube channels targeting Albertans with nearly 40 million views. The channels exploited real grievances and pushed narratives normalizing a move for secession and even U.S. annexation of the province. The report says the accounts pushed an Albertan perspective, but researchers found absolutely no evidence that the account owners were actually Albertan. That’s the bridge to business. Slopaganda doesn’t have to invent grievances. It can exploit real ones. A company with a safety incident, a layoff, a product recall, a labor dispute, or an unpopular executive decision is already vulnerable. AI just makes it easier for hostile actors to package that grievance
into emotionally potent, shareable content. So what should communicators do about this? Well, first, obviously, build monitoring capability for synthetic narratives, not just mentions. The risk isn’t one fake video. The risk is a pattern. Repeated themes, recycled scripts, coordinated accounts, sudden spikes, and emotionally consistent attacks. Second, prepare your verification protocols now.
If a video appears showing something damaging, who determines whether it’s real? Legal? Security? Comms? IT? Outside forensic consultants? You know, that first hour is really important, so knowing who to go to to find out whether something is real is critical. Next, strengthen your owned record. If AI systems and social audiences are going to interpret your organization through fragments, make sure there’s a clear, accessible, credible body of truth: your policies, your timelines, FAQs, source documents, leader statements, and plain-English explanations. And finally, scenario-plan for synthetic outrage. Not just misinformation, but ridicule. Memes move differently than allegations. A dry correction rarely defeats a funny attack. Communicators need response options that are fast, human, factual, and proportionate.
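On the monitoring point, pattern detection doesn’t have to start sophisticated. Here is a minimal Python sketch of two of the signals Shel names, sudden spikes in a theme and recycled scripts across accounts, assuming you already export mentions from a social listening tool. The sample data and the similarity threshold are illustrative assumptions, not from the episode.

    # Minimal sketch: flag theme spikes and near-duplicate "scripts" in a
    # mentions export. Sample data and thresholds are illustrative only.

    from collections import Counter, defaultdict
    from difflib import SequenceMatcher

    mentions = [
        {"day": "2026-03-01", "theme": "layoffs", "text": "They fired everyone and lied about it"},
        {"day": "2026-03-02", "theme": "layoffs", "text": "They fired everyone and lied about it!"},
        {"day": "2026-03-02", "theme": "recall", "text": "An unrelated product complaint"},
    ]

    def daily_counts(mentions):
        # Theme -> day -> mention count; a sudden jump is worth a human look.
        counts = defaultdict(Counter)
        for m in mentions:
            counts[m["theme"]][m["day"]] += 1
        return counts

    def recycled_scripts(mentions, threshold=0.9):
        # Near-identical wording across posts is a signature of coordination.
        flagged = []
        for i, a in enumerate(mentions):
            for b in mentions[i + 1:]:
                if SequenceMatcher(None, a["text"], b["text"]).ratio() >= threshold:
                    flagged.append((a["text"], b["text"]))
        return flagged

    print(dict(daily_counts(mentions)))
    print(recycled_scripts(mentions))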
And, you know, one last question to address here: should communicators use slopaganda themselves? No, they shouldn’t. Not if we’re talking about deceptive, synthetic, emotionally manipulative content designed to obscure truth. That’s not communication, man. That’s reputational arson. But communicators absolutely should learn from the format. AI-generated creative can be used ethically if it’s clearly labeled, truthful, brand-safe, and grounded in real information. But understand that attention has moved toward visual, fast, emotionally resonant storytelling, and we should move along with it.
Neville: Yeah, it’s an interesting topic, isn’t it, Shel? Your point that no communicator should do this is a message the US government is clearly ignoring, judging by what they have been doing, or the White House, I should say, and that’s reflected back in what the Iranians are doing, and their proxies, and indeed individuals by the thousands doing the same. So misinformation, disinformation, fakery, it’s everywhere.
I read a post about this at the end of March that looked deeply into what’s being AI-generated and how it’s being used by both sides. And there are a number of reports. Notably, Deutsche Welle, the English-language news service from the German broadcaster, and France 24 as well, had some really good, well-researched articles with examples of what’s happening in this area. There’s a great one someone posted showing a Lego box of, well, you can visualize it from the descriptions we see on the TV news all the time: residential buildings, apartment blocks in ruins, blown to bits, all made out of Lego bricks, with the Lego logo, so it looks exactly like a Lego product. So a brand is being, you know, brought into this unwittingly.
But the reality is that communicators are between the devil and the deep blue sea here, I think, because if you’re in a business, you’re not in the defense industry and you’re not involved in anything to do with a war going on, yet some of your clients are on the fringes of all this by the nature of their business. So they’re dragged into it. In the case of Lego, a good example, what do you do about it? Do you respond in kind with some kind of jokey thing about, you know, whatever it might be, Iran in this example? It’s a tough call, I would say. There seems to be a movement, if you like, toward treating this kind of thing as a matter of normality, and I think it’s very dangerous.
Philip Boramantz had a really good piece in the middle of March on what this war is teaching us about communications generally, not specifically crisis communication, although that’s mentioned in there. The BBC had a report in early March about AI-generated Iran war videos, a surge of them, as people get the tools to create these things. So that is part of our landscape.
It’s a question with no easy answer for communicators. The question you asked doesn’t have an easy answer, but it may be one we have to find an answer to. That’s easy to say; I don’t know what that answer is. The most striking thing that occurred to me is that it’s not the sophistication of these tools; the videos are not slickly produced. They are produced, I would say, by those who are savvy with social media and social networks, who know what works in terms of spreading: what is spreadable, what is memeable. And we are not part of that. If you’re not there, people are talking about your brand and you, and you’re not in the conversation. So, you know, it’s a big question. But it’s something we have to try to understand, what’s happening, and somehow come up with an answer.
Shel: Yeah, you raise an interesting point about brand safety. Is Lego going to issue a takedown notice to Explosive Media, a digital media company in Iran? Probably not, knowing that they’re not going to respect a takedown notice, and there’s no court you can necessarily go to. So basically you kind of have to live with it if you’re Lego, and I suspect that’s what they’re doing. But from a communication standpoint, it’s really important to understand that the Iranian-produced stuff is getting far more traction than the US-produced stuff. And the reason for that is that it leverages grievances the Iranians already had, and other people around the world already have, with the US. The US stuff is just showing the attacks on Iran. And if you think about the average American, or perhaps even the average Brit, what grievances do they have against Iran?
I mean, the grievances here are within the government, not within the broad population. And that’s why these are so effective: the Iranians and other populations around the world do have grievances, justified or not. So as you look at this stuff moving into the business world, consider what kind of grievances people might have with your organization. That’s where they’re going to attack you. That’s where you have to build up your defenses now, before they do.
Neville: Yeah. So it’ll be interesting to see where we are, at not quite halfway through the year, quite a bit less than halfway, actually. It makes me wonder about the big picture of trust and the reporting we see on that, notably the Edelman Trust Barometer. What changes are we going to see as this year plays out, as it were? We have a war in the Middle East that…
Shel: It’s still going pretty fast.
Neville: Anyone who has even a fleeting interest in what’s going on in the Middle East knows this is a situation that has been the case for millennia, frankly. But in modern times, since 1948 and the creation of the State of Israel, this has been happening in the Middle East: a war, one way or another, between tribal factions, and then states have gotten involved. Iran, from what I can understand, has long been a thorn in the side of US governments, across different presidents, over the decades. It doesn’t resonate the same way here, notwithstanding some things that happened decades ago in the UK. But, you know, there’s the notion that the US really was the only country that could do what they did: bomb Iran and start a war, undeclared, not asking anyone to help them, and then complaining when no one came to their aid. So it’s a dreadful situation, the war itself, obviously, but also the murkiness of what it has created in the context of what we’re talking about. We’ve talked about this element before: you do not control the message anymore, even if it’s about you. That has never been more true than in what we’re seeing right now. The Iranian government doesn’t control any of the messages, not really. It’s anyone who’s got an internet connection and a tool to create an AI-generated video, or whatever it might be, and then share it online. That’s who’s got control, but only in a limited way, because it then goes out there and anyone can do anything with it. It’s making it onto some traditional media, not just social. So who knows where it’s all going to go, Shel. And as this war continues, without any sign that it’s going to suddenly stop, this is the new normal.
Shel: Yeah, and keep in mind, we’re not talking about a single piece of content. We’re talking about flooding the zone with multiple pieces of content that reflect the same grievance, make the same points, and have the same punchline, content that gets people to watch and appears wherever you might be getting your content. So you’ve got to look at it that way and take steps to deal with it, because, yeah, I don’t have a good business example yet, but I’ll happily make a bet with somebody that within two years we’re going to see this kind of content aimed at business, from that disgruntled investor or unhappy customer or whoever it is. It’s just so easy to do. And that’ll wrap up this episode of For Immediate Release. Our next monthly long-form episode is scheduled to drop on Monday, May 25th.
Neville: Great fun.
Shel: So we’ll be recording on Saturday, May 23rd. In the meantime, we hope you’ll comment. As always these days, all of the comments we shared in this episode came from LinkedIn. You’re welcome to look for our announcements of new episodes on LinkedIn, Facebook, Threads, or Bluesky, and we’ll check for comments there. But you can also send them to [email protected].
I’m going to come up with a contest, which I’ll probably announce in the May episode, for an audio comment. Anybody who submits an audio comment, we’ll put your name in a hat and draw a winner, and you’ll get something. I’ll have to figure out what. We don’t have FIR merch anymore. Maybe we should start that again; we’ll come up with something. But you can leave an audio comment by attaching an MP3 file to an email,
Neville: Ha ha ha ha.
No, we don’t. Maybe we should, we’ll come up with something.
Shel: or by clicking the record-voicemail tab on the right-hand side of the FIR Podcast Network website. You can also comment on the show notes that we leave on the FIR Podcast Network. So many ways to leave a comment. And we have a community on Facebook and an FIR page on Facebook; any of those places will do. We also hope that you will leave your ratings and reviews of FIR wherever you get your podcasts. And we will be resuming our short midweek episodes next week; look out for those. The best way to get those is to subscribe to For Immediate Release. And that will be a 30 for this episode of For Immediate Release.
The post FIR #513: Why Communications Must Build the Narrative Code for the Agentic Age appeared first on FIR Podcast Network.
27 April 2026, 7:01 am - 25 minutes 54 seconds
FIR #510: Should Companies Embrace Shadow AI?
Employees have long found ways to use software tools to get the job done, even when those tools are not approved. It’s called Shadow IT, but ever since generative artificial intelligence hit the scene in 2022, employees have adopted a new version: Shadow AI. The company approves Microsoft Copilot, but employees opt to use their smartphones or personal laptops, along with their personal accounts with ChatGPT, Gemini, Claude, Midjourney, or whatever best suits their needs.
For most companies, this is a problem that needs to be addressed through repeated policy announcements and vigorous crackdowns. One company, though, took a different approach. In this short, midweek FIR episode, Neville and Shel outline what the company did and how communicators might advocate for a version of this approach to aiding in AI adoption and speeding up productivity gains.
Links from this episode:
- The Hidden Demand for AI Inside Your Company
- Shadow AI Threat Grows Inside Enterprises as BlackFog Research Finds 60% of Employees Would Take Risks to Meet Deadlines
- FIR #419: Is Shadow AI an Evil Lurking in the Heart of Your Company?
- The Rise of Shadow AI is a Double-Edged Sword for Corporate Innovation
The next monthly, long-form episode of FIR will drop on Monday, April 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel Holtz: Hi everybody, and welcome to episode number 510 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: And I’m Neville Hobson. There’s a quiet tension playing out inside many organizations right now. On one side you have leadership teams, IT, legal, and compliance, all trying to put structure, governance, and control around how artificial intelligence is used at work. On the other side you have employees who’ve already moved on. They’re not waiting for official tools. They’re not sitting through pilot programs. They’re not asking permission. They’re opening ChatGPT on their phones. They’re using Claude in a browser tab. They’re experimenting quietly, often invisibly, finding ways to make their work faster, easier, and sometimes better. And in many organizations, this shadow AI behavior is still being treated as a problem — something to restrict, monitor, or shut down. It’s a topic Shel and I discussed on this very podcast in episode 419 nearly two years ago, and it hasn’t gone away.
Neville Hobson: In fact, recent data suggests it’s accelerating. A study last November by BlackFog and Sapio Research found that nearly half of employees surveyed in the UK and US are using unsanctioned AI tools. Even more striking, 60% said they would take security risks with those tools if it meant meeting a deadline. So this isn’t fringe behavior — it’s become normal. An article in the Harvard Business Review this month argues that instead of treating unauthorized AI use as a compliance issue, organizations should see it as a signal — a sign that people are already finding value in these tools, even if the organization hasn’t caught up. We’ll explore that idea in just a moment.
Neville Hobson: The article calls this the hidden demand for AI inside your company. And when you look at it through that lens, the picture changes quite dramatically. Because instead of asking, “How do we stop this?” you start asking, “What are we missing?” The piece goes further than theory. It looks at what one organization actually did when it recognized this dynamic: BBVA, a Spanish multinational financial services company with more than 125,000 employees. Rather than clamping down on shadow AI use, they moved quickly to provide a secure enterprise environment. But more importantly, they didn’t try to control everything from the center. They took a different approach. They identified and empowered what they call “champions” and “wizards” — the people already experimenting, already curious, already building things. They created a network, a community of practice, a way for ideas, use cases, and practical solutions to spread peer to peer across the organization.
Neville Hobson: And the results, at least as reported, are striking: thousands of employees actively using AI tools, thousands of internally created applications, and measurable time savings of hours per person every week. But perhaps the most interesting part isn’t the numbers — it’s the philosophy behind it. The idea that successful AI adoption doesn’t start with a perfectly designed top-down strategy. It starts by recognizing that innovation is already happening, just not where leadership expects it. So the question becomes: do you try to control that energy, or do you find a way to harness it? And that opens up a much broader conversation, one that goes well beyond technology. It touches on leadership, trust, and culture — on how change actually happens inside organizations. And, importantly for communicators, on how you surface, legitimize, and guide behavior that may already be happening under the radar.
Neville Hobson: Because if employees are already using these tools — and most evidence suggests they are — then silence or restriction alone isn’t really a strategy; it’s a gap. So in this conversation, we want to explore that gap. What shadow AI really tells us about organizations today, whether the BBVA approach is something others can realistically replicate, and where the risks still sit, because they have not disappeared. And we should be clear: BBVA may be an outlier. It’s a highly data-mature organization with strong leadership alignment. Many organizations don’t have that foundation. So the question isn’t just whether this works — it’s whether it can work anywhere else. And what that means for the future of work, and for the role communicators play in shaping that future. Shel?
Shel Holtz: Well, a few thoughts, starting with the fact that BBVA has the financial resources to provide a secure environment for those tools that employees are using. There are many organizations whose IT budgets are razor thin and don’t have those resources, so they would need to figure something else out. But I think there’s a caution here worth raising. The numbers from BlackFog are real, even if the framing from the Harvard Business Review is optimistic: 34% of employees using free versions of tools when paid, approved versions exist; 58% of unsanctioned users on free tiers with no enterprise protections. The reframing from threat to signal doesn’t eliminate the exfiltration risk — it reframes how we need to respond to it.
Shel Holtz: Communicators should be careful not to let the BBVA-style narrative become an excuse to ignore governance. The right frame is: harness the demand, don’t suppress it, and build the governance at the same time. Employees using unsanctioned tools and putting secure data and company information into them — that’s a governance risk, and I don’t think we can ignore it. I mean, I think what BBVA did is great, and I think they baked it into some governance while looking at a new approach they could afford to take. But for many organizations, governance is still a requirement.
Neville Hobson: Well, I agree. It’s important and it’s not to ignore by any means. I think, Shel, you fleshed out a little bit the survey that I mentioned, which is actually useful to have that level of detail. But the big question for me is: if this is the picture in many organizations, according to that survey — compared to data previously — this is getting worse, or rather, it’s happening more frequently. People are just going ahead and using what works for them as opposed to what’s the official thing. What is that a symptom of? Maybe a lack of trust? It’s probably a mix of things. And to me, the communicator’s role here seems to be to try and help people on the one hand understand what the tools can do for them, and on the other hand to help the organization understand that we need to address this issue. People aren’t using the approved ones. They’re doing stuff on their own, and that isn’t good.
Neville Hobson: You mentioned security risks. The Harvard article goes into some detail about that, as indeed do the people who conducted that survey. You can just picture the severe risk. We’ve seen examples in recent months of organizations that have suffered from unauthorized use of unapproved software tools — not necessarily generative AI tools, but software certainly. And it’s a big deal. So the question — do you try to control all this and look at ways to stop it? — we asked this very question two years ago in our conversation, and we could probably just insert the recording from then and replay the answer. But let’s talk about it. I don’t think they should try and stop it personally. That’s a fail. There’s no win in that at all, certainly not for the organization. So how would communicators go about that, do you think?
Shel Holtz: Well, I’m not suggesting that organizations crack down on this and become Big Brother, looking at the tools that people are using, especially when they’re using them on their personal phones or personal laptops. But there are definitely things communicators can do. The first is to surface and amplify the internal use cases — not just the fact that people are using these tools, but what they’re using them for. When the security people and the legal people find out that this is actually driving effective work product from these employees, I think there might be more appetite for figuring out a way to bake this into the governance documents and policies the organization has established.
Shel Holtz: And I think giving employees permission narratives — telling them it’s safe to experiment, letting them know how to do it, and suggesting where the guardrails are — matters. So if you are using shadow AI, here are the things to be careful about. Let them know what the risks are and how to avoid falling into those traps. Communicators can also translate the IT and legal guardrails into plain language that doesn’t read as prohibition, because prohibition just leads to negative thoughts from employees about the organization, and then they’ll just continue using what they’re using. And then there’s collecting and routing the demand signal back to leadership. Why are employees using these when there are approved tools around? What are the advantages? So that leadership can make investment decisions that match the patterns of usage employees are actually engaged in. There’s a lot of work here for communicators that goes beyond simply saying, “Don’t do this.”
Neville Hobson: Agreed. And in fact, you can learn from much of what BBVA did, even if you’re not an organization with that established foundation and 125,000 employees. They did things most companies aren’t going to be able to do. For instance, they reached an agreement with OpenAI and deployed a customized version of ChatGPT Enterprise in a secured, exclusive cloud just for the company. The reasoning is interesting. What the Harvard Business Review report says is that the strategic decision was clear: it was more dangerous to have unmanaged, hidden AI usage than to rapidly deploy a managed, secure solution that aligns with existing needs. Most companies aren’t going to be able to do that. So it comes back to perhaps what you’ve just proposed — explaining it to people, the pros and cons, the risks, and so forth.
Neville Hobson: But I think you need more than that, too. Otherwise, you’re going to have significant numbers of people who will ignore it and just go ahead anyway with what they’ve been doing. So maybe elements of what BBVA did — for instance, the network of internal champions and expert wizards to spread knowledge, rather than the formal top-down communication you might expect. You’d have people within the organization who are knowledgeable, who have a history of responsible use themselves, who can help explain to others and help them replicate that. You end up, I think, with steps toward broad compliance that everyone can buy into. That would be helpful, because I can see that the idea of anyone in an organization of whatever size just doing their own thing with whatever tool they like is not a good idea at all.
Neville Hobson: And that isn’t unique to this. We’ve had that kind of conversation in decades past about software. I remember when Hotmail first came out, and when Microsoft Network first came out, the arguments in organizations — and indeed the one I worked for at the time — was, “You’re not allowed to use this on your company laptop, so use it on your own,” stuff like that. That’s definitely not a good thing. So you need to act to address issues like that so that people trust you and respect you and are willing to follow a restriction — or a behavior change, if you like — that would help. It’s interesting, the learning you can get from BBVA’s example, even though you’re not an organization that size with a budget to match. It’s a lot about education. It’s trusting employees, absolutely, as you pointed out, Shel. But I think that’s a two-way street. You need to have a quid pro quo: if you have these freedoms to use whatever you want, you need to do it responsibly. Share your learnings with others in the organization. Things like that. To me, that seems like a really good place to communicate.
Shel Holtz: Yeah, there’s communication happening at BBVA. They have 11,000 active users and 4,800 custom tools being used by those folks. That didn’t happen because the communications department posted an article about them. This was peers talking to their peers about what was working. It validates something you and I have been talking about for years, which is that authentic, lateral, employee-to-employee storytelling beats top-down cascades every time.
Neville Hobson: Precisely.
Shel Holtz: But it is communication. And why wouldn’t that be something the internal communications department jumped on and helped to facilitate — providing the channels for that, rather than the sneakernet that’s probably happening now? And also, because they’re engaged and trying to keep this from happening below the surface, they’re in a position to identify the use cases worth taking to leadership. The BlackFog survey you referenced found that almost 70% of C-suite executives believe speed is more important than privacy or security. So if people are getting things done faster — if you can demonstrate that there actually is productivity improvement happening, and it’s because of the tools employees are using that aren’t approved — I think that’s motivation for leadership to look at either approving those tools or finding ways to allow people to use their own accounts while protecting the integrity of their data.
Neville Hobson: Yeah. The results the Harvard Business Review reports from BBVA are worth noting, even though the scale isn’t what many companies would experience. They talk about 80% of usage of the system they set up coming through direct chat prompting, and the remaining 20% through employee-created GPTs. Now, this is not shadow AI — it was part of the rollout of what they did. But these numbers are quite impressive. Over 83% of employees now use the system every week, averaging 50 prompts per week. That’s above comparable enterprise deployments, says the review, quoting OpenAI. Users report average time savings of two to five hours per week — a number worth noting. More than 4,800 custom GPTs have been created internally, and they’re used three times more frequently than the enterprise average. So they’re ahead of the game in that regard. The article goes into more detail about which departments are more active than others, and so forth.
Neville Hobson: It also prompted a thought in my mind: the other surveys I’ve seen and other reporting on the resistance from leadership in organizations — that isn’t minor. It’s not a little thing. It happens, unfortunately, too frequently. I’m thinking of keystroke logging on employee usage, auditing computers surreptitiously and covertly without telling them, watching which apps they’ve installed — and indeed, probably more common, your company laptop refusing to install things that aren’t on an approved list, or reporting to IT that you tried to install stuff. This is a dreadful situation in organizations. It’s common, but we’re going to see more of it, I think, because that seems to be the way of the world these days on distrust. This is a diminished-trust environment we’re talking about. So in all of that, where do we sit in terms of enabling stuff like this? We can see the advantages of allowing employees to use tools like this. I think the better way is to try to do something within the framework of the organization — not, “Oh sure, go ahead and use ChatGPT whenever you want on any device, no big deal.” I wouldn’t be keen on that. I wouldn’t stop it, but I would look at ways of weaning people off that approach. We have to help them and encourage them to do this. And that, I suspect, is a hard task for communicators — to persuade leadership to do that if the climate in an organization is resistant to it anyway.
Shel Holtz: Well, I think it is a hard sell to leadership, but we have data. We’re supposed to be engaging in two-way communication and facilitating two-way communication. One of the roles of internal comms is listening. And it doesn’t have to be through direct information that you get from people through focus groups or surveys — it could be this BlackFog survey. When 49% of employees are using unsanctioned tools, and 63% think that’s fine as long as there’s no approved option for what they want to do, you may look at that as rogue behavior, but you can also look at it as market research. And communicators are the people in the best position to translate that data into something actionable for leadership. You take that to leadership and say, “Look, this is what’s happening. We’re the ones who can interpret what the behavior means and pass that along to leadership.”
Shel Holtz: I think part of our role is that listening through the data that’s already out there — and maybe what we can determine is going on in our own organizations — and taking that to leadership and saying, “Look, this isn’t going to go away if you crack down on it. It’s not going to go away if you block installation on company laptops. People have their own phones. People have their own laptops and tablets. This is going to continue.” And this isn’t new. I mean, this goes back to the earliest days of computers. I think I’ve mentioned this once or twice on the show, but I needed to produce charts and graphs in the mid-‘80s, and I wanted to use Harvard Graphics because somebody had shown it to me and it was what worked, and the company had a different program that was terrible. So I just used Harvard Graphics. I bought my own copy and installed it. There were no blocks back then — you put the floppy disks in the drives and it installed. People are going to do what they need to do to get the job done. Maybe some will pay attention to what the official rules are, but I think the governance needs to be flexible enough to adapt to this. I applaud BBVA for what they did. Again, I don’t think every organization is in a position to replicate it, but I think you can take lessons from what they did.
Neville Hobson: You can. Not everyone can roll out what they rolled out — enterprise licenses and so forth — but some of the things they went about, and how they went about them, definitely. One thing the review article points out quite strongly — a very, very good thing — is that they say, toward the end of their conclusions, that in whatever you do, there must be a hard human-in-the-loop rule. Human employees should always own the work. There should not be direct writes to core systems. Internal GPTs need quality scores and guardrails. They specify scope and context, include samples, and so on. This is simple, scalable, and non-bureaucratic.
Neville Hobson: So that’s something that kind of ties back into this emerging phrase — if it’s even emerging — of human-centered AI. Let’s look carefully at this. It’s about people first, technology second, and the human needs to be in the loop. The “hard human,” as the review calls it — I interpret that as meaning someone who’s actually cognizant, aware, and able to act upon things that matter, to keep humans in the loop, to own the work, not the technology. You’ve got to think about things like that. And I think for communicators, that’s an important aspect of what they do — having in mind that element that is about the people first. So when you’re trying to persuade leaders to take a course of action you’re recommending, this needs to be in your mind too: that the humans need to be in control.
Neville Hobson: I have to say, this is great. I love stories and examples like this. I love them more than the ones that talk about disasters, although those are useful to know about as well. Yet I feel, as communicators, we have a constant, constant task on our hands to explain this to people in organizations, to help others understand. I think this is a good example — the shadow AI element. For me, if I were actively involved in an organization as the communications person, I’d be looking at: how do I persuade people not to do that? How do I persuade people to use the approved stuff? But at the same time, how do I persuade the leaders to make sure they offer employees stuff that actually works, that’s in line with their expectations, all that kind of stuff? There’s a bit of a job on their hands. And if budgets get in the way, then you’ve got an even harder job. But hey, that’s what we’re here for. That’s part of what we have to do.
Neville Hobson: These are good examples you can learn from. There are elements you could start on. And I think, like most things, Shel, you need to say, “OK, fine — this idea has a dozen constituent elements, and let’s just start with two.” So you don’t try to think, “Oh my god, this is a massive project. How on earth can we do this?” You look at just a couple of things. I like another point the Harvard Business Review makes: ensure that managers know what they’re doing. You can’t expect managers to be persuasive in encouraging others to use AI if they’re not good at it themselves. So there’s another element — you need to train them well, says the Harvard Review. At a minimum, they should learn how to write staffing notes, sensitive communications, and KPI reviews with AI help. So there are some things you could do straight away as a communicator in an organization. I’d say: good luck and godspeed, and it’ll all work out in the end.
Shel Holtz: Yeah, a manager’s role in all of this is probably an episode in its own right. I would just reiterate the point you made about the human in the loop. This is a governance element that should be overarching — not applying just to shadow AI, but to all use of AI in the organization. It should be a primary consideration in governance, not to turn things over to AI. Otherwise, you end up with fake citations going out to clients that paid a million dollars for your work — another little slap on Deloitte’s wrists. And that will be a 30 for this episode of For Immediate Release.
The post FIR #510: Should Companies Embrace Shadow AI? appeared first on FIR Podcast Network.
21 April 2026, 12:54 am - 20 minutes 49 seconds
FIR #509: Does Corporate Content Need Copyright Protection?
When bad actors use AI tools to clone a musician’s voice and upload synthetic versions of their songs, they can then file copyright claims against the original artist’s content — and win, at least initially. That’s because the systems platforms use to validate copyright claims are automated and configured to treat whoever files first as the rightful holder. The result: musicians like Murphy Campbell, a folk artist from North Carolina, lose both revenue and control of their own creative identity.
The same mechanism works just as well against any organization that publishes audio or video content online. In this midweek episode, Shel Holtz and Neville Hobson break down how the scam works, why it matters to communicators, and what you should be doing right now — before an incident forces your hand.
Links from this episode:
- AI Cloned Her Voice, Then Claimed Her Songs
- ‘This Is Not Me’: Inside the AI Scams Driving Musicians Crazy
- A Folk Musician Became a Target for AI Fakes and a Copyright Troll
- A traditional musician became a victim of AI imitations and a copyright aggressor
- ‘AI slop’: Emily Portman and musicians on the mystery of fraudsters releasing songs in their name
The next monthly, long-form episode of FIR will drop on Monday, April 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville Hobson: Hi everyone and welcome to For Immediate Release, this is episode 509. I’m Neville Hobson.
Shel Holtz: And I’m Shel Holtz. And today we’re going to talk about something else that communicators need to worry about. I think we need to develop a worry list for communicators. This one starts with a tale about a folk singer from the mountains of Western North Carolina. She’s named Murphy Campbell. She plays banjo and dulcimer and records old Appalachian ballads, some of them written by her own distant relatives. And she posts videos of herself performing in the woods. She has about 7,800 monthly listeners on Spotify. And she is, as Shelly Palmer put it in a recent column, exactly the kind of artist the copyright system was designed to protect.
In January, some of her fans started messaging her about songs on her Spotify profile that she had never uploaded. Someone had taken her YouTube performances, run them through AI voice cloning tools, and posted synthetic versions of her songs under her name on streaming platforms. These fake tracks, not to put too fine a point on it, were really bad. Her dulcimer sounded like — and these were her words — a warbled metallic mess. Her voice had been deepened and auto-tuned into what she called a bro country singer. But here’s where it gets interesting for those of us in communications, because that’s not the end of the story. It didn’t stop at impersonation.
Whoever uploaded the fakes through a legitimate music distributor called Vydia (V-Y-D-I-A) then filed copyright claims against Campbell’s original YouTube videos — the very videos the AI had been trained on. Because YouTube doesn’t use humans to review initial copyright claims, Campbell stopped earning revenue on her own content. That revenue started going to the person who had filed the copyright claims.
She described herself as being in a weird limbo where “I’m telling robots to take down music that robots made.” Shelly Palmer called this a reverse copyright scam, and he confirmed, speaking to other content creators off the record, that this is more common than he might have believed.
Now, I know what you’re thinking — music streaming platforms, artists, what does this have to do with me? And the answer is everything. Because the mechanism that elbowed Murphy Campbell out of earning royalties for her own music will work just as well against any organization that publishes content on platforms with automated enforcement systems. That is virtually every organization that has a YouTube channel, a podcast feed, or any kind of public video or audio presence.
So here’s the structural problem as Palmer frames it. The copyright system we have was built on a foundational assumption that the first entity to register a claim is the rightful owner. That assumption held when human creativity was the bottleneck. It breaks completely when AI can generate a synthetic version of any content in seconds using any voice. Think about what your organization puts out there publicly — executive speeches, earnings calls, thought leadership videos, branded audio, training content, podcasts, content marketing pieces. Every one of these is a potential training data set for someone who wants to clone your voice, your leaders’ voices, and then upload a synthetic version through a low-cost distributor. We’re talking about something that costs $25 to $90 a year. Then they file a claim against your legitimate content before a human ever reviews it.
Shel Holtz: That means the system is going to see them as the first one to file that claim and assume they are the legitimate copyright holder. Now, Rolling Stone confirmed that this isn’t an isolated case. Paul Bender, Veronica Swift, Grace Mitchell — these are just a few of the artists who have faced the same attack. One musician even ran an experiment he called Operation Clown Dump, uploading fake content under his colleagues’ names across platforms. His success rate was 100%.
So what do communicators need to do? First, audit your public content footprint. Do it now, before an incident forces you to. Know what you’ve published, where it lives, and what revenue or visibility is attached to it. Second — and here’s something that’s new for a lot of communicators — register your copyrights. Formal registration is the prerequisite for meaningful legal recourse in the United States. Third, build a rapid response protocol for platform disputes. The organizations that survived these attacks quickest were the ones who knew who to call and knew what to say. And fourth, have this conversation with your legal team today, not after something goes wrong.
Murphy Campbell eventually got Vydia to withdraw its claims, but only after her story went viral. Most organizations won’t have that option. Your story won’t go viral. The bad actor doesn’t need to win permanently — they just need the automated system to act before you do. And that is the lesson, and it’s one we’d better learn from musicians before we have to learn it the hard way.
Neville Hobson: Extraordinary, isn’t it, Shel? I guess you could call it a new phenomenon, only in the sense of the speed with which this can be done. I must admit, I’m astonished that the system is such that the first person to file the copyright claim is assigned ownership. Maybe it’s similar here in the UK — every jurisdiction is different, of course — but that’s rather unsettling. It obviously goes back to a time when people weren’t exploiting the system the way they are now. There are similar examples here in the UK of this kind of activity, where people unwittingly find that their content is being misused and misrepresented. And although I’m not aware of major artists being hit this way — though I may be wrong about that — I did see an article noting that YouTube allows some users to clone the voices of stars like Charli XCX and Sia, with their permission. But unauthorized AI covers of artists like Harry Styles — hundreds of thousands of copies — are a widespread phenomenon, and one that barely registers in mainstream news.
A number of artists have been affected, a bit like your example of Murphy Campbell. There’s one I’ve heard about: Greg Rutkowski, a Polish-born artist known for his work on Dungeons & Dragons, who found his style being used in over 400,000 AI prompts, raising serious concerns about the obsolescence of human artists. And to your point about what communicators should watch out for: your corporate communication messaging that exists as audio, your CEO on an earnings call that’s been recorded and distributed. Never mind video — audio alone, exploited at anything like that scale, is not a good situation. If you project the thinking out, this is utterly relevant to anyone publishing audio or audiovisual content online.
I find it astonishing that some platforms, notably Spotify — which features prominently in a lot of reporting on this — are being used to literally steal someone’s intellectual property by replicating it. And I think it reinforces the point that registering copyright isn’t an idle exercise. It should be front of mind, and it does other things for you as the owner of the property, too.
Something as simple as displaying a current copyright notice on your website — it’s remarkable how many sites I come across that still show “Copyright 2016,” never updated. Displaying a current notice signals that the business is active and its information is up to date. There are also tools to protect against AI scraping, though how effective they are is still unclear. Creative Commons licensing is another option, setting out the terms under which people can use your content — though that requires everyone to play by the rules, which frankly isn’t always the case these days.
Nevertheless, you’ve got some protection — or at least the peace of mind that you’ve taken steps. But it really is quite extraordinary, isn’t it, Shel? When I looked into what’s happening in the UK, I came across a recent protest: over a thousand UK musicians, including Paul McCartney, Annie Lennox, and Damon Albarn, released a silent album against proposed legislation that would allow AI companies to train on copyrighted material without consent. It struck me as a real head-scratcher: why would a government enable that to happen?
Shel Holtz: Probably very effective lobbying from the AI companies, I’m sure, is behind that.
Neville Hobson: No doubt, no doubt. But there are other things going on — organizations like the Musicians’ Union and Equity campaigning for better copyright protection, consent, and fair compensation for creators. It’s not getting much mainstream coverage, but activity is happening behind the scenes. Nevertheless, the example of Murphy Campbell and others represents a genuine threat that you need to be aware of if you’ve got content online that matters to you. Never mind the “they shouldn’t be doing this” argument — the point is, if it’s important to you, have you thought about this?
Shel Holtz: If you think about the days before the web, copyright wasn’t something most people had to worry about that much. Professional artists with record deals had people to handle it. Same with authors — someone like Stephen King never had to worry that somebody would be the first to file a copyright claim under his name and siphon off his revenue. But now you have artists who don’t get record deals — like Murphy Campbell — publishing on YouTube and Spotify, building small followings, and making a reasonable living. This is the working-class musician concept we talked about, oh, it’s got to be 15 years ago now.
The fact is, you can use Spotify and YouTube to build a following, play some small clubs a few times a year, and make enough to pay the mortgage and put your kids through school. You’re not going to get the penthouse suite from playing to 100,000 people, but you can make a living. But this has also opened up the ability for bad actors to take advantage of that. And now, with AI able to reproduce your voice and create new music at scale, all the pieces are in place for this kind of theft. Unless you’re able to get your story to go viral — as Murphy Campbell did — it’s not clear what you can do, because YouTube and Spotify have set up systems that automate this process with no human review. When you used to register with the copyright office yourself, a human was checking. Now, it’s not likely most organizations have revenue-generating content online — though I’m sure some do, and I’ve actually argued there are ways to use content to generate revenue.
For example, I’ve always loved the idea of a Webcor YouTube video series called “Building for Girls,” where our employee resource group, Women of Webcor, does a five-minute lesson every two weeks on construction to get young girls interested in STEM and engineering careers. Get enough views and YouTube starts paying you. If you don’t copyright-protect that content, someone can come along, produce similar videos, claim the rights, and suddenly your revenue is going to someone else. But even if you’re not producing revenue-generating content, there are other reasons to ensure nobody else can claim ownership of what you create — especially as content marketing demands more and more output. So yes, register that copyright.
Neville Hobson: Yeah, it made me think about watermarking for written content — though I’m not sure there’s something truly effective offering the same protection for audio and video yet. And even if there were, you’ve got situations like Murphy Campbell’s, where it’s her style and tone — the whole persona that defines her music — that’s being copied. And you don’t know about it until strange things start happening: your revenue drops, someone says “I love that new song you just published,” and you discover it wasn’t you. Or you read a review and think, wait — I didn’t write that.
Shel Holtz: Or “I hate that new song you published” — in Murphy Campbell’s case.
Neville Hobson: Exactly. I’m sure people are working on the technology. You’ve got digital rights management, which isn’t new, but I’m not sure it helps here because the issue isn’t copying your content outright — it’s imitating or repurposing it at scale. Hundreds of thousands, or millions of instances. I think the platforms need to do far more than they currently are. It’s a similar argument to what we’re hearing here in the UK about Meta and X doing nothing effective to protect children. This is in the same territory, and it needs a lot more from those platforms — who are making serious money throughout all of this. As to what exactly “more” looks like, I’m not entirely sure, but they need to do more.
Shel Holtz: Yeah, and they probably won’t until there are some high-profile, visible court cases that create real reputation issues for them — then they’ll take action. The easy thing to do right now is simply register the copyright. That’s your protection. When someone imitates you, or claims the content you produced is theirs, you have legal standing to act. That’s why you need to have this conversation with your legal team.
But I wouldn’t wait for either the platforms or the government to do anything. They’re both reluctant to act. You have the ability to do something about this right now, and it’s just a matter of working with your legal team and filing those copyrights.
Neville Hobson: Yeah, exactly. And even using Creative Commons licensing — if you’re an individual without all the formal resources, but you have a niche following, even that’s a start. Keep a record of every iteration of everything you’ve created — “I did this in 2017, here’s proof, backed up here.” That gives you something to stand on, a way to demonstrate that you can act if someone uses your content. And if you don’t do this, there’s another consequence worth considering: your original content gets buried in search results because the AI-generated imitations have somehow accrued better signals to rank higher. That kind of pollution from AI slop is its own problem.
Shel Holtz: Yeah — and then people stop paying attention to your content altogether because they’re so fatigued by the AI slop that they tune everything out. But at least this one has a solution communicators can follow: something new to add to the copyright to-do list. And that will be a 30 for this episode of For Immediate Release.
The post FIR #509: Does Corporate Content Need Copyright Protection? appeared first on FIR Podcast Network.
14 April 2026, 7:26 pm - 20 minutes 39 seconds
FIR #508: Inside AI’s Human Raw Material Supply Chain
When workers lose their jobs, many turn to gig work to earn income while waiting for new opportunities. Increasingly, companies that hire gig workers are shifting from delivering food or sharing rides to creating content to train AI systems. This raises various communication and ethical issues. Neville and Shel explain what’s happening and discuss the implications in this short midweek episode.
Links from this episode:
- The jobs AI can’t do – and the young adults doing them
- Thousands of people are selling their identities to train AI – but at what cost?
- The gig workers who are training humanoid robots at home
- Gig economy becomes new AI training ground
The next monthly, long-form episode of FIR will drop on Monday, April 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel Holtz
Hi everybody and welcome to episode number 508 of For Immediate Release. I’m Shel Holtz.
Neville Hobson
And I’m Neville Hobson. Over the past few weeks, I’ve come across a set of stories that all point to something quite striking — not just how AI is evolving, but how it’s being built. Increasingly, the raw material behind AI isn’t just data scraped from the web. It’s us: our voices, our movements, our everyday lives, and increasingly, our identities. There’s a new layer of the gig economy emerging. We’ll explore this in just a minute.

People are being paid, typically in small amounts, to record themselves walking down the street, having conversations, folding laundry, even just going about their day. That data is then used to train AI systems because those systems need examples of how people actually speak, move, and interact in the real world. In one case, delivery drivers in the US are being redirected to film tasks for robotics training. Platforms are turning existing gig workers like delivery drivers into distributed data collectors for AI. In another example, people are selling access to their phone conversations through apps that pay contributors to upload voice and text data. And in yet another, workers are strapping phones to their heads to record household chores so humanoid robots can learn how to move. The work is global, fragmented, and often invisible, with workers spanning Nigeria, India, South Africa, the US, and far beyond. Humans are no longer just users of AI — they are raw material suppliers. In China, there are even state-run centers where workers wear virtual reality headsets and exoskeletons to teach robots how to carry out everyday physical tasks. What we’re seeing is the rise of what you might call data labor, where identity itself becomes part of the work.
There’s a clear driver behind it. AI companies are running out of high-quality training data. The open web isn’t enough anymore, and synthetic data has its limits. So the industry is turning to something else: real human lived experience. Because if you want a robot to understand how to load a dishwasher, navigate a room, or interact with objects, you need examples of humans doing those things, at scale.
But there’s an interesting contrast here. One of the stories highlights a 23-year-old in the US, a guy called Cale Mouser, who earns well into six figures repairing diesel engines. It’s something he’s developed great skill in doing. His work depends on judgment, experience, and problem solving in the real world — things that don’t easily translate into data. So while some people are being paid small amounts to generate data for AI systems, others like Cale Mouser are building highly valuable careers precisely because their skills can’t be reduced to it. And that contrast feels important.
Because on one level, this new kind of work does create opportunity. For some people, especially in lower-income regions in the Global South, this is real income — paid in dollars, flexible and accessible. But there’s another side to it. Because what people are actually selling isn’t just time, it’s identity: their voice, their behavior, their presence in the world. And often once that data is handed over, it’s gone — permanently licensed, reused, repurposed, potentially in ways the individual never sees or understands.
So you have this asymmetry: individuals earning small immediate payments while companies build long-term, highly valuable AI systems. Perhaps it’s a new version of the Mechanical Turk for the AI era. And that raises a deeper question. What does it mean when the inputs to AI are no longer abstract data, but pieces of human identity? When the training set is not just content, but behavior, voice, and presence? And when those pieces can be reused, replicated, and scaled, often without the individual’s ongoing knowledge or control? Many platforms grant royalty-free perpetual licenses, where workers get paid once and lose control forever. There’s potential for deepfakes, identity theft, and misuse without consent. And perhaps more uncomfortably, what does it mean when people are contributing to systems that could automate their future jobs?
For communicators, this feels important because this isn’t just a technology story. It’s a story about trust, consent, transparency, and how organizations explain what they’re doing with AI. If AI ethics lives anywhere, it’s here — in how these systems are built and how that’s communicated. So the question to explore — one of the questions to explore, perhaps — is this one: Are we comfortable with an economy where identity itself is becoming labor? And if not, what responsibility do organizations and communicators have in shaping it?
Shel Holtz
It’s a big story with a lot to consider. On one level, it seems like the high-tech version of the sweatshops where high-end fashions were made — Nike shoes, for example — with people paying premium prices to get those products while the people making them are earning a pittance in factories with long hours and terrible working conditions. And then you add onto it the identity issue. So it’s something that I think — something at least I hope — we’re going to be talking about for a while.

In terms of the AI element, what this suggests is that the gig economy didn’t go anywhere when AI came along; it just became the training ground for AI. And it’s interesting that the workers who are being squeezed out of knowledge jobs are selling their voices and their movements to build the systems that squeezed them out. Because where do a lot of these people who are being laid off because of AI go? Well, they go drive for Uber, they go drive for DoorDash. And you do that long enough and you get really accustomed to the idea that they send you a task, you go do that task, and you get paid for it. So if that task shifts from picking up a meal at a restaurant and delivering it to somebody’s house to going to your own house and washing your dishes because that’s what they want to capture on video — it’s the same thing. You’re getting a task on the app. You’re doing the task and you’re getting paid for it. So I think for a lot of people, this is going to be a fairly easy shift, and they’re not going to think a lot about what’s happening to the information and the content that’s being created with their movements and their voices, which is now being shared and used to make a lot of money for the people who are paying a pittance to these folks.
So I see three issues here that connect directly to organizational communication. The first is consent and transparency — and I’m talking about inside organizations — because companies are already deploying AI tools trained on data that their own workers have supplied, and sometimes they’ve supplied this data unknowingly. The ethical and reputational questions that employees are going to ask are questions like: Was my voice used to train a bot that you activated in order to replace my friend who sat next to me and I had lunch with? And regulators are going to end up asking these questions too. So communicators really need to be out front with clear internal messaging about what data employees generate and how the company is using it. Let’s talk about that before I hit the other things that popped into my mind.
Neville Hobson
Yeah. I mean, the transparency element is key. That’s not new — that’s always been the case. But how organizations should communicate this may not be as simple as it might seem. I mean, the example you mentioned is an interesting one: a company uses data from its employees without them knowing. Well, let’s say — don’t do it like that. Don’t do that. You need to disclose if you’re doing this. Surely that is an ethical issue: if you don’t tell them and you go ahead and do that, that’s not what you should be doing. So there’s an easy one to address.

The other element, which is also ethics-related, is: is this whole thing ethical if participation is driven by economic necessity? Whatever reason you might give — we need to get an edge on the competition, whatever — you’re still up against that element.
That’s the big-picture ethics question. But common sense tells you how you should do this. Should individuals be compensated long-term for use of their data? On the one hand, you might say, fine, let’s tell everyone: your data may be used — your day-to-day interactions with colleagues, the recordings of your conversations on our internal Teams tool — that’s kept. So the employee might say, I’m okay with that, but I want to be compensated for it. And now there’s an interesting position.
Shel Holtz
You mean like as if they’re licensing it?
Neville Hobson
Exactly. And the organization might retort —
Shel Holtz
Well, the organization might retort: you are being paid for it. You’re being paid a salary. You come in here every day, you do your work. Read your employment agreement. I mean, this is kind of like — what was it? Velcro or Post-it notes? Maybe both — where the person who invented it never made a penny off the royalties because they were an employee of the company and the work product belonged to the company. I think organizations might be able to make the same argument here.
Neville Hobson
They could. But they’re not sure whether they should, just because they could — because the climate is very different today from those examples back in the 1960s. So you’ve got to think about things like: if we don’t do this right, are we going to get an exodus of employees who are going to go work for a company that treats them better in this same context?
Shel Holtz
Well, now you have the economic environment and the hiring situation where a lot of companies are trying to avoid hiring. They’re also trying to avoid layoffs, but they’re trying to avoid hiring. It’s pretty flat out there right now — it’s definitely a buyer’s market. So I don’t know that I would leave an organization because they’re using my data unless I already had another job lined up, because they’re hard to find right now.
Neville Hobson
I agree. It’s a slightly hypothetical scenario, but I think it is worth recognizing that it could well come to that. From the research on those articles — and some other things I saw — there’s already a strong imbalance of value and control between the individuals who provide the data and the companies who are getting that data and making economic use of it. AI companies rely on real-world human data because of data scarcity. So there’s a challenge on both sides of the argument: they need the data, but there’s probably a finite amount that employees can provide, so they have to look elsewhere too.

And the thing is, a new economy is emerging where people monetize their identity and behavior voluntarily. In the case of the examples we heard about — the guy in Uganda filming himself walking down the street — and then the flip of that, as I mentioned, the example of young people in America, which the Guardian has a really good analysis of, who have skills that cannot easily be translated into something AI can do. The key element in that part of the discussion was the skill this young guy has. It’s not unique, but he’s got a skill that isn’t just “I know how to repair a diesel engine.” It’s that he can, at a glance, literally see what’s wrong and already formulate the six things he needs to do to fix it. And that is valuable. He’s earning $150,000 a year in salary doing this, and he’s 23 years old.
So there are other examples mentioned in that Guardian piece too that are interesting. On the one hand, you’ve got gig economy workers like DoorDash drivers doing what they’re doing. On the other hand, you’ve got people like this guy developing a career not related to AI at all — a skill that cannot easily be replicated by AI. So that’s part of the landscape. I’m not sure where all of that fits within this, Shel, to be honest, but it’s part of the picture.
Shel Holtz
Yeah. I think it was MIT that came out with a report not too long ago saying something like 93% of jobs are AI-safe — and there were a lot of people saying this really paints a different picture from what we’ve been anticipating. I don’t know how accurate it is. But in the meantime, there are AI companies working very hard to elevate these systems to the point where they can do some of the work that currently might be considered AI-safe. I think for many jobs, it’s probably just a temporary designation.

I raised the issue of employees inside the organization. Those gig workers are another issue for organizational communicators, because these workers — the ones very accustomed to having the app tell them to do a task, doing the task, and getting paid for it — aren’t covered by traditional internal communications. Organizations that rely on gig workers and contract labor, and increasingly on AI tools trained by them, have a stakeholder relationship they may not have a communication strategy for. I’d argue they don’t have a communication strategy for it.
I’ve often made the distinction between internal communications and employee communications. Employees are the people who come in and get paid by you directly, whether salaried or hourly. But you have other internal stakeholders, and we develop strategies for them — the contractors embedded in our organization. I work in construction; we have subcontractors; there are ways the organization communicates with them. There are all kinds of internal stakeholders, and these gig and contract workers are now among them. We should figure out a way to communicate with them, talk about our ethical use of their data, and engage with them in ways that are meaningful, useful, and produce positive results.
Neville Hobson
Yeah, makes sense. You had a couple of other points you were going to mention. What’s the next one?
Shel Holtz
Just one other, actually, and that’s about keeping the human in the loop. A lot of companies, in order to feel good and look good as they move into the AI world, are positioning human oversight as really important. But what the stories we’ve been talking about reveal is that humans are raw material — physically, biometrically, behaviorally. Workers aged 22 to 25 in the most AI-exposed occupations — things like paralegal work, for example — have experienced a 13% decline in employment since 2022, which is the year OpenAI released ChatGPT to the public. On the other hand, employment for less exposed or more experienced workers — think about your 23-year-old diesel mechanic — has been steady or in some cases even increasing.

So organizational communicators talking about AI as just augmenting human workers need to be careful, because I think increasingly we’re going to hear stories about how that isn’t actually true, particularly for this younger demographic. We have to be honest about that asymmetry. I mean, whose labor is augmenting whom?
Neville Hobson
Yeah, I get that. It does make sense. It’s an issue that embraces communications, ethics, and trust more than anything. But at the heart of it, there is the technology aspect. I’m thinking about other things that you and I have discussed in previous episodes that are kind of adjacent to this issue — where if you analyze what the real issues are, they tend to be a mixture of communication, ethics, and trust. So that’s a good starting point for communicators who might be wondering how the hell to address this: communication, ethics, and trust. Work out how you can develop the procedures that embrace and recognize the importance of those things and execute them inside the organization.

I agree with the premise in all the articles we’ve linked in the show notes that a new data labor economy is emerging where people monetize their identity and behavior and, in the case of the Global South in particular, don’t think twice about it. Employers have a duty of care to recognize what they need to do to bring that group into their structure — one where communication, ethics, and trust play the bigger role.
Shel Holtz
Yeah, absolutely. And I think there are a number of places to look. You don’t want to be the next organization to have it disclosed that you have exploited labor producing the data that you need, because those scandals were pretty difficult for the fashion companies that went through them. Also, one of the things that generative AI models are really good for is scenario planning.
Neville Hobson
Ha!
Shel Holtz
And for your organization, in your industry, with your markets, it wouldn’t hurt to do some scenario planning about who the stakeholders are that you should be communicating with, and what the challenges are going to be both internally and externally, and start developing some communication strategies. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #508: Inside AI’s Human Raw Material Supply Chain appeared first on FIR Podcast Network.
8 April 2026, 8:16 pm - 25 minutes 37 seconds
FIR #507: Should Nobody Really Ever Write with AI?
Take a stroll through LinkedIn. You’ll find no shortage of posts stridently deriding the notion that anyone should ever use AI to write for them. While that case isn’t hard to make for professional writers, there are countless professionals in other fields who struggle with writing, were never trained as writers, yet now have to write everything from emails to reports as part of their jobs. Should they really sweat for hours over wording, time they could be devoting to their core areas of subject expertise, when AI can produce content that is cogent, clear, and direct? In this short mid-week episode, Neville and Shel look at the trends in using AI for writing, despite the plethora of opinions from the pundits.
Links from this episode:
- Meet the Tech Reporters Using AI to Help Write and Edit Their Stories
- Meet the Journalist Using AI to Write Stories
- How Journalists Feel About AI
- Muck Rack’s 2026 State of Journalism Report Finds 82% of Journalists Use AI
- AI Doesn’t Reduce Work—It Intensifies It
- Is Writing with AI at Work Undermining Your Credibility?
- How We’re Using AI
- Review of ‘Using Artificial Intelligence in Academic Writing’
- Best Practices for the Effective Use of AI in Business Writing
- AI Tools for Business Writing
- 5 Ways to Instantly Level Up Your Communication Using AI Tools
- Charlene Li and Katia Walsh demonstrate the right way to build a book with AI help – Josh Bernoff
- The Truth About Writing a Book on AI
The next monthly, long-form episode of FIR will drop on Monday, April 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville: Hi everyone and welcome to For Immediate Release episode 507. I’m Neville Hobson.
Shel: And I’m Shel Holtz. And if you spend any time at all on LinkedIn, you’ll see the degree to which anti-AI sentiment is ramping up. A lot of it’s aimed at using AI for writing and how absolutely wrong that is. Yet just last week, on the same day, Wired Magazine and The Wall Street Journal both published articles on reporters using AI to help write and edit their stories. So today, let’s talk about using AI to write.
Specifically, is it okay for employees to use AI to help them write for work? And my answer is not only is it okay for many employees, it might be one of the most genuinely useful things AI can do. Here’s the framing I would push back on. When we talk about AI writing assistants, we tend to picture a journalist or a marketer or a communications professional, someone whose craft is writing and whose paycheck depends on it, handing their keyboard over to a robot. And for those of us who are professional writers, that raises legitimate professional and ethical questions. But that’s not the population we’re talking about when we’re discussing AI adoption in most organizations. Think about who actually has to write at work. Engineers document processes. Product managers write status updates. Safety officers draft incident reports.
Shel: Finance analysts compose budget justifications. Scientists write up findings for non-technical stakeholders. These are not people who chose their careers because they love writing. Writing is a tax they pay to do the work they actually care about. And many of them pay that tax really, really badly. The idea that a structural engineer should produce elegant prose unaided is the same logic as saying a communications director should coordinate the concrete mix for a construction project. We don’t expect that. So why do we expect every knowledge worker to be a competent writer? Muck Rack’s 2026 State of Journalism report found that 82% of journalists, professional writers, people whose job this is, are now using at least one AI tool. That’s up from 77% the year before.
If the people whose professional identity is tied to their writing are using AI tools, it shouldn’t surprise us that everyone else is too, or that they should. Now the research does tell us something important about how to use these tools. A University of Florida study of 1,100 professionals found that AI tools can make workplace writing more professional.
But regular heavy use can undermine trust between managers and employees, particularly for relationship-oriented messages like praise, motivation, or personal feedback. The study found that employees are more skeptical when they perceive a supervisor is leaning heavily on AI for those kinds of communications. Now that’s a meaningful finding and it’s exactly the kind of nuance internal communicators need to help their organizations understand.
It’s not an argument against AI writing assistance. It’s an argument for knowing when it’s appropriate. Purdue Business School Professor Casey Roberson, who literally wrote one of the first business writing textbooks to address AI, puts it this way: AI is a great tool for brainstorming when you’re stuck, for outlining and structuring documents, for revising drafts to improve clarity and tone, but it should not be used for confidential information, and using it to write first drafts can stifle creativity and critical thinking. The Wharton communication program makes a similar distinction. Their guidance frames AI tools as powerful and skilled hands for the right task, valuable for brainstorming, editing, improving conciseness, and anticipating challenging questions, but a liability when used as a substitute for your own thinking, your own knowledge of your audience, and your own credibility.
So what’s the practical guidance for internal communicators trying to help their colleagues use AI responsibly in their writing? First, make the distinction between communication types explicit. Routine informational writing — process documentation, project updates, meeting recaps, technical reports — that’s where AI assistance is most defensible and most valuable. That’s exactly where the trust risk is lowest and the productivity gain is highest. Conversely, messages that carry relationship weight, like a manager recognizing someone’s contribution or a leader addressing a team through a difficult moment, that deserves a human voice. Help your employees understand that difference.
Second, reframe the conversation around who’s actually writing. A systematic review published in the International Journal of Business Communication found that AI can significantly help with idea generation, structure, literature synthesis, editing, and refinement. Essentially all the phases of writing that non-writers find most daunting. AI isn’t replacing a writer’s voice. In many cases, it’s giving non-writers a voice they otherwise wouldn’t even have.
Third, be honest about the nuance inside the journalism conversation. The Columbia Journalism Review published a fascinating piece where journalists across major newsrooms shared their practices. Nicholas Thompson, the CEO of The Atlantic, described using AI the way he’d use a fast, well-read research assistant who’s also a terrible writer — helpful for checking consistency, flagging chronological issues, examining logical claims, but not for the writing itself. Amelia Daly, a senior reporter at VentureBeat, put it this way: AI helps her productivity, but she refuses to use it to write because writing is how she maintains trust with her readers. That distinction — AI as research and process support versus AI as voice — maps directly to the guidance you should be giving your colleagues.
I read about one other reporter in one of these articles who said he actually does use it to write, because he didn’t become a journalist in order to write. He didn’t like writing; he liked reporting. So he does all the other work and then lets the AI produce the writing.
And here’s the thing I’d leave your employees with, because I think it gets lost in this debate. Wharton’s communication faculty make the argument that writing is thinking, that when you rely on AI for drafting, you don’t know your content as deeply as you should, and you lose the nimbleness to adapt when the moment requires it. And that’s true. But for an engineer who agonizes over every sentence of a procedure document, who spends four times as long on the writing as on the analysis, AI doesn’t replace their thinking. It clears away the friction so their thinking can actually reach the page. For internal communicators, this is a genuinely useful message to take to your AI adoption rollouts. AI writing assistance isn’t about cutting corners. It’s about removing a barrier that prevents good ideas from being communicated clearly while still insisting on the judgment, authenticity, and relational awareness that only human beings can bring.
Neville: Yeah, it’s a big topic, I have to admit. And I don’t think of it so much from the employee communication point of view, though that’s a major part of it, probably the major usage. I’m thinking of anyone who writes, in fact: whether you’re in public relations, whether you’re a journalist, et cetera. People who need to write as part of their roles are what’s mostly in my mind.
I’m also drawn by a very good analysis by Josh Bernoff. You and I interviewed Josh, what, two, three months ago. He wrote an assessment of Charlene Li’s new book, Winning with AI, a book she used AI extensively to create. Worth pointing out, though, that the AI didn’t write any of the content.
She and her co-author, Katia Walsh, talked about the way in which they divvied up the work. And the AIs, plural, did research amongst other tasks, too. But Josh did a lengthy post setting out all the areas where they found AI useful and not so useful. And it struck me, reading Josh’s post and then also Charlene’s postscripts, as it were, in the book itself, which I am reading, by the way, that this would apply to anyone writing, not just would-be book authors. Whether you’re writing fiction or nonfiction doesn’t make any difference. Whether you’re writing a report, an article, a blog post, or a newspaper piece doesn’t matter either. These principles apply to all of it. And it’s not so much about people whose role has little to do with writing and who aren’t very good at it. It’s more focused on those whose job is writing, or for whom writing is part of the job in some form.
So there are a number of things that I took from it. But to go to the main point about Charlene’s book Winning with AI, AI wasn’t doing the writing, as I mentioned. It was supporting the thinking. It handled things like the research, summaries, the structure, which speeds everything up. But the ideas, the voice, and the judgment — that all stayed firmly human. And to quote from Josh’s post, he says that the two authors describe how they used Claude to structure the content, ChatGPT to create a custom GPT with four years of their work, which it used in a sense as a training aid, Perplexity to do the research, and Gemini to search a vast collection of interview transcripts. It’s much more detailed than that. It’s well set out in the book. And I thought, that’s interesting. That’s a very intelligent way to go about using different AI chatbots for different purposes on your projects.
So three things I took from this, and this applies to all the points you made, Shel, and it will repeat some of those, but it just shows you that this is how you need to think of this. First, AI works best as a thinking partner, not a writer. Like I said, the two authors used AI as a note taker, researcher, brainstorming partner — essentially a third collaborator. It helped them structure the ideas, surface insights, and challenge assumptions, and they did not rely on it to produce the final prose.
The second point: it saved time on the drudge work, as Josh called it, but it requires human judgment. It was highly effective for research and summarization, structuring outlines, and surfacing missed ideas from earlier drafts. That resonated with me, because in my own experience, when I’m doing research for blog posts, articles, reports, or just on something I’m interested in, the AI usually surfaces something I wouldn’t have thought of, or that I might have come up with only later, after I’d written the piece, forcing a rewrite. Structuring the outlines is another thing it’s good at. And this is definitely worth noting — we’ve discussed this before: everything still required the humans to fact-check and validate everything the AI produced, because, in Charlene’s words, AI has no built-in truth function. And I think that’s a worthwhile way of looking at it.
And the final point that I took from this: you can’t outsource originality, voice, or quality — i.e., the writing. They tried it. AI failed at core creative tasks; Josh points out three of them in his article. Generating genuinely new ideas: AI is not very good at this, because it’s trained on existing writing that humans have produced over the years and even the centuries, and it can’t create something new from that other than by guesswork. That’s about the same as what we do, I think, except we’re likely to take the more informed approach. It can’t write in a compelling human voice. And it cannot edit to a high standard. They all — Charlene and Katia, and Josh, for that matter — described AI writing as bland, repetitive, and jargon-heavy. In fact, Charlene talks about how they could not stop jargon creep in anything the AI produced. In one draft they asked the AI to review, it changed every use of the word “use” to “utilize.” It was full of that kind of jargon.
Shel: One of my biggest pet peeves, by the way, is “utilize.”
Neville: Right, totally. And the final qualities (nuance, personality, and insight) remained entirely human because the humans wrote it. So I take all of that, add it to what you’ve been talking about, and conclude: it doesn’t matter what your role is. These are the principles you need to pay attention to as you approach your use of AI as an aid. And we’re not suddenly coming out with a revelation here; I see people saying this all over the place. AI is an aid that helps you create extremely good content, whether you’re a writer or doing something else where writing contributes to the end product. And it doesn’t matter whether you’re any good at this or not — that reporter you talked about likes to report but not to write. I’m wondering how the hell he gets away with that. Reporters have to write, don’t they?
Shel: Well, I’m sure he just poured a lot of effort and energy into it when he would have rather been out in the field gathering information.
Neville: Got it, got it. So yeah, this is not too difficult a thing to grasp, in my view, yet I’m constantly bemused by what I see — and maybe LinkedIn’s not the best place to look for this stuff — but I see it all the time: people posting that you should never use AI, that “here’s a list of words I watch for, and if I see them in LinkedIn posts, I’m going to unfollow that person and call them out.” And there’s the example you mentioned to me before we started recording: the person who wrote a LinkedIn post saying you should never, ever use AI, with the whole list of things you should never use it for. That’s insane. That’s insane.
Shel: Yeah, she said nobody wants to read emails written by AI. Nobody wants to read reports written by AI. And she just went down every form of writing you can think of. And I was thinking, really? Nobody? Nobody wants to read this? I’ve got data that says people prefer AI-assisted emails when the senders are terrible writers who have a hard time expressing the main point they’re trying to get to. The AI has actually made these people’s emails better, and people would rather read those than their own unaided writing.
Neville: So did you use AI to research this?
Shel: To research, to find that data? Yeah, of course I did. It’s easier than using Google, but I also verified the source of that research.
Neville: Right, okay. No, no, no, hang on a second. The point, though, is that it’s illustrative of something. I’m astonished when I hear from people who have never thought of doing this before (“that’s a good idea,” they say), which is: anything you’re working on, literally anything, whether it’s on your list of things to research or something that occurs to you during your work (I wonder who said X, or I wonder how you do this), ask your AI to go research it. It then becomes a natural part of your workflow. And that’s one of the things it’s very good at.
But we’ve got the example we talked about last October with Deloitte in Australia and Canada: you’ve got to check everything it creates, particularly if it’s a topic you really don’t know about yet. But even if you do know, you’ve still got to check it. That means when you tell it to go out and look for stuff — and you’ve already given it your preferences, like having anything it finds come back with a link to the source — you’ve then got to go and check all those things too. So there are no easy shortcuts here. But it still saves you a huge amount of time, because you’re then spending time, in a sense, understanding the output that you’re going to use to create your final version.
I often see people criticizing this: “If you use AI, your brain gets kind of frozen and doesn’t learn stuff.” That’s not, in my experience, the case, because you’re doing the learning differently, is how I would see it. You’re asking your assistant to go and find this and this and this; it comes back with this and this and this; and you then go and research it yourself to confirm that it is this, this, and this, and not that.
So it’s, I think, an interesting aspect to the broader debate on those who are anti and those who aren’t, where most of us are sort of somewhere in the middle there. But you need to totally understand the pros and the cons of this and indeed the limitations of AI, as well as the human limitations, and work out what works best for you.
The reality, though — I guess the bottom line in terms of how I see this — is that you cannot take the human being out of the picture. This tool is purely that: something to assist you that gives you what you need to create the final product, if you like. And that doesn’t matter your job role. That’s what it’s about.
Shel: Well, I would argue that if you are in a job where writing was not taught in school beyond what you learned in your basic English class or whatever language you were raised with, and you need to produce writing, and this tool is now there to help you do that — if you’re an engineer, for example, engineers are brilliant. Many of them are
Neville: Not good writers.
Shel: Terrible writers. And they have to produce something that’s going to be useful to the people they’re distributing it to. And if AI is going to write a better draft than they could do on their own and produce better output that people can make better use of, then they should let AI write that stuff. In an engineer’s report, there is no need for the lived human experience we keep hearing about. Empathy does not have to come into these reports. They’re technical in nature. Let the AI write it for them. Absolutely edit it; review all the facts to make sure it’s right. Presumably it’s writing based on the information you gave it, the things you’ve learned that need to go into the report. So there’s less opportunity for hallucination when you’re telling it: only use this data that I have put into this ChatGPT project for the output. But you still have to review it very, very carefully. That’ll still save you time and grief if you’re not a writer and you need to produce this stuff. I feel really strongly: we have a great tool here that’s going to make the outputs better and make business better.
Neville: Yeah, I don’t disagree with you at all, but I’m not as optimistic as you are that this is going to work seamlessly if people do all the things you just said, because typically they’re not going to do that. I can see scenarios exactly as you’ve outlined: someone in a valuable job who does it well but lacks the skills to write. Then I would say that’s fine, get the AI to write. But you need to be educated on how to get the AI to do what you want. You then need to, without fail, verify and check every single thing the AI has created. And I’m not sure that many of the folks you might think of are truly geared up to do that kind of thing. So you might need colleagues to assist you. I mean, I guess the point is that…
Shel: Well, it’s…
Neville: This is going to be a debating point forever, I would imagine, until people stop talking about it. But you’re going to encounter — I can see it now — “But yeah, you’ve got to disclose the fact that you used AI.” No, you don’t. You get down to that rabbit hole argument about, do you do that when you use Grammarly? Do you do that with your spell checker? No, you don’t. So why would you say you’d have to do this? Because it’s such an emotive topic where logic is missing in many of the arguments. It’s all emotional.
That’s the minefield you have to walk. For much of the work that many people might do, they won’t use the AI to write it. They’ll use AI to assist them in creating it. And that could mean they do an outline, or it suggests the construct of a draft, or you draft it and it reviews it and makes suggestions on how to improve it.
I do that quite a bit with my AI assistants. And I don’t have a rigid format; much depends on the topic and how I feel about it, basically. Often I’ll raise a topic I’ve been thinking about and ask: is this worth writing about? If so, give me some suggestions on the angle I should approach it from. And that always sparks much more discussion and thought on what the content might be, including, sometimes, “this is not worth writing about for me.”
So it’s a big topic. You had loads of links in your prep for this, to articles all over the place, and I think it’s good to do that. But this is emotive, and it’s not going to be simple to avoid criticism.
Shel: Yeah, and I think it’s a governance issue inside organizations. I hear about the lack of AI training going on in many organizations, or how superficial it is. I think for those people who have to write in their jobs, you want to do targeted training on how to use this to write: from the idea generation to the brainstorming to the back-and-forth discussions you might have about approaches to take, to using it to structure the document, right down to writing that first draft, if it can simply do better with that than you can on your own and you’re not a professional writer. All of that needs to be trained, it needs to be articulated in the organization’s governance policies around AI, and there need to be resources. And yes, we need subject matter experts people can call. This is on us right now as internal communicators who deal with writing in general: to lead this conversation in the organization and make sure these kinds of governance activities are implemented.
Neville: Work to do.
Shel: And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #507: Should Nobody Really Ever Write with AI? appeared first on FIR Podcast Network.
30 March 2026, 7:01 am - 1 hour 42 minutes
FIR #506: Battle of the Bots!
In this monthly long-form episode for March, Neville and Shel tackle a trio of interconnected themes reshaping the communications profession in the age of AI. The conversation opens with Anthropic’s top lawyer declaring that AI will destroy the billable hour. That thread leads naturally into JP Morgan’s controversial use of digital monitoring to verify junior bankers’ working hours, where Shel and Neville question whether surveillance technology can substitute for genuine managerial trust and engagement.
The episode also examines Gartner’s widely circulated prediction that PR budgets will double by 2027 as AI search engines favor earned media. Shel delivers a detailed report on the escalating misinformation crisis, citing a 900% surge in global deepfake incidents and new research from the C2PA on content provenance standards. The episode closes with a discussion of Cloudflare CEO Matthew Prince’s prediction that bot traffic will exceed human traffic by 2027, and a sobering peer-reviewed study on how social bots hijack organizational messaging — research reported by Bob Pickard, who has experienced bot-driven attacks firsthand.
Dan York also contributes a tech report on the state of the Fediverse and Mastodon, as well as on AI developments for WordPress.
Links from this episode:
- AI will destroy the billable hour, says Anthropic’s top lawyer
- Gartner predicts PR budgets will increase 2x by 2027
- 5 takes on Gartner’s new optimism for PR and earned media in the age of AI
- PR is back, baby — Gartner is predicting… [LinkedIn post by Lindsay Bennett]
- The Gartner claim that public relations and earned media budgets will double by 2027
- JPMorgan starts programme to monitor junior banker hours [Financial Times]
- FT Exclusive: The US bank has started to… [Financial Times LinkedIn post]
- Senator Bernie Sanders Discusses the Impact of AI on Privacy and Democracy with Claude
- Let’s Talk Keyboard Jamming and Why It Might Suggest Bigger Problems at Work
- Telling Fact From Fiction With Online Misinformation
- Online bot traffic will exceed human traffic by 2027, Cloudflare CEO says
- Public Relations & Organizational Communication [LinkedIn post by Bob Pickard]
- Social Bots as Agenda-Builders: Evaluating the Impact of Algorithmic Amplification on Organizational Messaging
Links from Dan York’s Tech Report:
- Mastodon post by Eugen Rochko (@Gargron) — mastodon.social
- Mastodon — Decentralized social media
- How to Generate a WordPress Theme with Telex
- Telex — AI-Assisted Authoring Environment for WordPress
- WordPress.com now lets AI agents write and publish posts, and more
- Your AI agent can now create, edit, and manage content on WordPress.com
- Enable MCP tool access for AI agents
- WordPress.com MCP prompt examples
The next monthly, long-form episode of FIR will drop on Monday, April 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville: Hi everyone, and welcome to the For Immediate Release podcast, long-form episode for March 2026. I’m Neville Hobson.
Shel: And I’m Shel Holtz.
Neville: As ever, we have six great stories to discuss and share with you, and we hope you’ll gain insight and enjoyment from our discussion. Perhaps you’ll want to share a comment with us once you’ve had a listen. We’d like that.
Our topics this month range from AI and the end of the billable hour, to Gartner’s predictions about PR budgets, to monitoring work in the age of AI, to newsrooms battling AI-generated misinformation, and more, including Dan York’s tech report. Before we get into our discussion, let’s begin with a recap of the episodes we’ve published over the past month and the comments they attracted.
In episode 502 for February, published on the 23rd of that month, we explored how rapidly accelerating technology is reshaping the communication profession from autonomous agents with attitudes to the evolving ROI of podcasting. We led with a chilling milestone moment, an autonomous AI coding agent that publicly shamed a human developer after he rejected its code contribution.
A leader can build goodwill for years and lose it in seconds. In FIR 503 on the 2nd of March, we reported on the president of the IOC, that’s the International Olympic Committee, who had no answers to reporters’ questions and suggested on camera that someone on her communications team should be fired. We’ve got comments on this, haven’t we, Shel?
Shel: Boy, do we have comments on this one. This attracted a good number of them, starting with Kevin Anselmo, who used to have a podcast on the FIR Podcast Network. It was on higher education communication. He says, having previously worked in communications for two different international sport federations, I found this story quite amusing. One of my first PR roles was working at the 2000 Sydney Olympic Games. I was working on the sport federation side, not the IOC.
Neville: Yep, you did.
Shel: But I know that working at such events is exhilarating and exhausting as you have to deal with a myriad of different issues. I can imagine that toward the end of the Olympics, the PR team fell short of delivering a robust brief. But nevertheless, in answer to your question, even if the PR people were abysmal, the fault is on Coventry for the way she handled the situation. A simple, we will have to look into this and get back to you response would have worked.
Instead, by handling it the way she did, she drew unnecessary attention to the questions she and the team weren’t prepared to answer, as you and Neville shared. I guess in the process of this mishap, I learned that Germany was in the running for the 2036 Olympics, which I wasn’t aware of. We also heard from Monique Zitnick, who said, really enjoyed your discussion on this. Certainly a puzzling situation that has surely ended in broken trust on both sides.
Shel: Mike Klein said, another ignominious IOC leader in the mold of Brundage and Samaranch. Neville, you replied. You said that’s an interesting comparison. Mike, Avery Brundage and Juan Antonio Samaranch both left very complicated legacies, particularly around politics and governance in the Olympic movement. What struck me about this episode wasn’t so much ideology or policy. It was leadership under pressure.
Coventry had actually received a fair amount of praise for how she handled some difficult moments during the games, which makes the press conference moment even more interesting from a communication perspective. It’s a reminder that reputation capital can be fragile. A single public moment can reshape the narrative very quickly. Mike replied, yes, leadership under pressure, but also the kind of people the IOC has chosen for leadership over the years.
Coventry has a complicated history over her involvement with her native Zimbabwe’s recent regimes as well. Sylvia Camby said, Neville, watching Coventry’s press conference took me back to the time I spent doing comms for an international association. It reminded me of how inward-looking organizations like the IOC can be. So totally focused on their internal member politics with leaders too lazy or too overconfident to bother to educate themselves about current affairs.
Also, they often have a distorted idea of what the press is interested in. They often think they can dictate their agenda. As you and Shel mentioned on the podcast, the questions were entirely predictable. You replied, Neville, that’s a really insightful observation, Sylvia. Organizations like the IOC can become quite inward facing, particularly when so much of their energy is spent navigating internal governance and member politics. That can create a kind of blind spot about how issues look from the outside.
Sylvia said, and I was thinking, I’m proud of Germany for being so sensitive about the significance of that date and for opposing the 2036 bid. They are much better at reading the spirit of the time than Coventry. As an aside, my father’s cousin competed in the 1936 Olympics in Berlin as a gymnast. She passed away last year at the age of 104.
She often spoke to me of the atmosphere surrounding the Olympics at the time, a heaviness and a sense of unspeakable doom. So yes, 2036 is a date that Berlin should definitely avoid. And you replied to that, Neville. People can go find that one in the comments.
Neville: That’s a good one. There are some great points of view, perspectives there. So thanks to everyone who commented. Are companies using AI as a convenient explanation for layoffs? That was a question we asked in FIR 504 on the 10th of March when we discussed AI washing, when organizations blame workforce cuts on AI, even when the reality is more complicated. It’s a difficult ethical space for communicators. And we have comment on this too, don’t we?
Shel: Three short ones. First from Monique, who commented that she was looking forward to listening to the episode because she’s been having a lot of conversations on this over the last month. Jacqueline Trzezinski said, I’m glad you’re delving into this. The same thought came to my mind when I saw the Block layoff announcement, especially as it was held up by some on LinkedIn as an example of how valuable transparency is during layoffs.
And Jesper Anderson said, I find it fascinating how quickly the world turns upside down. 18 to 24 months ago, companies were accused of letting people go because of AI and not admitting that this was the true reason.
Neville: Good perspective, Jesper, that one. Is social media still social? In FIR 505 on the 17th of March, we explored Hootsuite’s 2026 Social Media Trends Report, addressing social search, AI versus authenticity and more. Plus a darker question: what if AI starts to dominate the conversation? And we have comment, don’t we?
Shel: Yes, from Zara Ramoutoho Akbar, and I sure hope I pronounced that right, apologies if I didn’t. She said, yes, it feels like socials are shifting from a channel to a trust system. And in that world, I would say that the employee and peer voices matter more than brand output. Are you seeing organizations lean into that yet or still treating social as a broadcast channel? And since Zara asked the question, Neville, what do you think? Are you seeing this change?
Neville: No, I’m not, to be honest, but maybe it’s taking its time. There is something afoot without any doubt. And I think it’s something that we should expect. And that darker question is a valid one to put forward, let’s say. And we’ll keep our eyes and ears open, I think.
Shel: Yeah, I haven’t seen it much either, but I do think that there are organizations that are talking about it. So as you say, we may see this start to change in the months ahead. We have one more comment from Dolores Holtz. No relation. I for one certainly rely on people whom I trust more than any name or brand.
Neville: Yeah, I agree. Fair enough.
Shel: I think that covers our previous episodes up to this one.
Neville: Yeah, good, good comments all over from all those episodes. And thanks everyone for listening and adding your comments to that conversation. It’s really terrific.
Shel: Yeah, keep those coming, and ask us questions, because that one from Zara was great. Also up on the FIR Podcast Network right now is the latest Circle of Fellows. It was a good conversation on the communication issues and challenges in this age of grievance, isolation, and retreat into tribes.
Shel: Priya Bates, Alice Brink, Jane Mitchell, and Jennifer Wah were the panelists on this Circle of Fellows. As I say, it was really a terrific conversation. The next one is coming up on Thursday, March 26th, at noon Eastern time. It’s on crisis communications, and especially this idea of the polycrisis, which we heard about from our friend Philippe Borremans.
The panelists for that Circle of Fellows will be Ned Lundquist, Robin McCaslin, George McGrath, and Carolyn Sapriel. Should be a good crisis-focused conversation. And of course, if you can’t make it at noon on Thursday, it will be available as a Circle of Fellows podcast and the video will be up on the FIR Podcast Network.
Neville: While we’re talking about IABC, let me briefly mention that Sylvia Camby and I hosted a webinar for IABC as part of IABC Ethics Month in February about ethics and AI. We’re actually going to…
Shel: I attended and it was terrific. I was there. It was a great webinar.
Neville: Well, thanks, Shel. That’s great. And we’ve actually had a nice review from someone, which was very pleasing. We’re going to repeat this, specifically for IABC members in the Asia-Pacific region. So if you’re in Australia, India, China, Japan, and maybe right out into the Pacific area, this one’s for you. It’s members only.
The event is AI Ethics and the Responsibility of Communicators. It explores the challenges and responsibilities communicators face when introducing AI, including transparency and trust, stakeholder accountability, and human oversight. It’s on Wednesday, the 15th of April at 6 PM Sydney time. That’s AEST, as I discovered, Australian Eastern Standard Time. You’re no longer on daylight savings in Australia, whereas we are by the time we do this. So 6 PM in Sydney, or 8 AM UTC. That’s Coordinated Universal Time or GMT if you’re used to that one. For me, I’m in the UK, so it translates into 9 AM UK time. But 6 PM in Sydney and that sort of time zone area is the important bit. So we look forward to seeing you there.
Shel: 1 AM Pacific time, so I won’t be participating in this one.
Neville: If you’re up, you could join. OK. So IABC will be letting members know about where to go and register, et cetera, I’m sure in the coming days. So just mark your diary in the meantime. Wednesday, 15th of April, 6 PM Sydney time. And let’s get on with things. But first, there’s this.
Shel: I won’t be.
Neville: Right, let’s start with a statement that will make a lot of people in professional services sit up a bit. Anthropic’s top lawyer Jeff Blick says AI is going to destroy the billable hour. That’s of interest to you if you’re a consultant in particular. Blick argues that AI is removing the need for what he calls tedious but lucrative work, the kind of work that firms have historically billed by the hour. And that matters because the billable hour isn’t just a pricing model.
It’s the foundation of how entire professions have operated for decades. But here’s the tension he highlights. Clients want problems solved quickly and efficiently, while the billable hour rewards the opposite: more time, more revenue. AI sharpens that contradiction because now tasks that once took days or weeks can be done in minutes. And that raises a very simple, very uncomfortable question for clients: if the work takes less time, why am I still paying for all those hours?
It’s something I’ve been thinking about quite a lot myself recently. I wrote about this in Strategic Magazine a few months ago, where I argued that AI isn’t killing consultants, but it’s killing the logic of the billable hour. Because the model has always had flaws: it rewards activity over impact. It prices effort rather than outcomes. And as soon as technology compresses effort, the model starts to look outdated. What’s changing now is not just efficiency, it’s expectations.
Clients aren’t necessarily looking to pay less. They’re looking for clarity, predictability, and above all, value that reflects results, not time spent. So we’re starting to see a shift from billing hours to pricing outcomes, from selling labor to selling judgment. And that sounds straightforward, but it opens up some deeper questions. If AI removes the entry-level repetitive work, how do people develop the judgment that clients are now paying for?
If you move away from time-based billing, how do you actually define and defend value? And perhaps most importantly, are firms really ready to let go of a model that has defined their economics for generations? I think what this really points to is a shift in what clients are buying: not time, but judgment; not effort, but outcomes. And the firms that recognize that early will have a very different advantage from those that don’t.
Shel: Well, if AI drives the end of the billable hour, all I will be able to say is it’s about time, and thank God something did it. I have never been a fan of billable hours in communication consulting. I can see it in other lines of work. Plumbers bill by the hour, electricians, people who work with their hands tend to bill by the hour, although interestingly, auto mechanics often do not. For them, the labor required to do a particular job is worth a set amount of money, and then there are the parts you have to pay for.
But the question is, if the model of billable hours goes away in the public relations and communication industry, what do we replace it with? And I know we have talked about this in the past, but it has been a while.
But I remember when I worked — we have both operated in the billable hour environment. And when I was at Mercer, Mark Schuman was also at Mercer. I think he was in their Houston office and he came up and met with the comms consultants in Los Angeles. And he was talking about the value add. And I objected to this. I said, I have a billable hour based on my value and what it takes to cover overhead and make a profit. I think my billable hour when I left Alexander and Alexander was something like $385 an hour. And that should cover everything. Why are we adding something and just calling it value add?
And what Mark said was: if I have an idea in the shower, and it took me 30 seconds for that idea to spark, and yet it informs the entire engagement with the client, solves a problem, and is based on my decades of experience and everything I have learned, is that really worth only the few dollars that those 30 seconds would be valued at under the billable rate? That’s ridiculous. The more I thought about it, the more I thought he’s right. That is ridiculous. So why aren’t we billing based on the value of the project?
Now, you can say here’s how many hours it’s going to take to complete that project and use that as a basis to come up with a price for the client. Or you can look at other things. I think I mentioned on a show several years ago that Craig Jolly and I proposed a communications program for Coca-Cola, for a department that was eliminated before we could come to a final agreement, even though they had actually agreed to the proposal.
And what we were going to be paid for our effort was absolutely nothing. We were not going to bill them for hours. We were not going to bill them for the value of the project, but they were going to track the outcomes of the work that we did. And they were going to pay us 5% of the savings that accrued as a result of what we did and 5% of the profits that accrued based on what we did. And we had a formula for that. We would have made a fortune over, I think, the three years that we were going to get compensated after this project was complete.
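To make the arithmetic of that outcome-based model concrete, here is a minimal sketch using entirely invented figures, since the actual formula and numbers were never disclosed:

```python
# Minimal sketch of the outcome-based compensation model described above:
# 5% of tracked savings plus 5% of tracked profits, paid over a
# three-year measurement window. All figures below are invented
# purely for illustration.

SAVINGS_SHARE = 0.05
PROFIT_SHARE = 0.05

def outcome_fee(tracked_savings: float, tracked_profits: float) -> float:
    """Fee for one measurement period under the 5%/5% model."""
    return SAVINGS_SHARE * tracked_savings + PROFIT_SHARE * tracked_profits

# Three hypothetical years of tracked outcomes attributed to the project.
years = [
    (2_000_000, 5_000_000),
    (1_500_000, 7_000_000),
    (1_000_000, 9_000_000),
]

total = sum(outcome_fee(savings, profits) for savings, profits in years)
print(f"Total fee over three years: ${total:,.0f}")  # $1,275,000
```

Even with modest tracked outcomes, the fee dwarfs what the same work would have earned at any plausible hourly rate, which is the point of the anecdote.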
There are other models out there that people can consider, but you’re right. I’m wondering when the clients are going to start saying, this is what I paid last time. Haven’t you started using AI? Why isn’t the drudge work that is part of this project taking less time and costing me less? I think we’re going to hear that from clients. So you better start thinking about the new models.
Neville: Yeah, it’s a sea change, quite a significant change in structure, to move away from the billable hour. And one reason I believe nothing’s happened is that there is no groundswell of desire for change from the people in organizations who would likely suffer most if it did change.
And there are lots. I’m not picking out anyone in particular, but there are lots of people who just don’t like change. We’ve been doing this for years; it works; our whole business is based on this. It’s probably going to take a major client of a major consulting firm to say, hang on a second, we have a question for you about how you’re charging us. I’ve seen lots of chat about this, Shel, and I’m sure you have. And yet nothing’s happened.
So I wrote a lengthy analysis on my own blog not long ago, and it hardly got any attention at all. The story I wrote in Strategic was quite heavily researched, but I’ve not seen any real traction on that either, other than some folks saying, hey, nice article you wrote in Strategic. I’d rather hear them say, I didn’t like it, here’s why, or I’ve got a better idea, or whatever. Get a conversation going about it.
One thing I think should stimulate a discussion is, and this could be something we’ve got to force on people: look at it from the point of view of the client, not the consultant. And by the way, all these other examples you gave, like plumbers and all that, are absolutely right. So this discussion is specifically about professional services and consulting, not auto mechanics and plumbers and stuff like that.
So think about this: clients aren’t buying less, they’re buying differently. That’s the thing. I’ve had conversations with people, and I have to admit I truly struggled to keep the conversation going with any energy on why we should make this kind of change. But clients aren’t buying less, they’re buying differently. One thing I wrote about in the Strategic piece was what clients now expect from the people who advise them, the consultants they work with. Today, they expect advisors who: one, use AI to scan signals and surface insights; two, bring sharper, data-informed recommendations; and three, help avoid ethical, legal, and reputational missteps. Three major things they expect, and AI has a role in all of them.
I think we need to move away from that, and we can take the initiative to change the conversation with clients to this, as opposed to, well, draft that report for the client and AI can do all the research and so forth. When clients ask, why am I paying for all this time? You could pitch it to them as: this is the value of the briefing we give to the AI. But I think that argument can be demolished over time. Clients are like you and me; they’re people, they’re not stupid. Many of them are looking at this themselves.
That said, there are many clients, particularly the more you get to the enterprise level and those kind of consulting firms at that level, who really don’t have much desire to rock the boat at all with all of this. It’s very entrenched, it’s ingrained. Everyone’s making money and it’s all wonderful and business gets done. And it’s going to need something to make a major shift here.
So I think we should take the initiative as communicators to do this. And it could be someone in a consulting firm — like you, I worked for Mercer and I remember back in the early ’90s, not discussions about changing the business model, but the value add. So maybe this is a Mercer thing at that time, perhaps. We need to have that conversation now. And we need someone at a senior level with an influential voice to raise this internally in their organization and run some internal webinars or seminars or get-togethers to talk about why we need to change the business model and why the billable hour has to end as the basis for business. But it’s a big task, I would say.
Shel: One of the truths about the public relations industry is that it takes pain for the industry to change. I mean, we’ve seen this. We’ve been doing this show for 21 years and we’ve seen it with a number of major technologies that have come along that the PR industry has been very, very, very slow to adopt. And what ultimately got them to adopt the web and social media was seeing work taken away from them by boutiques who were offering those services. And as soon as they saw money left on the table, they said, we’d better figure this out because this is something that we should be doing. They figured it out and now they’re using it regularly.
You’re absolutely right that we in the industry have experience and insights that allow us to do things like create the appropriate prompt to get the right result for a public relations issue or campaign or what have you. And it goes far beyond the prompt. It goes into creating documents that become foundational to a project within one of the LLMs. It even gets into agents now. What if we set up an agent on behalf of a client that is out there looking for competitive information on a regular basis? And it took, let’s say, 15 hours to create this agent so that it was producing the kind of daily or hourly reports that we’re looking for. And those become a big part of the project. It’s operating while we sleep. We can’t charge for that. Certainly it’s not going to be on an hourly basis.
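To make that concrete, here’s a minimal sketch of the kind of always-on competitive-monitoring agent Shel describes. The feed URL and keywords are hypothetical placeholders, and a real deployment would need storage, deduplication, and alerting on top of this:

```python
# Sketch of an agent that periodically scans news feeds for competitor
# mentions and prints a digest. The feed and keywords are placeholders;
# this is illustrative, not a production design.

import time
import feedparser  # pip install feedparser

FEEDS = ["https://example.com/industry-news.rss"]  # hypothetical feed
KEYWORDS = {"acme corp", "acme"}                   # hypothetical competitor

def scan_once() -> list[str]:
    """Return headlines that mention any watched keyword."""
    hits = []
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
            if any(k in text for k in KEYWORDS):
                hits.append(entry.get("title", "(untitled)"))
    return hits

if __name__ == "__main__":
    while True:                    # run as a daily digest loop
        for headline in scan_once():
            print("Competitor mention:", headline)
        time.sleep(24 * 60 * 60)   # sleep until the next day's scan
```

The point of the sketch is the billing problem it creates: once built, this runs while everyone sleeps, so there are no hours left to bill for the value it delivers.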
So a formula has to emerge for these types of things that allows agencies to be compensated in a way that keeps the lights on, provides the salaries to the consultants who work there, and earns a reasonable profit without having to bill hours because it just makes less and less sense. And as I say, I didn’t think it made sense back in the ’80s when I was working for Mercer, my first consulting gig.
You remember maintaining your time sheet in 10-minute increments? Oh my God. Who’s going to pay me for that? Who do I bill for the time that I spend maintaining a time sheet in 10-minute increments? I mean, come on.
Neville: Don’t remind me, please. I tried to get away with entering time in the timesheet for the time I had to spend on doing the timesheet. They didn’t let me get away with that. No.
Shel: They didn’t buy that. My brother’s an attorney, and when he was working for a law firm — he’s corporate side now — but he remembered if he took a pencil out of the supply cabinet, he had to bill that to a client. So I mean, the time that he was spending billing things to clients was time that he wasn’t spending on client work. There are countless reasons why the billable hour needs to die. I don’t mind the consultant having a billable hour rate as a base for calculating something, but it shouldn’t be the be-all and end-all of what the client is billed. There needs to be a formula where you say this is what the project is going to cost. And if the project moves out of the scope that you agreed to, then you go back to the client and say, we’re outside the scope. We’re going to have to charge more for that. Here’s what we’re going to charge. You okay with that before we start moving on this stuff that you’ve requested that is out of scope?
Neville: Yeah, no, we need to get some movement going on this topic, I think. And maybe that’s something — thinking about IABC, you know, some kind of talk on this topic needs to happen.
Shel: Yeah. Or, you know how Ann Handley sold the T-shirt that said Justice for the Em Dash? I bought one. We need T-shirts that say Kill the Billable Hour with the FIR logo on it. Would anybody buy that? Let us know. We’ll pursue it. I’ll find out where Ann had her shirts made.
Neville: Yeah, I like that idea. I like it. Excellent.
Shel: If you work in public relations, you’ve probably seen the prediction that’s making the rounds right now. It sounds too good to be true. Gartner, the analyst firm whose pronouncements tend to get circulated in agency pitch decks for years, has declared that by next year, 2027, the mass adoption of artificial intelligence and large language models as a replacement for traditional search will drive a doubling of PR and earned media budgets.
Now, what would drive this surge in PR spending, you ask? Well, AI answer engines overwhelmingly favor non-paid sources. More than 95% of links referenced in AI-generated answers come from earned, shared, and organic owned content, with 27% originating directly from earned media. So if AI is where people increasingly go for information — and by the way, the data on that is striking; ChatGPT saw traffic surge 608% year over year between the first half of 2024 and the first half of 2025, while traditional search giants Google and Bing both slipped — well, then earned media becomes the engine of discoverability. And that, the argument goes, means organizations will pour money into PR to stay visible.
Now, I want to be honest about the source here, because Stuart Bruce, someone whose thinking you and I have always admired and respected, Neville — Stuart has pointed out that this prediction originated in a blog post published by Gartner as part of a lead generation campaign promoting a webinar for chief communication officers, and that while it carries the authority of the Gartner brand, it lacks the evidence normally associated with their research publications.
Frank Strong over at the Sword and the Script notes similarly that the prediction feels rushed. 2027 is barely more than eight months away and the path from “AI favors earned media” to “budgets actually double” is pretty far from certain. But I’m cautiously optimistic because the underlying logic is sound.
If AI systems favor credible third-party sources and PR is the function best equipped to generate that kind of coverage, well then yeah, our work becomes more strategically important. But a Gartner webinar promo is not a Gartner research report, and we should resist the temptation to tout this prediction as if it were settled fact.
Here’s what I actually want to talk about though. Let’s say the prediction is right. Let’s say the prediction is half right. Let’s just say budgets grow substantially. What happens to that money? Because there’s a pattern in this industry that I think we need to name directly. When good fortune arrives — a new platform, a new capability, a shift in the media landscape — agencies have historically been better at capturing the upside than at reinvesting in the profession. More revenue has meant more of the same: more accounts, more billable hours, more senior hires, not more rethinking.
And right now, in the age of AI, there are two investments that I think agencies have an obligation to make if this windfall arrives. The first is genuinely rethinking the agency model in light of AI — not just adding a chatbot to the workflow, but asking the hard questions about what services still require human judgment, where AI can amplify capacity, and how to build new offerings around answer engine optimization. And by the way, a new billing model.
Stuart Bruce notes that Gartner explicitly rejects the efforts of SEO and marketing companies to pivot into this space, recognizing that answer engine optimization requires communication-specific skills to balance stakeholder trust and platform requirements. That’s an opening for PR, but only if agencies actually build those capabilities rather than outsourcing them to MarTech vendors.
The second investment, and this one matters a lot to me, is in rebuilding entry-level pathways into the profession. AI has already been eroding the grunt work that used to serve as the training ground for new communicators. As one analysis put it, the traditional deal of entry-level work — trading rote labor for mentorship — that’s dying. The learning curve is being automated, leaving early-career professionals stranded between AI agents and senior incumbents.
If PR budgets double, agencies will have the resources to do something about this. They could create structured apprenticeship programs. They could invest in training that teaches new communicators not just to use AI tools, but to supervise and interrogate them. They could build the next generation of practitioners rather than simply eliminating the entry points.
What I fear, and what I think is entirely possible, is that agencies will look at this budget doubling as a margin opportunity rather than a reinvestment opportunity. More revenue, leaner teams, higher profits. And five years from now, we’ll be asking where the next generation of PR professionals are going to come from.
So yeah, the Gartner prediction may well be right. AI does appear to favor the kind of credible third-party earned coverage that PR generates. And that’s genuinely good news for the profession. But good news is only useful if you do something smart with it. Neville, you’ve been watching the agency landscape in the UK and Europe for a long time. When you see a prediction like this, do you believe it? And what’s your read on whether the industry will rise to the moment or just cash the check?
Neville: I must admit, when I saw the article, I did say, “I don’t believe it.” British TV viewers might recognize that phrase from a comedy show 20 years ago. I did follow a lot of what people were saying, and all I saw was bubble, bubble, bubble, hype. What I noticed was what was missing: this was a marketing claim, as you mentioned, and Stuart Bruce wrote about that, and others have too, pointing out that this was a blog post from Gartner. There’s no data to back up any of it. There’s nothing cited. There’s nothing you could trust to prove it or to give you confidence in repeating it. Yet that’s what everyone has been doing, repeating this as fact.
The particular phrase that was repeated by Gartner and then mass repeated: by 2027, mass adoption of public LLMs as a replacement for traditional search will drive a 2x increase in PR and earned media budgets. But there’s no evidence behind that. Yet what we saw was mass repetition all over, LinkedIn in particular.
I did read an article well worth reading by Stephen Waddington, published on the 16th of March on his blog, about this topic. And he’s critical. His starting line is “when industry optimism outruns the evidence,” and that’s exactly where we are with this. I’ve seen sensible voices (you, Stuart, and others) saying that if this is true, then this is what it could mean, this is what could happen. But as with a lot of things we see, the maybes, perhapses, and coulds get brushed under the carpet, and before you know it, it’s presented as what’s definitely going to happen.
So I’ve not seen a huge amount of conversation about this, to be honest, except when this first appeared. That said, today I saw two posts on LinkedIn from people repeating this who obviously just came across the Gartner piece and they’ve reposted it.
Shel: The long tail lives.
Neville: Exactly. So Stephen makes a point in his post about GEO, generative engine optimization, and I think that’s contextually useful. He’s saying Gartner’s observation may ultimately prove correct, but the path from the insight to a doubling of budgets is far from certain. He says GEO remains highly contested, and I’ve seen others saying that too. The mechanics of how AI models select, weight, and attribute sources are still evolving. This is an era where budgets are being directed to support discovery work.
So what needs to happen instead, he says, is a call to action, I suppose, to communicators. When you see this claim being made, please challenge the argument. And if we aren’t set to see a boom in public relations work, some of that investment will need to be diverted to ensure the sustainability of earned media. And that, to me, is a very sensible point to make.
All of this is probably, in fact certainly, why I didn’t post about this on my blog. When I saw it, I was attracted to it, thinking this could be an interesting topic to stimulate some attention. Then I read it and started seeing others like Stuart saying, wait a minute. So I thought, no, I’m not going to join a hype bandwagon here without some further research. It didn’t seem compelling enough to spend the time on. Let’s see what emerges further from this, if anything. But like you said, Shel, if this turns out to be true, then happy days.
Shel: Yeah, I doubt it myself. I think what we’re going to see is an incremental increase in PR spending as a result of this. And that’s because we’re not going to see some mass revelation across every industry at the same time that, my God, we need to invest more in earned media so that we’re visible in search results that now happen in LLMs instead of search engines. This is going to be gradual.
One company is going to pick up on it, then another. But what I have seen ongoing, regularly, are new reports, new studies, new research coming out. It all validates that LLMs are in fact generating their search results based largely on earned media. And I think as people wake up to that and realize that if we want to be present in those results — it’s like showing up on the first page of Google search results — we want to be in the answer when somebody asks a question where our expertise, our thought leadership is relevant. Then you need to bolster your earned media.
One of the things that worries me though about this bolstering of earned media is how many more press release pitches am I going to get? How many more press releases that have nothing to do with me or what I do are going to show up in my inbox? You’re going to see reporters pitched way more than they’re being pitched now. And there may be some blowback from this as a result of that. It’s like, hey, PR industry, back off — too much. So there’s also that to consider.
Neville: Yeah, I agree. So the simple lesson here is: don’t believe everything you read online, and take time to pay close attention to what people are saying about something before you repeat it. Just be clear in your own mind.
Shel: Yeah, I was also going to say that I think owned media, the stuff you produce on your own website, deserves a renewed emphasis, so that you’re producing really interesting content that people start looking at. That counts too. It’s one of the categories of media included in this research. So you don’t have to rely on earned media all that much if you can do a great job of producing that content.
Neville: Good tip. OK, so earlier we talked about how work is priced. That was our piece about the billable hour. Now let’s consider how work is measured, because there’s another story that feels connected but from a different angle. The Financial Times reported that JP Morgan has started using technology to check whether the hours junior bankers say they work actually match their digital activity — things like keystrokes, meetings, and video calls. The bank says this is about well-being, about awareness, not enforcement, about making sure people aren’t overworked. And on the surface, that sounds reasonable.
But when you look a bit closer, it raises some uncomfortable questions. What’s really happening here is a shift from reported work to observed work. Not what you say you did, but what the system can verify. And that’s where the reaction gets interesting.
If you look at the comments on the FT’s post about this, there’s a very clear pattern. Some people see this as logical, almost inevitable. In a data-driven industry, of course you measure activity more precisely. But a lot of the reaction is skeptical, even uneasy. You see comments like, “this really screams we trust our employees.” “This is a classic case of measuring what’s easy instead of what matters.” “Big Brother is watching you.”
And then there’s a more nuanced point that comes up repeatedly. Does this actually improve anything, or does it just change behavior? Because if people know they’re being measured on activity, they optimize for activity. More keystrokes, more visible presence, more signals that look like work — but not necessarily better outcomes.
And that connects directly to the earlier discussion about billing. If AI is automating more of the actual work — the analysis, the modeling, the drafting — then what exactly are we measuring here? Time, activity, presence, or value?
There’s also a deeper cultural question. Investment banking has long had a reputation for extreme hours. JP Morgan has already tried to address that, capping weeks at 80 hours, for example. 80-hour weeks. The days of 40-hour weeks are a distant memory, obviously. But if people were underreporting hours to stay on deals, then the issue isn’t just measurement — it’s incentives, it’s culture. Technology can surface that, but it doesn’t resolve it.
So this opens up some bigger questions. Are we moving towards a world where all knowledge work is continuously monitored and verified? Does that improve trust or undermine it? And if both pricing and measurement are shifting at the same time, what does a fair day’s work even mean anymore?
Shel: Absolutely. One of the things we keep hearing about AI is that organizations are going to have to rethink things like workflows, and that organizations won’t look anything like they do today in five years because of AI. Are people assuming it will still take someone 40 hours to do what used to take 40 hours if all of that grunt work is being taken over by AI?
On the other hand, I have seen that AI has increased the number of hours people are spending on their jobs. There’s some very recently released data showing that people are more stressed now with AI in the picture. And if people are putting in more hours, is this really an issue?
I’m also always struck, as you mentioned in the report, by the signal of distrust this sends. I’ve always felt that the availability of tools that allow this kind of monitoring raises the question: just because you can, should you? And no, I don’t think you should. I think there are better ways to determine whether your people are working, and looking at their outputs is the best of them. Have they delivered what you expected them to deliver?
Because when you destroy the trust that you might have had, or perhaps you never had trust in your organization in the first place, if you have new hires who come in and find that they are being monitored in this way, they’re just inclined to find ways to cheat. I saved an article in my link blog not too long ago from the HR Digest about key jamming.
The point of the article was that if you have employees doing this, you have a bigger issue. But if you haven’t heard of key jamming: these are easily available gadgets that remote workers place on their keyboards to press a key continuously. To the monitoring software, the keyboard looks active and the employee appears to be working those hours, when they could be off doing whatever they want.
I imagine some keystroke monitoring software has been updated to address this, checking that people are typing real words or real numbers and not just repeatedly striking the same key. But then employees will figure out the next thing, or the companies that sell these gadgets will, to make it appear the employee is working.
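For a rough sense of why naive key jamming is easy to flag, here is a toy sketch. It is not any vendor’s actual detection logic, just the simplest possible heuristic: genuine typing uses many distinct keys, while a jammer repeats one.

```python
# Toy sketch: a jammer holding down one key produces a keystroke stream
# with almost no variety, while real typing uses dozens of distinct keys.
# Counting distinct keys in a window is the simplest possible tell;
# real monitoring products are presumably more sophisticated.

from collections import Counter

def looks_jammed(keystrokes: str, min_distinct: int = 5) -> bool:
    """Flag a window of keystrokes whose distinct-key count is
    implausibly low for genuine typing."""
    counts = Counter(keystrokes)
    return len(counts) < min_distinct

print(looks_jammed("aaaaaaaaaaaaaaaaaaaa"))       # True: one key repeated
print(looks_jammed("drafting the q3 report now"))  # False: normal typing
```

Which illustrates the arms-race point: as soon as detection gets slightly smarter, the workaround has to get smarter too.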
Better to build trust so that the employees will want to produce great work for the organization that they love working for than to destroy trust and implement these kinds of monitoring tools.
Neville: So it’s interesting. JP Morgan is quite resolute in its defense of this, because, as they say, they’re doing it to help junior employees not overwork. There was a case here where a Bank of America intern died in 2013, which the coroner linked to long working hours. And anecdotes have emerged constantly since then about people who are totally wrecked emotionally by the hours they have to work.
To be fair to JP Morgan, they’ve responded to that at scale across the organization. The trouble is that nearly every comment I’ve seen on this is extremely skeptical about their true motive. So they’ve got a credibility problem in explaining this well. Their prepared statement says this is about awareness, not enforcement; that it’s designed to support transparency and well-being and to encourage open conversations about workload. They’re going to roll it out much more widely across the organization.
The estimate is based on employees’ weekly digital footprint, including video calls, desktop keystrokes, and scheduled meetings. But people being people, part of the article’s thrust is what some junior employees do to tick the box that says they’re doing okay while still putting the time into the deals they’re trying to close. If they followed the cap to the letter and reduced their hours, they wouldn’t be able to close the deal. I get that. So they’ll find ways to work around this.
And I wonder: is this inevitably what we should expect to see in every organization? Or should organizations approach this in a way that doesn’t encourage workarounds in the first place? I don’t know. My sense is that we’re going to see a huge amount more of this kind of thing in service-industry firms in particular, starting with banks, I suspect.
Shel: I hope not. I mean, let’s take them at their word. Let’s say that this is their solution of having Big Brother looking over employees’ shoulders for the employees’ benefit. Like I said, let’s take them at their word. They don’t want employees overworking because they don’t want them dropping dead at their desks. Great. That’s a great thing.
You do that by having well-trained managers who understand that their role is to set expectations and to display the kind of caring for the members of their teams that leads them to make sure they’re not overworking. Where I work, we are working really hard in communications, in HR, and at the executive levels to develop this culture of managing, where managers are checking in on employees to make sure they’re okay. We’re training managers to watch for signs of mental distress among employees and then reach out to them and say, hey, let’s take care of this, right?
It sounds to me like JP Morgan would rather implement a Big Brother program than have engaging managers, one of the pillars of employee engagement, I might add. Why do people leave organizations? Fifty percent, according to some research, leave because of their boss. And you know, if you have this churn among your junior people, maybe that’s because you’re doing a piss-poor job of training your managers to be really good managers. And if you did that, you wouldn’t need to erode the trust of your employee base by implementing Big Brother systems.
Neville: That makes total sense. I agree with you. But I’m wondering, maybe there’s something structurally amiss here. So for instance, the FT says in 2024, JP Morgan appointed a senior banker to oversee the well-being of junior staff. JP Morgan has since curtailed weekend work and also capped the working week for younger employees at 80 hours, typically based on self-reported numbers. That’s key, that last bit.
This process has proved imperfect as some junior bankers misreport the hours they work. One issue is they declare fewer hours than they have actually spent to avoid being pulled from existing deals or to ensure they can still be added to new ones. So I would say, if we kind of know this kind of behavior is going on, what are we going to do to address it and try and bring them around to our thinking? But that requires structural change in the organization as to how you do all this.
Shel: I have an answer. If AI is saving you money, use that money to hire more junior people so that nobody has to put in that kind of time. So staffing should increase as a result of the use of AI, not decrease, says I.
Neville: Are you listening, JP Morgan? Well, yeah, that’s a fair comment. Reading a bit more of the FT piece, it focuses on workplace surveillance technologies generally. So it’s not necessarily AI doing this, although AI must be in there somewhere.
Shel: No, no, I understand. But if we’re using AI in the organization and it’s lowering costs because the rote work is being done by the AI, those savings could go to the additional staff. So nobody has to put in 80 hours.
Neville: Yeah. Well, I think it’s a problem across the sector because the FT quotes Goldman Sachs, for instance: junior bankers on occasion have been pulled aside and told to rest when its internal electronic monitoring was triggered. Get that. That’s how they’re watching all the time.
I think the comment someone made on the FT’s piece about, you know, we’re going to see more of this — I think we will. It is clearly not perfect. I’m reminded a little of some of the stuff I paid a lot of attention to a couple of years ago about surveillance in China and the surveillance society in China, where you are monitored constantly all the time by the state. And it doesn’t necessarily mean central government, but the local way you live — the town, the city — monitors everything you do: what you spend your money on, what time you get up, what time you get on the train to go to work, how you clock in, you swipe your card — all that.
That’s built into their society and structure. We are probably heading that way, I would argue, in Western countries, notably some European countries. I don’t know about the States, Shel, to be honest. I don’t really know whether this is likely to become prevalent there anytime soon. I wouldn’t be surprised if it is, particularly if it’s done covertly rather than openly and transparently, which I think is the likelier route in America.
Shel: Well, mass surveillance has definitely been in the news in the US lately with Anthropic pushing back on the Pentagon’s insistence that they be able to use Claude for that.
Neville: Yeah, I mean, we’ve got experiments going on here that make the headlines now and again, although no one seems unduly concerned: police in some jurisdictions are trialing facial recognition technology that is now far superior to what’s been used before, scanning people as a matter of course in any public place. That, I would say, is an inevitability. We’re going to see that.
So what does that mean for organizations? That’s a broad avenue to go down, a very wide topic. But in an organization, it surely becomes understandable, if not acceptable, that when you show up at the office to work, you can expect to be monitored. And by the way, showing up at the office is still a thing for many organizations, even though I’m now seeing in all the newspapers here that, because of the war in Iran and the price of oil shooting up, there’s talk that one way to help reduce energy usage is to work from home, drive less, and drive slower.
So that kind of talk is now starting to permeate public discourse, and I wonder what difference it will make to any of this, because if more and more people want to work from home, that reverses the return-to-office trend. Are we going to see a backlash from employers who demand people come to the office? These are just questions; I don’t have answers. But it’s part of the picture. We’re facing a kind of change that has good points, I can see that quite clearly, but the state we’re at with all of this is alarming.
Shel: Yeah, just for a point of interest, yesterday I watched a video on YouTube. It was Senator Bernie Sanders talking to Claude. This is on YouTube. I’ll share the link in the show notes. He’s asking Claude questions about what AI can do in terms of this kind of surveillance, its monitoring of people. And Claude is very, very candid in its answers to Senator Sanders. It’s about 11 minutes. I think it’s really worth watching because it surfaces a lot of these issues, and as a society, I think we have to decide whether this is something we want in the workplace or in general.
Neville: I agree. That’s interesting.
Shel: Well, thank you, Dan. Great report. I have to admit that I have been neglecting my Mastodon instance. It’s called Mastocomm, C-O-M-M, for communications. I set it up when I figured that it was an easy thing to do and a great way to learn about how to establish an instance in the Fediverse. And I haven’t been taking care of it lately. And Dan, your report has inspired me to go back. I’ve been away so long, it wanted me to log in.
But it’s still there. It’s still up and running, which means I still have money coming out of my checking account every month to pay the fee to the service I use to host it. So as long as I’m spending the money, I might as well manage that. So thanks for the reminder, Dan.
Neville: Yeah, good report on that. I’ve not listened to your audio yet. But thinking about Mastodon, I don’t go directly to Mastodon. I haven’t been there this year. What I do is every time I post on Threads, it posts to the Fediverse. And so I do it that way. It’s cheating a bit because I’m not actually engaging with anyone there at all. But I get quite a steady stream of engagement back, people who like and so forth. And I do occasionally do the same myself via Threads. So it’s a lazy approach to doing it. But I’m okay with that because I’m present via Threads and that works well. And it’s a useful way of keeping in touch. If Threads is more likely to be your primary engagement channel rather than Mastodon, that’ll work quite well.
Shel: If anybody’s interested in joining the Fediverse and being part of a Mastodon instance that is focused on communication, join me: mastocomm.org. I’ll look for you there.
Shel: A professor at Syracuse University’s Newhouse School recently made a point that deserves to be heard beyond the J-school world. Jason Davis, who specializes in detecting disinformation, said the challenge today isn’t really about spotting fakes anymore. The AI tools are so good now that there just isn’t much that we can catch. To break the misinformation amplification cycle, people need to apply critical thinking before they decide to pass something on.
Now that connects to something I’ve been watching closely, because the misinformation problem has moved well beyond being a journalism problem. It’s a business problem now, and that means it’s a communication problem. The scale is pretty significant. Deepfake incidents tracked globally surged from about 500,000 cases in 2023 to over 8 million last year. That’s roughly a sixteenfold increase in just two years. A recent executive survey found eight in 10 executives are concerned about AI-driven misinformation impacting their brand. Yet many admit their companies aren’t fully ready to detect or respond.
A University of Melbourne/KPMG global study of 48,000 people across 47 countries found 87% want stronger laws to combat AI-generated misinformation. And a survey found that fewer than four in 10 Americans say that they can confidently spot AI-generated content, and 88% say it’s harder now than a year ago to tell what’s real online.
So who’s fighting back and how? Sophisticated newsrooms — think the New York Times, Bellingcat, investigative outlets worldwide — are now using multi-layered verification: a combination of reverse image search, metadata analysis, and geolocation cross-referencing to authenticate content. Reporters are using AI itself as a detection tool, analyzing thousands of posts to detect bot behavior by identifying patterns in timing, repetition, and network activity.
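To make that concrete, here is a hedged sketch of the timing-and-repetition heuristics just described. It is not the method any particular newsroom uses; it just shows the two simplest signals, metronomic posting intervals and repeated text:

```python
# Sketch of two simple bot signals from the description above:
# (1) posting-interval regularity (bots are often metronomic) and
# (2) duplicate-content ratio (bots repeat the same message).
# Thresholds here are invented for illustration.

import statistics

def interval_regularity(timestamps: list[float]) -> float:
    """Coefficient of variation of gaps between posts; near zero
    means metronomically regular posting."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or statistics.mean(gaps) == 0:
        return float("inf")
    return statistics.stdev(gaps) / statistics.mean(gaps)

def duplicate_ratio(posts: list[str]) -> float:
    """Fraction of posts that are exact repeats of an earlier post."""
    return 1 - len(set(posts)) / len(posts)

def looks_botlike(timestamps: list[float], posts: list[str]) -> bool:
    return interval_regularity(timestamps) < 0.1 or duplicate_ratio(posts) > 0.5

# Example: a post every 600 seconds on the dot, same text each time.
ts = [0, 600, 1200, 1800, 2400]
msgs = ["Great product!"] * 5
print(looks_botlike(ts, msgs))  # True
```

Real detection systems layer network analysis and account metadata on top of signals like these, but the underlying idea is the same: machines leave machine-like patterns.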
Beyond individual newsrooms, the Coalition for Content Provenance and Authenticity, that’s the C2PA, is building broader infrastructure. They’re backed by Adobe, Microsoft, the BBC, Google, Meta, OpenAI, and others. With that backing, they’ve developed an open technical standard that functions like a nutrition label for digital content, establishing its origin and edit history. The U.S. Cybersecurity and Infrastructure Security Agency endorsed this approach in January last year. Adoption is still limited, but the standard exists and it’s worth watching.
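For a rough sense of what a provenance check involves conceptually, here is an illustrative sketch. It is not the C2PA specification or its tooling (C2PA embeds cryptographically signed manifests with edit history inside the media itself); it shows only the core idea of comparing content against a fingerprint recorded at capture time, using a hypothetical sidecar manifest:

```python
# Illustrative only: the real C2PA standard carries signed manifests
# inside media files. This sketch shows just the fingerprint-comparison
# idea using a plain SHA-256 hash and an invented manifest format.

import hashlib
import json

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def check_against_manifest(path: str, manifest_path: str) -> bool:
    """Compare a file's fingerprint to the hash recorded in a
    hypothetical sidecar manifest produced at capture time."""
    with open(manifest_path) as f:
        manifest = json.load(f)
    return fingerprint(path) == manifest.get("content_sha256")
```

The “nutrition label” framing captures what the signed manifest adds beyond this: not just whether the bytes changed, but who created them and what edits were made along the way.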
There’s also a striking research finding from a field experiment with readers of the German newspaper Süddeutsche Zeitung. Exposure to AI-driven misinformation reduced overall trust in news, but actually increased engagement with highly trusted sources. As synthetic content proliferates, credibility becomes scarcer, and as a result, becomes more valuable.
That finding has direct implications for us in organizational comms. A deepfake of your CEO, a fabricated press release, a manipulated earnings statement — these are no longer theoretical. A hacked news tweet in 2013 briefly erased $136 billion from the S&P 500. The tools to do something far more sophisticated are now consumer grade.
Deepfake fraud attempts grew by 3,000% in 2023, and humans detected manipulated media only 24.5% of the time. So practically: monitor for impersonation of your executives and brand. This belongs in your communications infrastructure. It’s not just an IT thing. Establish a verify-first culture inside your organization. Have pre-drafted response templates ready for the scenario where fake content goes viral under your or your organization’s name.
And invest in your organization’s credibility before a crisis arrives, because that research finding tells us audiences under information stress return to the sources they already trust. The newsrooms dealing with this are systematic. They document their processes and when they can’t definitively authenticate something, they say so. That’s the standard every comms team should hold itself to.
Neville, I know you’re watching all of this from across the Atlantic where the EU AI Act is pushing content labeling into requirements under law by August 2026. Are organizations taking this seriously? And is this regulatory pressure in Europe making any difference?
Neville: To your last point, I don’t think it’s making a dramatic difference yet. Awareness is rising; I’m seeing more people talking about this topic online across Europe, and here in the UK too. But I think it requires far more, and more effective, communication to bring the message home to people about this huge topic. So it’s early days.
We’ve got debate continuing here in this country about online safety and all these other issues, which tends to obscure important details like this one that do require further debate. What I pay attention to, certainly, are the broad debates about all of this, but also what people are actually doing. You mentioned some examples in your introduction about what some media broadcasters in particular are doing to verify the veracity of content. And I saw an excellent article the other day about what Wikipedia is doing in this area, because that’s a place at high risk of misinformation and disinformation.
But there’s no uniformity from what I’ve seen, certainly. There are lots of homebrew solutions people are suggesting, and lots of good solutions some respected organizations recommend, but there’s not a big groundswell of action on this yet, it seems to me. So I’d be interested to hear what listeners in the UK and across EU countries have to say about what they’re seeing in this area. But I don’t see a huge amount of conversation going on about this.
Shel: And I’d really appreciate, listeners, if you’re in organizations that are doing anything to identify misinformation and to catch it before it’s used or even redistributed — what are you doing? How are you going about that? Is there any infrastructure for this that’s being implemented? I’d really like to know because I think this is going to become a bigger problem faster than most people are aware of.
Neville: Yeah, one thing I am seeing talked about, and it caught my attention quite dramatically, is the amount of fake news in a broad sense, and misinformation particularly about the war in Iran: the use of video that is simply fake. I’m also seeing genuine video that has to be explicitly flagged as genuine just so people will believe it’s not fake.
The reality though is that like most things you encounter online, how do you really know? And what do you do if you see something you think, I’m going to share that with my network? What do you need to do before you do that? Most sensible people will take those precautionary steps, the most fundamental of which: how do you trust what you’ve seen? Is the source credible? Is it a reliable source? If it’s a media property, or even before that, who else is talking about this?
So these are things that I do as a matter of course now on almost everything I encounter online, particularly if I’m thinking of sharing it. I’ve yet to be caught out by not doing that. I make it a point, and it’s partly shaped by the fact that I’m sharing far less than I was a couple of years ago. I don’t post a lot on social networks, except stuff that I think is really interesting to share with people who follow me, or just because I feel like sharing it because I think it’s interesting.
And that works. No other heavy message behind any of this stuff. But I do carry out due diligence. And I think I do it reasonably well because I’ve yet to be caught out. Now, of course, someone listening to this might say, well, let’s test him out on something then. OK, fine.
Shel: Now that we’ve heard you say this…
Neville: So, right. Go for it and do that. Let’s see how we go. But I think this is the status of where we’re at. Things are changing because of world events, and these so-called bad actors are increasing; there are more and more of them. We have events taking place in the world now, note what’s going on in the Middle East, that lend themselves to more of this. You’ve got to really do your due diligence on things that you might not have felt you needed to before.
Shel: Yeah, and I think due diligence needs to go beyond the tools that can detect a deepfake. You’ve got to remember that people were sharing disinformation before there was AI. So you run your algorithm, you put a video through a tool and it says, yep, this is real video, it’s not AI generated — but the claim is that the video shows something from the Iran war when in fact it was shot years ago during, say, the Iraq war, and somebody just grabbed that clip and claimed it’s from the current conflict. This happens all the time. It still happens today. The same with weather events: that footage isn’t from this storm, it’s from one five years ago.
So we have to be diligent and not just rely on the tools, and we have to come up with some solutions. I remember years ago, when we reported it here and blockchain was still a topic of conversation in digital circles, Ike Pigott had recommended a tool. I don’t remember exactly how it worked, but as you shot video, it was recorded into the blockchain, which attested to its authenticity. That became a way for people to see that it was genuine video, not manipulated and not a deepfake — it was actually shot on a video camera and uploaded as a blockchain record in real time. So there are potential solutions out there. We need to get serious about implementing them in this profession.
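Since neither of us remembers exactly how that tool worked, here is only a sketch of the general idea: hash each video segment as it is captured and chain the hashes with timestamps, so the footage can later be checked against the anchored records. A plain Python list stands in for the blockchain here.

```python
import hashlib
import time

def chain_segment(prev_hash: str, segment: bytes, ts: float) -> str:
    # Each link commits to the previous link, the segment bytes, and the
    # capture time; altering any segment later breaks the whole chain.
    payload = prev_hash.encode() + segment + repr(ts).encode()
    return hashlib.sha256(payload).hexdigest()

ledger = []  # stand-in for hashes anchored to a public blockchain
prev = "genesis"
for segment in [b"segment-0", b"segment-1", b"segment-2"]:
    ts = time.time()
    prev = chain_segment(prev, segment, ts)
    ledger.append((ts, prev))

def verify(segments, ledger) -> bool:
    prev = "genesis"
    for segment, (ts, recorded) in zip(segments, ledger):
        prev = chain_segment(prev, segment, ts)
        if prev != recorded:
            return False
    return True

print(verify([b"segment-0", b"segment-1", b"segment-2"], ledger))  # True
print(verify([b"segment-0", b"DEEPFAKED", b"segment-2"], ledger))  # False
```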
Neville: Yeah, that’s a good example of the blockchain one, although that was pretty niche. That was pretty out on the edge, as it were. There were lots of things like that that just didn’t survive and disappeared. Things change, things evolve, and people are trying new things. I don’t mean bad guys, but in a good way. So let’s see how that goes. But you need to keep vigilant on all this.
And by the way, when I mentioned misinformation, I wasn’t thinking of deepfakes and that kind of thing. It’s more the fundamental stuff that crosses your screen or your newsfeed every day: a claim that someone said something or did something, and it looks interesting and fine. Don’t trust it until you verify it. If it’s on the BBC or CNN or any other broadcaster, or the Süddeutsche Zeitung newspaper, the one you mentioned earlier, Shel — that’s a good bet that it’s OK.
But you know what? Some media recently have been caught out with fakes. So it still pays to do your own due diligence, particularly if that content is something you’re going to use in a way that could embarrass you if it turned out to be fake or simply wrong. So it’s worth doing. Most people think that they don’t have time to do that. You have to make the time. This is part of your future.
And AI has a role here. Arguably, you could say, well, I need to do this myself. No, you don’t really. Your favorite chatbot, if you trust it and it knows enough about you, can do the searching and find the sources. You then check them. It can check them too, but you still have to do that part; the AI just makes it easier. There’s no magic bullet or shortcut here. And it’s worth it. You learn a lot doing this, too. I’ve learned huge things from doing all this myself, and it’s been very, very useful.
Neville: So there we are. OK, let’s talk about bot traffic. In an interview with TechCrunch at South by Southwest, literally a week or so back, Cloudflare CEO Matthew Prince said that by 2027 — so, as you pointed out earlier, we’re eight months away basically — bot traffic will exceed human traffic on the internet. That’s not entirely new in principle. Bots have always been part of the web. But what he’s describing is a change in scale and function.
Now think about this: Cloudflare — I don’t have the exact number, but don’t they manage like 30% of all the traffic on the web that goes through some of their servers somewhere? They do caching. They do all sorts of interesting things with people’s data. I use it on my blogs. I’m sure we use it on the FIR network. I mean, it’s part of the plumbing of the internet now. And you might remember a month or so back, Cloudflare was all over the news because they were hit by a distributed denial-of-service attack or some such that took large chunks of the internet offline because people like Amazon and some of those big properties use Cloudflare too. So it’s quite something.
Anyway, historically bot traffic has been relatively stable, around 20%, largely driven by search engine crawlers. What’s changed is the impact of generative AI, said Prince. His point is that AI agents behave fundamentally differently from human users. A person researching a purchase might visit a handful of sites. An AI agent performing the same task might visit thousands of sites. This is not incremental growth. It’s a multiplier effect — not just more traffic, but a different kind of traffic.
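Prince’s multiplier point is easy to see with back-of-the-envelope numbers. These figures are illustrative assumptions, not Cloudflare data: even a small share of tasks delegated to agents flips the human-to-bot ratio.

```python
# Illustrative assumptions only: a human research task touches ~8 pages,
# an agent doing the same task touches ~2,000, and crawler traffic has
# historically been about 20% of the total.
HUMAN_PAGES = 8
AGENT_PAGES = 2_000
BASELINE_BOT_SHARE = 0.20

for agent_share_of_tasks in (0.00, 0.01, 0.05, 0.10):
    human = (1 - agent_share_of_tasks) * HUMAN_PAGES
    agents = agent_share_of_tasks * AGENT_PAGES
    crawlers = human * BASELINE_BOT_SHARE / (1 - BASELINE_BOT_SHARE)
    bot_share = (agents + crawlers) / (human + agents + crawlers)
    print(f"{agent_share_of_tasks:4.0%} of tasks via agents -> "
          f"{bot_share:5.1%} of requests are non-human")
```

With these made-up numbers, delegating even 1% of tasks to agents pushes non-human traffic above 70%, which is the shape of the shift Prince is describing.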
That has consequences at three levels: infrastructure, economics, and behavior. First, infrastructure. If AI agents generate orders of magnitude more requests than humans, then the web becomes a system that increasingly serves machine activity. Prince talks about the need for new infrastructure, including ephemeral sandboxes where agents can execute tasks without overwhelming the broader network.
Second, economics. The commercial web has been built around human attention: visits, impressions, and clicks. If a growing share of traffic is non-human, that model doesn’t just weaken — it becomes misaligned with how the web is actually used.
Third, behavior. Prince characterizes this as a platform shift comparable to the move from desktop to mobile. If that’s right, then the way information is discovered, consumed, and acted upon changes fundamentally — and not necessarily by humans.
That raises a set of implications that go beyond infrastructure. If machines are increasingly intermediating access to information, then visibility is no longer just about being found by people. It’s about being processed, selected, and used by systems. This links back to the earlier themes. We talked about how AI changes what work is worth. We followed that with how AI changes what and how work is measured. Here, it’s changing the environment in which both of those things happen.
So this is less about traffic and more about control — who or what is actually navigating the web. Which leads to some important questions. If AI agents are doing more of the searching, what does it mean to be visible online? If traffic no longer equates to human attention, how do organizations think about value? And if this is indeed a platform shift, what replaces the current models that underpin the web?
Shel: These are interesting questions, and I think that this is ultimately more a matter of evolution, just like the web was, even the internet before we had the graphical interface of the web. It’s a shift in what’s doing what. But at the end of the day, all of those bots have been deployed by whom? I mean, I have agents out there. These are just set up on Claude and on ChatGPT that are going out and doing searches and coming back and giving me reports. Me, I’m a human, last time I checked.
And I’m using the results of the work that those bots do. So these agents are proxies for the humans who need something done with this information, whether it’s delivering a report or creating a spreadsheet or what have you.
These are human-deployed bots. Ultimately, in every case, a bot has been deployed by somebody for some purpose. And having your content out there for those bots to find, so the results get delivered back to the human and you’re visible there — all that’s doing is sparing the human hours of sitting there searching. The AI goes out and does the searching and delivers back results, but those results are still being used by people.
So this doesn’t concern me all that much, unless there’s something going on here that I’m not aware of with agents suddenly creating themselves to go off and engage in activities that have no human behind them, in which case we’re in the realm of science fiction. And I don’t think we’re there yet.
Neville: Well, that could be the case, although I think there are signs that we might be heading in that direction. Thinking about what we talked about in the last episode on that darker place that you cited, Ethan Mollick talking about what happens if it all gets taken over by an AI — that question applies here as well. You’ve got the AI agent instructing other AI agents. And I read someone talking about that very topic in quite a compelling way that this is already happening. So that wouldn’t surprise me one bit at all. So we’ve got to think of that too.
Shel: Yeah, now we’re talking about two different things, right? I mean, we’re talking about bots and agents here as an umbrella topic. But the fact that bots have been deployed to search and report back is one thing. Bots that are creating content is another, which is actually the topic of my next report.
Neville: Got it. Yeah, you’re absolutely right. We were talking about bots. So they are deployed by humans to achieve certain things. I guess I could project that out and say what happens in a darker place where the bots are deployed by AI agents unbeknownst to the human. I mean, I’m not Skynetting here, by the way. This is just projecting the thought out. And I welcome these kinds of discussions on “what if” when we see what’s happening now. It immediately makes you think, yeah, but what if? So this is part of how we generate good conversation about this kind of topic.
But it is interesting, the way Matthew Prince framed it: a person shopping at an online retailer might do a couple of dozen searches, but an AI instructed to do the same task sends out a bot that performs thousands of searches in a short period of time. And you suddenly see, wow, the scale of this is absolutely phenomenal. That’s really part of what Prince is arguing: when bot traffic overtakes human traffic, we’re confronting a change of an order of magnitude, driven by the system itself.
Is he ringing alarm bells here? I’m not sure whether he is or not, but he’s looking at the need for a new kind of infrastructure to take care of this. And I think that’s actually a good avenue to explore.
Shel: Probably. I mean, Google has always used bots to go out and scour the web — we called them spiders back in the day. But Google ran its own crawl and it found everything, those millions and millions of sites, and all that information resides on Google’s servers. So when you do a search, it’s not going out onto the web, right? It’s looking in Google’s own data centers and giving you those results. Those spiders, those bots, are always out there, always running, but it’s one coordinated crawl run by Google.
Now with AI, you’re asking it to go out in real time and scour the web. So yeah, it’s sending out thousands in order to do essentially the same work that Google did. And then it brings you back the result in that narrative output that you get. So that’s why we’re seeing so many more bots out there. Is this a problem? I’m not an engineer, so I don’t know.
Neville: No, I don’t know either. I’m not sure it is a problem. But I’m cognizant, paying attention to what Prince is saying, that none of this is incremental growth — it’s a multiplier effect. And could it be that we’re at risk of everything grinding to a halt? Is that what he’s saying?
The consequences I listed — infrastructure, economics, and behavior — make sense, and they are connected. Agents generating orders of magnitude more requests than humans could is part of it, and I can see that. The web then becomes a system that increasingly serves machine activity; that’s how he makes the connection. He talks about the need for new infrastructure, including sandboxes where agents can execute tasks without overwhelming the broader network. That makes a lot of sense.
Shel: Yeah, I like that. Nothing wrong with that.
Neville: I use sandboxes myself, so I understand conceptually what that means. Then there are the economics of it all, where the behavior is now totally different. Visits, impressions, clicks — that’s what humans did, or still largely do. But as Prince argues, if a growing share of traffic is non-human, that model doesn’t just weaken — it becomes misaligned with how the web is actually used today.
OK, does that mean we need to change that? Well, yes, it does. How do we do that? Well, that’s part of the bigger debate. On the behavioral side, he’s likening this to the move from desktop to mobile. If he’s right, then the way all this is discovered, consumed, and acted upon changes, and not necessarily by humans; it’s changed by the AI. Is this a bad thing? I don’t know. Maybe he’s just raising a hand of caution and ringing the alarm bell. But it certainly is provocative, what he’s suggesting.
Shel: Yeah, certainly there’s absolutely going to be more bot traffic on the internet. That’s inescapable with all of this. Maybe the LLMs, the labs, find ways to confine the searches so they’re searching relevant sites to reduce that traffic. I don’t know.
Neville: Yeah. So let’s hear your connected piece on this, then. Assume that humans are not at the heart of all of it.
Shel: Sure. And you mentioned Ethan Mollick earlier. I mentioned this in an earlier episode a couple of weeks ago, I think. But he said that when he posts something, he can tell that about 70% of the comments that are left on his posts have been generated by bots. And it’s weakened the value of LinkedIn to him, which is discovering smart people with intelligent thoughts and perspectives. And 70% of that is now being generated by bots.
So we have bots that are now creating content. So you talked about bot traffic — stay with that theme, but focus more on the content. A new peer-reviewed study just published in the Journal of Public Relations should be required reading for anyone responsible for managing an organization’s reputation and messaging. The paper is titled “Social Bots as Agenda Builders: Evaluating the Impact of Algorithmic Amplification on Organizational Messaging.” And it came to my attention by way of Bob Pickard, one of Canada’s most respected PR practitioners and someone whose commentary on this research carries special weight. More on that in a minute.
The research, led by Philip Arceneaux at Miami University, along with colleagues from the University of Arizona, University of Texas, and University of Florida, is the first study in public relations scholarship to empirically measure how social bots interfere with organizational messaging. The authors note they found no prior PR research addressing this specifically, which is remarkable given how long the threat has been visible.
The study analyzed nearly 900,000 tweets generated during Ohio’s 2022 midterm elections. What the researchers found was that social bots successfully influenced the agenda formation process, most heavily in negative tone and most notably among the election campaigns. Bot messaging was most effective at influencing attribute salience — that is, how issues were framed and characterized — driving primarily negative sentiment. The bots were the strongest influencers of campaign agendas with measurable downstream influence on press and public discourse.
Here’s the distinction that Pickard zeros in on in his commentary. And I think it’s the most important insight in the entire body of research. The bots didn’t control what was discussed. They controlled the tone in which it was discussed. And as Pickard writes, that may be a more dangerous lever. Your organization puts out a carefully crafted message. The bots don’t need to invent a counter-narrative. They just need to inject enough negativity around yours that the frame gets corrupted before it can set.
A primary strategy social bots adopt is the creation of information disorder — information ecosystems filled with suspicion and distrust that erode public confidence. And as Pickard observes, this has a direct downstream effect on communications decisions. Distorted inputs produce distorted decisions. If your social listening is picking up manufactured sentiment — bot-driven negativity masquerading as genuine stakeholder concern — you may be prioritizing the wrong issues, reacting to the wrong pressures, and in some cases, misreading your stakeholders entirely. Some of what looks like groundswell may just be a bot farm.
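One practical corrective that follows from this: discount each post’s sentiment by the likelihood its author is automated before you aggregate. The scores below are hypothetical; in practice the bot-likelihood figure would come from a detection tool.

```python
# Each tuple: (sentiment in [-1, 1], bot_likelihood in [0, 1]).
posts = [
    (-0.9, 0.95), (-0.8, 0.90), (-0.7, 0.85),  # coordinated negativity
    (+0.4, 0.05), (-0.2, 0.10), (+0.6, 0.05),  # likely humans
]

raw = sum(sentiment for sentiment, _ in posts) / len(posts)

weights = [1 - bot for _, bot in posts]
weighted = sum(s * w for (s, _), w in zip(posts, weights)) / sum(weights)

print(f"raw sentiment:            {raw:+.2f}")       # reads like a backlash
print(f"bot-discounted sentiment: {weighted:+.2f}")  # closer to real stakeholders
```

With these toy numbers, the raw average looks sharply negative while the discounted figure is mildly positive, which is exactly the distorted-input problem Pickard describes.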
The asymmetry that Pickard describes is sobering. A small network of automated accounts can systematically degrade the messaging environment of a well-funded organization with a full communications team. And as lead researcher Arceneaux put it, it’s not natural selection anymore — it’s artificial selection by who controls the most bots.
A survey cited in the study found that 51% of leading communication professionals already reported that social bots present a clear threat to organizations and their reputations. And practitioners view social bots as the most pressing ethical challenge in public relations. And that was before generative AI made bot-produced content dramatically more convincing.
Why does Pickard’s voice matter here particularly? Well, when he blew the whistle on the Chinese interference at the Asian Infrastructure Investment Bank in 2023, hundreds of pro-China bots on Twitter targeted him with insults, accusing him of being an American agent, a white supremacist, and a neocolonialist. The pattern the researchers describe in the study — rapid negative amplification, coordinated framing, and agenda hijacking — isn’t abstract to Bob. He has operated inside of it.
And his observation that state-directed information operations seem to understand the bot asymmetry better than most corporate communications leaders is a pointed challenge to our profession.
The study recommends stronger media relationships, better investment in bot detection tools, and a return to traditional polling as a signal less susceptible to manipulation. And that’s sound advice. And on the practical side, research on bots’ impact on public discourse suggests their influence is most pronounced in the early stages of an issue — before credible sources establish the dominant narrative. Which means getting your authentic message out fast, before the negative frame hardens, is now a genuine strategic imperative, not just a good practice.
There’s also a real-world corporate illustration of this dynamic, and it’s one we’ve talked about more than once. In 2025, research found that roughly half of all the posts about the Cracker Barrel controversy in its early days were driven by inauthentic bot activity. A minor design story was artificially elevated into a culture war flashpoint before human communicators could get their footing. That’s the playbook now.
Neville, I know you follow this activity and information disorder closely and you’ve watched platform governance response in Europe in particular. What do you think? Are social platforms doing enough to protect organizations from bot-driven agenda hijacking, or are communication professionals essentially on their own here?
Neville: I don’t think they’re doing enough. The platforms are doing some things, but their attention is not really on this at all. I think any organization, any corporate communicator, needs to operate as if you’re on your own, and take the steps that are needed.
Reading Bob’s piece on LinkedIn, there’s an interesting turn of phrase he uses, describing “hands-on combat experience versus synthetic competitors gaming the algorithm in contested environments” as now extremely important. So make of that what you will, but you need to be up to speed with these developments. There are plenty of places you can get information, insights, and guidance from.
I think, though, this is the fundamental point Bob Pickard makes in his piece: some communication leaders are still fighting the last war. This new research soberly explains the new realities of the modern PR battleground.
Now, I have not read the study itself, Shel, the one you put in our Slack channel. It’s 34 pages of eight-point type, it seems to me. It’s big. So I would get my AI assistant to summarize the whole thing for me and give me the highlights. I haven’t done that yet, but I think I will, just to get a good understanding of this.
It seems to me that this is yet another example of the changes that are happening, whether we like it or not, that we have to pay attention to as communicators. We’ve touched on quite a few in this discussion today, and here’s another one. So I can’t really comment more than that, Shel; I’ve not read the report, which I am going to do. But Bob’s intro to the piece on LinkedIn is a good introduction, and it makes it easier to wade into the study itself. Although I think for most communicators, some kind of summary is what they’re going to need, rather than trying to read the whole thing.
Shel: Yeah, well, the bottom line is, I think, pretty simple. If you release information and it’s in somebody else’s interest to shift the tone in order to control the agenda, those bots are going to be deployed very quickly to create content that changes the framing of what you started, and what you started had a communication goal. You as a communicator need to be prepared for that. You need processes in place — and these are new processes and new workflows — to make sure the message you want people to understand is the one that fixes in their minds before the bots can come in and mangle it, because that’s what’s happening pretty routinely now.
Shel: And that will be a -30- for this episode of For Immediate Release. We do want to remind everybody again, because we mentioned it earlier: comment on what you’ve heard. If you have thoughts, experiences to share, or questions, share them. The place most people are doing that these days is LinkedIn; in fact, every comment we shared today was left on the LinkedIn posts where we announced the availability of a new episode. So if you follow Neville or me on LinkedIn, you will get notifications of those new episodes. That’s the place to comment.
You can always comment on the show notes. That’s where people used to do this all the time — remember when people commented on blog posts? You can still do that. You can also send us an email at [email protected].
Shel: Boy, am I overloaded with spam in that account, but absolutely not one comment in the last month. One of the things I do find in that email account is voicemail messages you have left. Just go to [email protected] and click Send Voicemail, and you can send us your comment that way; we’ll play it. We’d love to have another voice on the show. You can also record an audio comment yourself, attach it to an email, and send that to [email protected].
We also have the FIR community on Facebook. And there are lots of places that you can tell us what you think. We’d love it if you did. And we will share that on the next monthly long-form episode. That next monthly long-form episode is coming on Monday, April 27th. Neville, you and I will record that on Saturday, April 25th. So we will have our monthly episode then. Between now and then, not this week, but starting next week, we will have our shorter-form one-topic weekly episodes. It should be three or four of those before we get to the April long-form episode. And that will in fact be a -30- for this episode of For Immediate Release.
The post FIR #506: Battle of the Bots! appeared first on FIR Podcast Network.
23 March 2026, 7:05 am - 21 minutes 13 seconds
FIR #505: Social Media’s Big Shift
In FIR #505, Neville and Shel dig into Hootsuite’s Social Media Trends 2026 report, which argues that social media is no longer just a communication channel — it’s morphing into a search engine, cultural radar, and real-time research tool. They explore what it means for communicators when younger audiences treat TikTok and Instagram as their primary discovery platforms, and when Google itself starts indexing social content. The conversation also tackles “fastvertising” — the growing pressure on brands to react to cultural moments within hours — and whether that speed actually translates to bottom-line results or just burnout.
The discussion takes a provocative turn when Shel raises Ethan Mollick’s warning that public forums are being systematically overrun by machine-generated content, with research suggesting one in five accounts in public conversations may be automated. They weigh the AI paradox facing communicators: generative AI has become table stakes for social media production, yet 30% of consumers say they’re less likely to choose a brand whose ads they know were AI-created. Neville and Shel agree that social media can serve as both a publishing channel and a listening tool — but only if human-to-human communication can survive the rising tide of bot-generated noise.
Links from this episode:
- Social Media Trends 2026 | Hootsuite
- The 18 social media trends to shape your 2026 strategy
- Sferra Design video on Social Media Trends report | Instagram
- World-first social media wargame reveals how AI bots can swing elections
- AI bot swarms threaten to undermine democracy
- B2B Social Media Trends and Predictions for 2026
The next monthly, long-form episode of FIR will drop on Monday, March 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel: Hi everybody, and welcome to episode number 505 of For Immediate Release. I’m Shel Holtz.
Neville: And I’m Neville Hobson. Social media might be going through its biggest change since the rise of the news feed, and it’s happening quietly. Platforms that started as places to connect with friends are increasingly acting like search engines, cultural sensors, and even market research tools. It’s been a while since Shel and I talked about social media on the podcast, and frankly, that’s partly because the conversation often feels repetitive. New platforms appear, algorithms change, someone declares the death of Twitter again. That’s the kind of format that we seem to be following.

But every now and then, a report comes along that suggests something deeper is happening. Hootsuite’s new Social Media Trends 2026 report published last month argues that social media is no longer just a communication channel. It’s becoming something much broader — part search engine, part cultural radar, and part market research lab.

Take search, for example. Younger users increasingly treat platforms like TikTok or Instagram as search tools. Instead of Googling “best coffee shop in London,” they search TikTok and watch short videos from real people recommending places to go. And now Google itself has started indexing Instagram posts and surfacing short-form social video in search results. The line between social media and search is starting to blur.

At the same time, we’re seeing a strange tension around artificial intelligence. According to the report, most social media managers now use generative AI tools every day to write captions, brainstorm ideas, edit images or video. But audiences are increasingly suspicious of content that feels automated or synthetic. More than 30% of consumers say they’re less likely to choose a brand if they know its ads were created by AI. So brands are in a curious position. AI is becoming essential behind the scenes, but the content that performs best often needs to feel unmistakably human.

And culturally, social media itself is fragmenting. The report points to what it calls Gen Alpha Chaos Culture — absurd memes, distorted audio, and intentionally chaotic editing styles that dominate TikTok among younger audiences. Meanwhile, older audiences — that’s you and me, Shel — are gravitating towards almost the opposite aesthetic: nostalgic references to the ’80s and ’90s, calming, cozy content, and even posts about slow living and digital detox. I do some of that, but I also do the other stuff too. So it’s hard to pigeonhole me, I have to tell you that.

So reading this report left me wondering something slightly provocative. Maybe social media isn’t really social anymore. If discovery is driven by algorithms and search behavior rather than who you know, perhaps these platforms are evolving into something else — systems that surface information, culture, and trends in real time.

Which raises the bigger question for communicators. Are we still thinking about social media as a place to publish content? Or is it becoming something much more powerful — a tool for understanding behavior, culture, and trust as it unfolds online? Which leads me to a first question. If people increasingly discover products, places, and even news through TikTok or Instagram rather than Google, does that fundamentally change how communicators should think about social media?
Shel: I absolutely think so. I mean, this shift deserves way more attention, I think, than it’s been getting from marketers and communicators. We’re looking at a fundamental change in how people get information.

The rise of social media as a primary search engine — this is not a fringe behavior. In 2026, this is going to be the dominant reality for a massive swath of the population. Brands are just starting to get their arms around AEO. And now they’re going to have to apply the same efforts to social content that they’ve historically reserved for traditional search engine optimization. So captions and alt text and subtitles aren’t going to be nice-to-haves. These are the bedrock of discoverability.

And there’s a specific angle here for those of us in internal communications too. I mean, if employees are using TikTok and Instagram the way they used to use Google to make personal decisions, we have to ask if that behavior is bleeding into their professional research. And there’s data that suggests it is. A company called Alpha P-Tech did a study and found that 75% of B2B buy-side stakeholders are going to use social media to gather information about vendors and solutions this year. So this isn’t just a consumer trend. This is a professional evolution too.
Neville: Yeah, I would agree with that, I think. I mean, there’s a lot to unpack here from Hootsuite’s report, so I’ll throw out thoughts that occurred to me when I was reading it. It talks about something I’d not encountered before: “fastvertising,” if I pronounce it right — it’s a manufactured word, “fast” plus “advertising.” The question is, does the fastvertising culture create more risk for communicators, with things moving so fast that, according to Hootsuite, brands now feel pressure to react to trends within hours, if not less? Reacting too quickly can lead to tone-deaf, poorly-thought-through posts, I would say, as does Hootsuite, in fact. Are we moving into a world, then, where social media requires newsroom-style judgment and governance? What do you think?
Shel: Well, yes, and I think we’ve been there for a while. Remember the — what were they called — the war rooms that social media teams for various brands were using? Remember Oreo during their 100 Days of Oreo campaign several years ago now? They had a newsroom looking for trends, so they could take the post that was planned, based on somebody’s birthday or whatever, and if something major happened, switch it up and really quickly knock out one that was relevant to what was in the news. I remember they had one cookie with black and white stripes, and it turned out it was tied to a National Football League referee strike that had just been called. So yeah, I think brands have gotten accustomed to monitoring trends and knocking stuff out fast. Another one, I think it was a tequila brand with chocolate beans, pulled a trend out of Google Trends and said, let’s get that out there while this is hot. It was up quickly and did really well, that particular post, from whatever tequila company it was. So this is something a lot of brands are already accustomed to. I think the scale we’re talking about here, though, is probably not good. If you’re reacting to whatever you happen to see and not running some analytics, you risk being tone-deaf by jumping into a conversation that turns out to be not that big a deal. You risk saying something incongruous with the tone of the conversation because you rushed. I guess the only benefit is that everything’s moving so fast that in six hours, no one’s going to remember what you did.
Neville: Yeah, Hootsuite talks about this in the context of fastvertising, which is obviously the phrase du jour for something that’s been around a while: disrupting the content calendar. To that point, brands are now responding to cultural moments within hours, not days. 22% of marketers feel pressure to respond to trending topics or viral moments daily or a few times per week, and 37% feel a high level of burnout from that pressure, according to data from Adobe quoted by Hootsuite. Timing matters, they say: if you’re quick, you’re in; if you’re slow, you’re a laggard. But you still can’t prioritize speed over quality, and they cite 39% of marketers saying their content flopped due to rushing. So being the fastest isn’t necessarily the answer. Yet that seems to be the trend that’s building: that being fast is the important thing.
Shel: But the thing is, suppose you’re adept at this. You really have your finger on the pulse, or you have a big enough team that somebody there does, and they can craft just the perfect post to be part of whatever’s going on at the moment. Let’s say it’s a big success. It goes viral. Does that translate into sales? Does that translate into bottom-line results, or are you just one of the cool kids participating in the conversation? I’d like to see at least a correlation between being fast, and being good at being fast with this fastvertising, and getting the kind of results that pay the bills and incentivize the leaders of organizations to fund these kinds of efforts.
Neville: So being fast and furious isn’t necessarily the solution. OK, I get that. Let’s talk about algorithms. Hootsuite talks a bit about this, which I found interesting. If algorithms prioritize behavior over followers, which is what Hootsuite is saying is a trend that’s developing, does brand loyalty matter less? That reminds me of, I think, a very related theme to this we discussed probably five or six episodes ago about brand loyalty mattering less in certain circumstances. So if content reaches people based on micro-behaviors, asks Hootsuite, rather than follower networks, the old idea of building large follower communities might be fading. So they ask, is the new game about relevance rather than loyalty?
Shel: Well, I think relevance has always been at the heart of what we do. You can build a huge base of followers, people who have opted to get your content, who are very casual about what they see. We saw this in the data from the early days of the news feed as a forum for brands: one brand had a million followers who hardly ever came back and looked at its stuff again, while a competing brand had fewer followers who were constantly engaging with it. Which would you rather have? So brand loyalty can be valuable if you’re engaging that base rather than waiting for them to get your content in their feed, because that’s growing less and less likely. If that’s an effort you’re not willing to make, or you don’t think it will pay off, then yeah, brand loyalty is going to take a backseat to getting impressions through other means. But again, I want to see the line that connects those impressions, even that engagement, with your bottom line. I’m not convinced that participating in this fastvertising environment has produced those results. I have not seen a study that shows it has.
Neville: I think it’s interesting the kind of direction of travel that this seems to be pointing towards where one of the findings talks about engagement is no longer a big-deal metric. Even impressions aren’t. And that comes down to the ROI from micro-audiences. So it’s not clearly defined yet. This is still evolving. But it is shifting without doubt. So another point from the report. It suggests social listening and analytics are becoming real-time intelligence systems. They’re asking, is social media now the fastest research tool organizations have? Could social media become one of the most valuable organizational listening tools, asks Hootsuite, not just a publishing channel? That’s a big shift for communicators, they argue.
Shel: Yeah, it has been for a while, and I think the type of activity we’re seeing now probably elevates that value. But is it the most important listening tool? I don’t know. I think asking direct questions in a survey and a focus group still has tremendous utility. But if you’re looking for real time, this is particularly valuable, say, in a crisis — and that could be a brand crisis rather than an existential corporate crisis. Finding out what people are thinking, what the sentiment is, in close to real time can be ridiculously valuable in that kind of situation. But you also have to remember that the people engaging in this kind of activity on social media are not necessarily the majority of your target market. A lot of them may not do this at all, or they’re passive consumers of the content posted online, not active creators of it. And what do they think? If you put all of your eggs in the basket of what the people producing this content are saying, and conclude that this is going to drive the perception of your brand and drive sales — yeah, I think that’s very risky. As an element of a marketing program, of a communication program, it can be useful. But some organizations, from what I’ve seen, are treating this as the be-all and end-all of their online marketing. I’m not sure that’s wise. Publishing thought leadership pieces on LinkedIn still has some value, right?
Neville: That’s — I hesitate, because I’m trying to remember. I read something just the other day arguing there’s no value at all in doing that on LinkedIn, and it gave some reasons. I don’t remember them — obviously not compelling enough to make me recall the article or the author. But I’ve seen so many different opinions and takes on where this is all going that it’s hard to settle on one, which I think makes this quite an interesting landscape for discussion, really, to get some good debate going. It’s interesting, Hootsuite’s look at the role AI is playing in all of this. They talk about a paradox around generative AI: the report says AI is now table stakes for social media production, but I’m wondering if that actually makes social media less interesting. If everyone has the same tools generating ideas, writing captions, and editing video, doesn’t that push everything towards the same tone and style? Doesn’t it make everything just bland as hell?
Shel: Yeah, slop, right? I think there’s two ways to look at it.
Neville: Well, not necessarily slop — not necessarily slop, just the sameness across the board.
Shel: Yeah, I think that’s how some people look at slop. But there are a couple of ways to look at this. One of them is just the data. According to the Hootsuite report, 30% of consumers say they’re less likely to choose a brand if they know the ads were created by AI. We saw this, by the way, with the Super Bowl, where there was backlash aimed at the ads that were generated with AI. So there’s a practical takeaway for communicators: use it for infrastructure and not as the voice of the organization. The moment your messaging starts to sound like it was spat out by a machine, you’ve sacrificed the very thing that social media was built for, which is trust.

But I want to take this to a bit of a darker place than what was covered in this report. This was a post by Ethan Mollick on LinkedIn, where he shared a perspective that I think should make us stop and think. He’s concerned that public forums are being systematically overrun by machine-generated content. He said that while established voices can remain in broadcast mode, we’re losing that serendipitous discovery — the ability to find smart human insights in the comments on LinkedIn, and presumably on Facebook and the other networks.

And I’m not being an alarmist here. The University of New South Wales did a study and found that in a simulated social media campaign, more than 60% of the content was generated by competitor bots, surpassing 7 million posts. A peer-reviewed analysis last year estimated that about one in five accounts in public conversations were automated. And we’re seeing the emergence of “AI overwhelm,” a label for the phenomenon where the sheer volume of machine-generated noise leads to a systematic breakdown in trust.

Now consider Moltbook. You remember Moltbook. This is the platform populated by AI agents from whatever that tool is called this week — it’s gone through so many name changes — the one you could set up on a computer to deploy agents. People were running out and buying Mac Minis to run it because they didn’t want it on their own computers with access to their bank accounts and the like. Somebody built Moltbook, where the agents deployed by this thing could interact with each other while we, the humans, sat back and observed. And Professor Mollick wondered whether LinkedIn was going to become Moltbook with a LinkedIn logo.

We’re building the infrastructure for bot-to-bot communication, and we should be asking whether human-to-human communication can survive at all. Think of all the shifts in what the Hootsuite report says we can now use social media for: if in a year it’s been overrun by AI content — bots creating original content in response to posts, and then creating the posts themselves — we’re not going to be able to use it for much of anything at all.
Neville: Yeah, that’s taking it to quite a dark place, Shel. I don’t think that’s the likeliest outcome, of course. So let me circle back to the first question then. When we started this conversation, we asked this one then. So are we still thinking about social media, generally speaking, as a place to publish content, which is what we currently do, right? Or is it becoming something much more powerful — a tool for understanding behavior, culture, and trust as it unfolds online? How do you see it?
Shel: We’ll see. The answer to that is yes; it can be both things. I would not recommend that brands and companies stop publishing content, especially when people are starting to use these tools for search. I mean, man, you talk about TikTok being used for search. I do it. When I’m looking for a new place to have breakfast, because I love a good breakfast, I’m not going to the usual places. I’m not going to Yelp. I’m not going to Google. I’m going to TikTok, because I want to see somebody’s video of an awesome breakfast they had at some restaurant a mile and a half from me that I’ve never heard of. So if you want to be discovered that way, you’d better have the content there. But we have to start using it in these other ways as well, for as long as that’s a viable thing to do.
Neville: I agree with that.
Shel: Well, in that case, that’ll be a -30- for this episode of For Immediate Release.
The post FIR #505: Social Media’s Big Shift appeared first on FIR Podcast Network.
17 March 2026, 9:21 pm - 22 minutes 45 seconds
FIR #504: When Companies Blame Layoffs on AI — and Leave Communicators Holding the Bag
Shel and Neville examine a troubling trend gaining momentum across corporate America: AI washing — the practice of attributing layoffs to artificial intelligence when the real reasons are more complex. The discussion centers on two high-profile cases. Block CEO Jack Dorsey announced a 40 percent workforce reduction, crediting AI tools, despite three prior rounds of cuts that had nothing to do with AI and pushback from former employees who say the moves look like standard cost management. Meanwhile, Oracle is cutting thousands of jobs, not because AI replaced those workers, but to fund a massive data center expansion that Wall Street projects won’t generate positive cash flow until 2030. A new Anthropic labor market study adds context, finding limited evidence that AI has meaningfully displaced workers to date, though hiring of younger workers in exposed occupations may be slowing.
Neville and Shel dig into what this means for communicators who may be asked to craft layoff messaging that overstates AI’s role.
Links from this episode:
- Labor market impacts of AI: A new measure and early evidence | Anthropic
- What is AI Washing and Why Has It Been Linked to Layoffs?
- Block employees react to mass layoffs, impact of AI
- The US economy lost 92,000 jobs in February and the unemployment rate rose to 4.4%
- The Curious Case of the Block ‘AI Layoffs’
- Jack Dorsey Is Ready to Explain the Block Layoffs
- Oracle Plans Thousands of Job Cuts in Face of AI Cash Crunch
- Is AI really driving an increase in layoffs?
- Why Today’s AI-Driven Layoffs Are Becoming Tomorrow’s Rehiring Crisis
The next monthly, long-form episode of FIR will drop on Monday, March 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Neville: Hi everyone and welcome to For Immediate Release. This is episode 504. I’m Neville Hobson.
Shel: And I’m Shel Holtz. Let’s talk about something today that should be keeping every communication professional up at night. We’re in the middle of a wave of layoffs where AI is being cited as the cause and the data suggests that in many cases that explanation is somewhere between incomplete and pure fiction. That puts communicators in a genuinely difficult position. You may be asked to help craft messaging that you have good reason to believe is misleading.
Shel: That’s a violation of codes of ethics. The stakes here are pretty high. We’ll explain all of this and what communicators should be doing about it right after this.
Shel: Let’s start with the numbers. The Oracle layoffs broke just last week, amid news that the U.S. economy lost 92,000 jobs in February. Into that bleak backdrop, two major stories landed almost simultaneously. First, Block. Jack Dorsey announced that the company is cutting its staff by 40 percent, more than 4,000 people. The reason, according to his letter to shareholders: intelligence tools. Dorsey framed this as inevitable and even proactive, saying, and this is a quote, “I think most companies are late. Within the next year, I think the majority of companies will reach the same conclusion.” But here’s where it gets complicated. Block had already undergone three rounds of layoffs since 2024 before this one, and in a previous round, Dorsey claimed they were being made for performance reasons. AI, as far as I can tell, wasn’t mentioned at all, despite the fact that the same tools he now credits were already available and being used by employees. Former employees and analysts pushed back pretty hard on Dorsey’s assertions. One former Block employee wrote that the cuts “read like standard prioritization and cost management, not AI-driven reinvention.”
Shel: And another analyst was blunter, saying the vast majority of these cuts were probably not due to AI. Then, as I mentioned earlier, there’s Oracle, which is planning to axe thousands of jobs among its moves to handle a cash crunch. That cash crunch was created by a massive AI data center expansion effort. Now, this is a different kind of AI-related layoff. It’s not AI replacing these workers, but rather, we’re spending so much money building AI infrastructure that we can’t afford to keep paying these people. Wall Street projects Oracle’s cash flow will go negative for the coming years before all that spending starts to pay off in 2030. That’s workers losing their jobs not because AI took their role, but because their employer’s betting the company on AI and needs the payroll budget to fund that bet. Both cases are AI related. Neither is quite the story it appears to be on the surface. And that is the problem. And it has a name: AI washing. The term describes companies blaming layoffs on AI when the circumstances may be more complicated, like attributing financially motivated cuts to future AI implementation that actually hasn’t happened yet. A Forrester report argues that a lot of companies announcing AI-related layoffs don’t have mature, vetted AI applications ready to fill those roles.
Shel: Molly Kinder at the Brookings Institution makes the investor logic explicit. Calling layoffs AI driven is a very investor-friendly message, especially compared to admitting that the business is ailing. Even Sam Altman, whose company is arguably the reason any of this is happening in the first place, acknowledged all of this. He said, “There’s some AI washing where people are blaming AI for layoffs that they would otherwise do.” Now the data complicates the picture even more.
Shel: Anthropic just released a major labor market study. It’s worth your attention. They find limited evidence that AI has affected employment to date. Their new “observed exposure” metric, which tracks what AI is actually doing in real workplaces, not what it could do theoretically, shows that workers in the most exposed occupations have not become unemployed at meaningfully higher rates than workers in AI-proof jobs. There’s one exception worth watching: suggestive evidence that hiring of younger workers, particularly ages 20 to 25, has slowed in those occupations exposed to AI. The good news in the Anthropic research also serves as a warning. The reason we’re not seeing mass displacement yet is largely because actual AI adoption is just a fraction of what AI tools are feasibly capable of performing. The gap between theoretical capability and real-world deployment is wide today, but it is closing.
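To illustrate the spirit of that observed-exposure idea, here is a toy comparison. This is not Anthropic’s methodology or data, just made-up numbers showing the gap between what AI could do in an occupation and what usage logs show it actually doing.

```python
# Made-up task counts per occupation: (feasible for AI, observed in
# usage logs, total tasks). Not Anthropic's data or method.
occupations = {
    "copywriter":        (18, 9, 20),
    "financial analyst": (14, 4, 20),
    "plumber":           (2, 0, 20),
}

for job, (feasible, observed, total) in occupations.items():
    theoretical = feasible / total
    actual = observed / total
    print(f"{job:17s} theoretical {theoretical:4.0%} | "
          f"observed {actual:4.0%} | gap {theoretical - actual:4.0%}")
# The wide gap is why displacement has been limited so far, and why the
# picture can change as adoption catches up with capability.
```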
Shel: So what does this mean for communicators? Well, here’s the ethical minefield. When executives AI-wash their layoff announcements, they may be revealing that they view AI as a means of eliminating jobs, and that could lead workers to distrust, or even sabotage, their employers’ future plans for AI adoption. Employee concerns about job loss due to AI have already skyrocketed from 28% in 2024 to 40% in 2026, and 62% of employees feel leaders underestimate AI’s emotional and psychological impact. Anti-AI sentiment is real and growing, and every time a company uses AI as a convenient cover story for financially motivated cuts, it feeds that sentiment, making the actual work of responsible AI adoption harder for everyone.
Shel: For communicators who are handed layoff messaging that overstates AI’s role, the guidance from ethics researchers is worth holding on to. Rather than vague claims about AI transformation, companies should provide specifics. How many positions are directly attributable to automation of specific functions? And how many reflect shifting market conditions and strategic realignment? Investors can handle complexity and so can employees. The Block situation is a canary in the coal mine, but perhaps not in the way Jack Dorsey intended. It’s a warning about what happens when the narrative outruns the reality, when the story told to shareholders diverges from the story experienced by the people being let go. Our job as communicators isn’t to make bad news sound good, it’s to make complicated truth navigable. That truth has never been more important or more difficult than it is right now.
Neville: A lot to unpack in that, Shel. I mean, absolute tons. I was curious about one thing you mentioned, I think when you were describing that research: the phrase “AI-proof jobs.” What are those? I don’t think anything is AI-proof.
Shel: Well, I think a gardener is an AI-proof job. A drywall installer is an AI-proof job. These are the ones that an AI can’t do. Even if you look at the definition that they’re throwing around for artificial general intelligence, it’s any cognitive task that a normal person could perform at their computer. And there are a lot of jobs. I mean, my son-in-law is a plumber and AI is not going to take his job anytime soon. So those are the AI-proof jobs.
Neville: That could be a good topic for a separate discussion, I think. I’ve got some different views. Anyway, one thing that struck me in everything you said is how often AI is framed as inevitable, as Jack Dorsey noted, almost like the technology made the decision. But organization leaders are choosing how and when to deploy AI. So do you think those leaders risk removing their own accountability when they say “AI made us do this”?
Shel: I think they do, even though that accountability is to the shareholders and they're performing what they think the shareholders will like. What they risk losing is their credibility with shareholders, who may find out down the road that they haven't actually replaced these jobs, that they didn't have the AI tools or agents in place to perform the duties of the people they let go, or that they haven't somehow rejiggered their workflows so that AI is picking up the slack for the people who are gone. But in the meantime, you can see the other reasons they may have wanted to reduce the workforce, whether it's on the balance sheet or competitive headwinds or whatever it may be. I've seen arguments in various forums that Dorsey actually did this for other reasons, and you can point to what those reasons might have been. And just blaming AI—as somebody said, the analysts and the investors like hearing that you're cutting your workforce while maintaining your productivity and your current levels of production. That's great; we want to see more of that. But if you dig under the surface, if you look under the covers, you find out it probably isn't true.
Neville: Yeah, I think that's a big issue, frankly—the misrepresentation of this as a matter of course. And I'm just reflecting a bit on one of the webinars that Sylvie Cambier and I did for ABC recently on ethics and AI. This features in that discussion, in terms of dishonesty, misrepresentation, almost disinformation. So another thought I had was, if we accept that some of this is AI washing—and in fact, I'd say a lot of it is. It's a great phrase, AI washing, a great term. Wikipedia has a page on it with a huge description, but the short version is that companies make overinflated claims about their use of AI. Which is basically what you said in your intro.
Shel: I love it, yeah.
Neville: So my question is, what would responsible communication about layoffs actually look like? If communicators are faced with perpetuating incorrect claims, should organizations be separating out the reasons? In other words, providing even more information—automation, restructuring, investment—rather than rolling everything into an AI transformation story? Would that be better, do you think?
Shel: I think it would. And I think it's incumbent upon the communicator not just to push back, but first to ask questions. You're asking me to communicate this layoff as AI-related. We're laying off this many people. Can we demonstrate that those functions are being replaced by AI systems that are ready to do those jobs? Or is there another way we can prove that we no longer need these people because of AI? Is there anything that people are going to see in our performance, in our numbers, in the competitive landscape that they could point to and say, look, that's going on too—doesn't that have something to do with these layoffs? And to point out the risks of simply attributing everything to AI.
Shel: There's the risk of getting caught when you haven't actually replaced those people with AI functions—and you have people inside who are more than happy to blow the whistle on these kinds of things, especially when they fear their jobs are next up for elimination—and there's what it does to the internal situation. As I pointed out, people who see that jobs are being taken because of AI are going to think: well, I'm certainly not going to support more AI in this company; I'm going to do everything I can to undermine that. So I think it's our job to push back and to make sure that what we're communicating is accurate. If there's a way we can communicate what leadership is looking for, great. If not, I would push back and say, we cannot do this. Do you want to be engaged in crisis communication in three months? Because that's where we'll be.
Shel: I mean, it’s what Dorsey’s doing now. He’s going around doing damage control interviews. So is that what you’re interested in? Damage control down the road? You know, we’ve been communicating layoffs for decades and decades and decades without having AI to blame it on. And somehow we managed to survive. Let’s just tell the truth.
Neville: Yeah, yeah. It strikes me as a very peculiar situation, in the sense that if you look into it, the facts are quite clear. Why would you obfuscate the picture and wrap it all up into something you can blame the technology for? So I guess you've answered the question I had next for you, which is this: if companies keep using AI as the explanation for layoffs—and it's truly extraordinary what you quote from Dorsey in particular, where he effectively blames AI even when it's not the full story—do you think that risks creating a broader backlash against AI inside organizations? Could the messaging itself end up making AI adoption harder?
Shel: I think so. As I mentioned, employees are not going to be tripping over themselves with enthusiasm to get this all working. It's like training your own replacement. But I also think there's the risk of alienating customers. Investors and analysts are one thing. But customers who sympathize with employees, or who see this callous disregard for their welfare, may look for companies that are taking a more humanistic approach to all of this, even as they're implementing AI—looking for ways for AI to partner with employees. I've always been kind of surprised—maybe I'm not so surprised—that organizations see this as a way to keep doing exactly what they're doing now with fewer people, as opposed to a way to effectively add capacity without hiring more people: to do more than what they're doing now, to produce more, to innovate more. It seems to me that what Wall Street rewards is growth. And if you maintain your head count and seriously look at the adoption of AI as a way to grow the company, you're going to grow by leaps and bounds.
Shel: And it seems what most organizations are happy doing is what we’re doing now with fewer people. I don’t understand how that is something that Wall Street would want to reward beyond the fact that they’ve always rewarded layoffs.
Neville: Yeah, yeah. So to me, communicators are being placed in an ethical bind, almost an impossible situation. They sit between executive messaging, employee experience, and public scrutiny. And when those perspectives diverge, which is clearly what's happening in some of these organizations, the communicator becomes the person responsible for navigating the ethical tension. I wouldn't want a job in a company like that, I have to say, if I were the communicator.
Shel: I think it’s gotten a little easier simply by virtue of the fact that AI washing is now a recognized thing. As you noted, there’s a Wikipedia page on it. There are articles now on it. And I think it’s easy to put data together on this and take it to leadership and say, is this how you want to be positioned? Is this how you want to be perceived? This is what’s going to happen if you pursue this policy, if you pursue this course.
Shel: And I think that’s an argument that’s easier to make than something nebulous like employees are going to reject this, and we might get caught down the road when people look at what’s actually going on in our books.
Neville: So clearly that didn’t happen in Jack Dorsey’s company then.
Shel: No, I don't know that AI washing was as well recognized then.
Neville: Well, no, I mean, a communicator taking findings to senior management saying, “You sure you want to do this?” I guess that didn’t happen. Or maybe they haven’t got a communicator.
Shel: Well, maybe they don’t, or maybe the communicators are just joined at the hip with Dorsey and the leadership team.
Neville: It’s possible. So what about Oracle? You mentioned Oracle. They’ve got to lay off thousands of people. They’ve got a cash crunch from the massive data center expansion effort. Something else to add to the mix, I suppose. Did they succeed in buying the movie studio and CBS and CNN, all that stuff being wrapped up?
Shel: Well, that's Larry Ellison's son. David, the founder's son, is with Skydance, which is the company he owns. So it's just a familial connection—it's not something Oracle's actually investing any money in. But here's my question. If you're cutting thousands of jobs in order to have more cash available to spend on data center expansion—which, by the way, is facing immense resistance now in the U.S.; it's going to be incredibly hard to get the permits to build new data centers, given the public blowback—then what did those thousands of people do for a living? I imagine they did customer support. I imagine they did development of Oracle's database products and cloud products.
Shel: And who's going to be doing that now? With that many jobs being cut, I would expect to see a degradation in customer service, and subsequently in customer satisfaction. And I don't understand how that serves Oracle, which is not going to be back to positive cash flow for five years. So I tend to think this is a really stupid decision. You should be doing what the AI labs are doing and going out and finding new investors to support this expansion if you think it's going to be worth all that, as opposed to cutting the jobs of the people who do the work that your customers of today rely on.
Neville: What Oracle will probably do, though, is have you talking to an AI when you phone customer support. You're probably doing that anyway, but this will increase exponentially; the technology is improving all the time. And I think many people won't object to talking to an AI if it doesn't act the way we expect AIs to act in that kind of role—if it acts more human-like. So it's an upside-down time.
Shel: No doubt. Yeah.
Neville: I think the issue that bothers me is how people dress this up. People in positions of leadership in companies should know better—and maybe they do know better, but they're being pressured, whether by themselves or by the circumstances of their roles and the kind of company they work for, to deliver the results that those above them are demanding. And so they are party to this kind of contract, it seems to me. And yet, isn't it inevitable that this is going to happen and that we're going to see more and more of it? What do you reckon?
Shel: I imagine that we are, because leaders see other leaders and other companies doing it. And they see Wall Street, at least for now, rewarding it. And they’re going, hey, we could do that. Doesn’t make it right. Doesn’t mean it’s the long-term best answer for the organization. And I think ultimately—we talk about trust in just about every episode at some level—and this is going to erode trust. It’s going to erode trust among your employees. It’s going to erode trust among your customers. And at some level, you risk being caught AI washing.
Neville: Not good.
Shel: And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #504: When Companies Blame Layoffs on AI — and Leave Communicators Holding the Bag appeared first on FIR Podcast Network.
10 March 2026, 11:42 pm - 17 minutes 20 seconds
FIR #503: When Your Boss Throws You Under the Bus
The president of the International Olympic Committee didn’t have an answer to a question posed to her at a press conference on the final day of the 2026 Winter Olympics. Or to another question. Or to yet another. Ultimately, she suggested, on camera, that someone on her communications team should be fired. In this short midweek FIR episode, Shel and Neville look at the fallout, what both the president and the head of communications might have done differently, and the possible long-term consequences.
Links from this episode
- IOC president condemned for public attack on comms team
- LinkedIn Post from Jasred Meade, MPS, APR, MPRCA
- Olympics boss Kirsty Coventry threatens to fire team mid-press conference in awkward moment | LinkedIn Post
- Olympics boss Kirsty Coventry threatens to fire team mid-press conference in awkward moment | Yahoo Sports
- DW News (@dwnews) on X
- Kirsty Coventry profile on LinkedIn
- Mark Adams profile on LinkedIn
- Sky News report on YouTube
- Kirsty Coventry earns praise following first Olympics as IOC president
The next monthly, long-form episode of FIR will drop on Monday, March 23.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript:
Shel Holtz: Hi, everybody, and welcome to episode number 503 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: And I'm Neville Hobson. Something happened at the Winter Olympics last month that set off a fierce reaction across the communication profession, and it wasn't about sport. During the final daily press conference on the 20th of February, IOC president Kirsty Coventry was asked a series of geopolitical questions: questions about Russia and doping, about comments linked to Germany and 2036, about senior sporting figures engaging in wider political activity. On more than one occasion, she said she wasn't aware of the issue and visibly looked towards her communication team. At one point, she went further and suggested that perhaps someone should be dismissed. That's the moment that shifted this from a routine press conference stumble into something much bigger. We'll explore it right after this.
What makes this especially interesting is the context. Just a few days after the press conference, Coventry was widely praised for her leadership at the Milan Cortina Games. Reporting from the AP on the 23rd of February described her first Olympics as IOC president as an overall success, noting the intense political pressure she faced and the way she engaged directly with athletes during the Ukraine controversy. That controversy centered on Ukraine's skeleton racer, Wladyslaw Hraskiewicz, who competed wearing a helmet memorializing athletes and coaches killed in the Russian invasion of Ukraine. The gesture drew scrutiny and diplomatic tension around whether it breached Olympic neutrality rules. Coventry chose to meet him face to face at the track and later became visibly emotional when discussing the issue with international media. That moment was widely interpreted as defining her emerging leadership style: empathetic, athlete-facing, and willing to engage directly.
The games were even described as giving a taste of tougher challenges ahead as the IOC looks towards Los Angeles 2028. In other words, this wasn’t a presidency in crisis. There was goodwill, momentum, a sense of forward motion. And then one live moment reframed the entire narrative. Being caught off guard isn’t unusual. No leader can know everything. No briefing pack can anticipate every question.
But that’s not the story. The story is what you do in that moment. Do you acknowledge the gap and commit to follow up? Do you bridge to principle? Do you calmly say, I’ll get back to you once I’ve reviewed the details? Or do you turn publicly and imply that your team has failed you? The communication reaction was swift and pointed. LinkedIn filled up with variations of the same message. Accountability sits with the principal. Praise in public, criticize in private. You can’t outsource responsibility.
But I think there’s a deeper discussion here. Yes, leaders must own the podium. Yes, public blame undermines trust. But this also raises questions about executive readiness, about the contract between leadership and communication, and about how fragile reputational capital really is. Those geopolitical questions were not obscure. They were predictable fault lines around an organization operating in an intensely political global environment. Were holding lines prepared? We don’t know. Was she fully briefed? Possibly. Did she ignore it? Also possible. And that’s where this moves beyond a single awkward exchange.
In high-performing organizations, the relationship between a leader and their communication team is built on shared risk. The team prepares the ground, the leader absorbs the pressure. If something goes wrong, it’s owned collectively and dealt with internally. The world stage doesn’t create dysfunction, it amplifies it. So rather than pile on, I think this is worth examining as a case study.
Here’s what intrigues me. This wasn’t a leader already in trouble. She had just been praised for navigating intense political pressure, engaging directly with athletes, and projecting empathy and maturity in a complex environment. There was goodwill in the bank. And yet one live moment—a few sentences, a glance towards her team, a suggestion someone might be dismissed—reframed the entire narrative. That tells us something about how fragile leadership capital really is.
So, Shel, let me start here. When a leader appears unprepared on a global stage like that, who actually owns the failure? Is it primarily the principal? Is it the communication team? Or is it a breakdown in that relationship we often describe as the unwritten contract between leader and comms? And perhaps even more provocatively, at what point does a communication team have a responsibility to push back and say, you’re not ready for this podium?
Once a story becomes internal blame rather than the issue itself, you’re no longer managing the moment. The moment is managing you. So what do you make of all this, Shel?
Shel Holtz: Well, I think it's a two-way street. I think both sides failed here. Coventry is the IOC president and has been for nearly a year. She should have been aware of these issues from a governance standpoint. It's not just a question of media prep.
Neville Hobson: Mm-hmm.
Shel Holtz: As one commentator put it, it’s not the PR team’s job to inform the president of things she should know simply from a management perspective. So I don’t think there’s a problem with piling on here a little bit, but throwing your team under the bus publicly is not the approach to take. I think there are some lessons that I hope Coventry learns here. She turned what should have been a really unremarkable closing press conference into a global story about dysfunction at the IOC. The press conference actually became the story and that’s the exact opposite of what any comms professional looks to achieve with this type of press conference.
The right move from Coventry would have been to acknowledge the question, note that she'd want to look into it, and then commit to following up. That buys her time without revealing the gap between what she knows and what she should know. And she could have gone behind closed doors afterwards, and she and Mark Adams, the man in charge of the communications team, could have had whatever conversation she wanted about briefing protocols. But when a leader publicly humiliates their comms team, it poisons that relationship and makes future counsel less likely—the exact opposite of what effective communication requires.
Neville Hobson: Yeah, I agree. There's been a lot of commentary—everyone with an opinion has been sharing one, on LinkedIn in particular. PRWeek had a really good assessment, which is where a lot of this kicked off. But what you've outlined is what she should have done, basically, and I totally agree. One additional comment I'd add is about demonstrating executive ownership of the issue overall. She could have said something like: ultimately, the responsibility sits with me. That would have dampened everything down and changed the tone of the entire story. She didn't do that.
But there’s also, I think, worth pointing out what the PR team should have done. And maybe they did do it. Let’s add that caveat. We don’t actually know who did or didn’t do what.
Shel Holtz: She may not have read a briefing book that was given to her, right? That's exactly right.
Neville Hobson: Or she may or may not have been given one. That's the other element: we don't know. So this conversation gets more interesting if we look at it from that point of view.
So the issues raised weren't obscure. And I agree with you that the geopolitics of it all is in the daily news. If she read newspapers, she would have seen a lot of this discussion, which would have been a kind of alert to her. So the issues were not obscure. Russia and doping. The geopolitical symbolism of 2036 in Germany—including one of the questions she got: why was the IOC merchandise website selling t-shirts with emblems of the 1936 Games in Nazi Germany? She said she wasn't aware of that kind of thing. Infantino and Trump—the dynamic between the president of FIFA and Trump. Predictable lines of questioning.
Shel Holtz: Okay.
Neville Hobson: A robust prep document—what might that have looked like? Well, likely hostile questions. Again, briefing her on the kind of questions she might get. Top-line holding statements. Thirty-second bridges. “If you don’t know” language. If that didn’t exist, that’s a team failure. If it did exist and she ignored it, that’s a leadership failure.
Shel Holtz: Yeah, well, she said, “I was not aware” on three separate occasions in one press conference. I can’t remember ever hearing about anything like that before. And every time she said it, it compounded the damage from the last one.
Neville Hobson: Yeah, she did.
Shel Holtz: And even if she wasn’t briefed, a seasoned executive would have bridged to what she could say: the IOC’s position on political neutrality, their commitment to anti-doping integrity, the process for evaluating future host city bids. She could have leaned on what she did know and then offered to get back to people with more specific answers later, but she just kept revealing what she didn’t know. This is a textbook case for why pre-briefing documents and Q&A anticipation matter and what you would expect from your comms teams. And before any high-profile press event, they should have—and again, we don’t know whether she was or not—but she should have gotten a briefing book that covered not just what you want to say, but what you’re likely going to be asked, with a—
Neville Hobson: Precisely.
Shel Holtz: With Germany 2036 on the centenary of the Nazi games, a sitting IOC member appearing at a Trump political event, and an NYT investigation into Russian doping—these are all foreseeable questions at a closing Olympic press conference. You know, I don't think Mark Adams gets to skate here. He's a 17-year veteran of the IOC. He used to work at the BBC, ITN, Euronews, and the World Economic Forum. He's earning 420,000 pounds a year for this job. When the Germany 2036 question came up, his response was simply that he hadn't seen it either. And I've got to tell you, for someone at that level and that salary, during the final press conference of the Olympic Games, I think it's an understatement to call that a significant lapse. The media monitoring function alone should have flagged those issues.
Neville Hobson: Yeah, I agree. I mean, there’s a ton of questions I’ve got here that might be rhetorical now, actually. But nevertheless, let me rattle these off and see what you think. Can a comms team ever fully protect an unprepared leader—that’s one. Where does responsibility truly sit? And that’s something that could occupy the rest of this podcast discussing that one alone.
But here's a question I wonder about: is this part of a broader trend? Some people—notably on LinkedIn, so let's just put that out there—have hinted at, if not explicitly noted, an increase in executive blame-shifting, diminishing personal accountability, and a culture of scapegoating the communication function. Is that anecdotal or systemic? That's a rhetorical question, I suppose.
Should comms professionals refuse to front leaders who are not ready? It takes a brave person to do that, and maybe Mark Adams isn’t that person, I don’t know, but that’s pretty provocative. Is there a professional duty to push back from the comms people? At what point do you say you’re not ready to do this live? Is this a case study in leadership under geopolitical complexity? The Olympics isn’t sport alone—it’s politics, it’s war, it’s symbolism, it’s national legitimacy. A modern IOC president must be politically literate at the highest level.
So there’s lots there. I guess you could summarize it, I suppose, in the sense: when a leader is caught off guard on the world stage, who owns the failure? Because let’s just go back to what actually happened. She was caught off guard—not once, twice, three times at least. And one of those three times, the last one, is when the bus emerged under which she threw the PR team by saying someone needs to be dismissed.
So when a leader is caught off guard on the world stage, who owns the failure—the principal or the communication team? Question.
Shel Holtz: Well, I think you can look at it both ways here. I think people who are looking to shift that blame to the PR team need to recognize that it’s not like she had no experience. She has governance experience. She chaired the IOC Athletes Commission. She served on the executive board. She held a ministerial portfolio—
Neville Hobson: Yep.
Shel Holtz: —in Zimbabwe. But this suggests that either she hasn't fully adapted to the demands of the presidency or her team hasn't adequately supported the transition. They need to get on the same page, because I think one of the bits of fallout from this is questions about the IOC's ability to handle the bigger issues coming up at the LA 2028 summer games.
Neville Hobson: Mm-hmm.
Shel Holtz: They’re going to be exponentially more complex politically. And if the team can’t handle media monitoring and an executive briefing during a winter games, how are they going to manage the geopolitical minefield of an Olympics in Trump’s America? Adams has already been linked to potentially leaving the IOC for a role with UK Prime Minister Keir Starmer. He was one of Starmer’s best men at his wedding. So there’s another layer of instability, which I guess means if she needs to fire someone, he’d be a good candidate.
Neville Hobson: Yeah, there'd be a vacancy there, wouldn't there? So some of the comments on one of the many LinkedIn posts I saw do talk about—let's call it—a possible deeper misalignment between leadership and communication at the IOC. People are speculating—and this is all speculation, I would hasten to add. Did this show that there was a pre-existing tension between her and the comms team?
Shel Holtz: Yeah.
Neville Hobson: I mean, I watched the video of her being asked those questions and there was no hesitation in her glance to the comms team where they were sitting, I guess, to say, I wasn’t aware of this. And she did it again. And then the third time it was, someone needs to be dismissed here. So was there some kind of tension? Did the team try to brief her and just get ignored? Is this a case of leader-comms misalignment long in the making? I mean, these are all unknowns. I’d like to think not.
She'd only been in the job a year. She had received all this praise because of how she had handled all these other things going on. That doesn't mean the speculation is wrong, though. Something clearly happened, and we witnessed those jaw-dropping moments when she said "I wasn't aware of this" three times and basically said someone should be fired. So overall the tone is not good. The optics are dreadful.
I’ve not seen any further reporting on this since the initial flurry. It’s all kind of—
Shel Holtz: Well, you know, if your executive gets surprised at a press conference, I think that’s a process failure that can be fixed. But if your executive blames you for it on camera, I think that’s a leadership failure that may not be fixable. You know, the relationship between a communications professional and their principal depends on mutual trust, honest counsel, understanding that you protect each other publicly and hold each other accountable privately. And that’s the opposite of what happened here. So I don’t know whether there was tension before this happened or not, but there is certainly tension now and I’m not sure it can be repaired. And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #503: When Your Boss Throws You Under the Bus appeared first on FIR Podcast Network.
2 March 2026, 6:58 pm