34 minutes, 10 seconds
FIR #513: Why Communications Must Build the Narrative Code for the Agentic Age
Neville and Shel dig into a provocative Harvard Business Review article that argues most marketing teams are structurally unprepared for the speed and scale that agentic AI now enables. The bottleneck, the authors contend, isn’t the technology; it’s the operating model. Neville and Shel connect the piece to conversations FIR has been having for the past year: AI as orchestration rather than automation, professionals shifting from supervisors of tasks to directors of systems, and 2026 increasingly framed as “the year of the agent.”
At the center of the Harvard piece is the idea of a “brand code” — a machine-readable knowledge system that lets specialized AI agents continuously create, adapt, test, and optimize marketing in real time. Communications, Shel argues, urgently needs its own equivalent: a “narrative code” containing executive voice profiles, message hierarchies, sensitive-topic guardrails, and escalation rules. Whoever builds it first, he warns, will own the agentic stack, and if marketing gets there first, comms will be stuck with a system never designed for crisis, controversy, or stakeholder complexity. The episode also includes some concrete examples and early thoughts on Hermes, Wispr Flow, and where human judgment still has to win.
Links from this episode:
- Redesigning Your Marketing Organization for the Agentic Age
- The Year of the Agent: What it means for the future of communications
- Google Summary: The Year of the Agent: What it means for the future of communications
- If you work in PR and you’re unsure how AI agents will help you, this should help.
The next monthly, long-form episode of FIR will drop on Monday, May 25.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel: Hi, everybody, and welcome to episode number 513 of For Immediate Release. I’m Shel Holtz.
Neville: I’m Neville Hobson. Over the past couple of years, we’ve heard countless conversations about how AI is changing marketing and communication. Most of those discussions tend to focus on tools — faster content creation, better personalization, workflow automation, synthetic media, analytics — all the things AI can supposedly do more quickly and at greater scale than humans. A new article in Harvard Business Review published last week takes the discussion somewhere much bigger.
Its argument is not simply that AI will improve marketing productivity. Its argument is that AI may fundamentally redesign how marketing organizations themselves operate. The article is called “Redesigning Your Marketing Organization for the Agentic Age,” and the authors argue that most marketing teams are structurally unprepared for the speed and scale AI now enables. The reasoning is interesting; we’ll look into this in a minute.
AI has already accelerated software engineering and product development dramatically. Products, updates, campaigns, and features are being developed and shipped much faster than before. But marketing organizations, they argue, are still largely built around sequential workflows, siloed teams, approval chains, meetings, handoffs, and coordination-heavy processes. So even when AI speeds up individual tasks, the organization itself still moves slowly.
In other words, the bottleneck isn’t necessarily the technology; it’s the operating model. What struck me reading this article is that in many ways it feels like the continuation of conversations we’ve already been having on FIR over the past year. About a year ago, Shel demonstrated some of the early agentic AI capabilities we were beginning to see emerge — systems that could move beyond simple chatbot interactions and actually take actions across workflows, tools, and platforms.
At the time, it felt experimental, slightly futuristic, and maybe just a glimpse of where things might be heading. Since then, we’ve repeatedly returned to related themes on the podcast: AI as orchestration rather than just automation, and managers becoming directors of systems rather than supervisors of tasks, to name but two. Recently, the wider communications industry has been framing 2026 as the year of the agent, a fundamental shift from generative AI, which creates content based on prompts, to agentic AI, which acts autonomously to achieve long-term goals. The rise of such autonomous agents requires a focus on agentic orchestration, with professionals acting as AI engineers who guide, manage, and audit these digital employees. As we discussed on this podcast last year, communication departments will adopt a hybrid structure where humans focus on high-level strategy and creativity while AI agents handle high-volume procedural communication tasks at machine speed.
We’re already seeing a marked impact on marketing and public relations. The Harvard piece explains how companies such as HubSpot and AWS have begun putting this model into practice. They say organizations are achieving measurable gains, with marketing materials adapted up to 98 times faster, unit costs reduced by 80%, and click-through rates increased up to 17 times. Research from BCG has demonstrated these benefits at scale.
Organizations embedding agentic AI into marketing workflows, the research has found, can achieve up to a threefold increase in ROI, campaign speed, and content volume. That’s why this Harvard article feels so interesting to me. It doesn’t contradict any earlier conversations; it complements them. It takes many of the ideas we’ve been discussing conceptually and places them inside a concrete organizational model. The authors propose something they call an agentic marketing organization — essentially a system where humans and AI agents work together continuously across multiple layers of activity.
At the center of this idea is what they describe as a brand code: a machine-readable knowledge system containing brand strategy, customer insights, messaging frameworks, business rules, governance structures, and operational guidance that both people and AI systems can understand and act upon. Once that foundation exists, specialized AI agents can continuously create, adapt, test, distribute, optimize, and report on marketing activity in real time. It’s a vision of marketing that starts to look less like a department and more like an operating system.
But what really caught my attention wasn’t the technology itself so much; it was the shift in the role of the marketer. Because beneath all the platform architecture and workflow diagrams is a much deeper question: if AI increasingly handles execution, what becomes the real value of marketers and communicators?
The article argues that value shifts away from production and toward judgment — setting intent, evaluating outputs, interpreting signals, shaping governance, and guiding how the system evolves. And that raises some fascinating questions for communicators. But first, Shel, your demo of those early agentic capabilities was about a year ago now. As I mentioned earlier, it felt experimental and slightly futuristic then. So what’s changed since then?
Shel: It feels like ancient history now. If I were to look at that, I’d probably shake my head and say, “my God, that’s pretty primitive.” The way it worked was, it took a screenshot of every site it visited and then acted on the screenshot. So it was a very slow and tedious process. The video that I shared, I edited out all of the waiting time for it to go through all of this, because it showed you everything. And those days are long gone.
That was clearly a demo. I don’t remember which of the AI models offered that — I think it was Anthropic — but it was just tedious and not all that functional. It did what it was supposed to do in the end, which was to create a spreadsheet with the information I’d asked for. It was some open-source spreadsheet that it used.
I ran a similar exercise just last week using Claude Cowork. And this was for a piece somebody in our sustainability department wrote. It was about two projects that had achieved world-first certifications for zero waste, which is kind of a big deal in construction: the industry is one of the biggest contributors to landfills and the like.
So I’m looking to place this article. And what I did was, I told Claude Cowork that I wanted four subagents working: one to look at construction and AEC publications — that’s architecture, engineering, and construction; AEC is the category for the industry. Another one was going to look at sustainability publications. And there was one other, but I also had it look for podcasts where the authors of this report might be invited for an interview.
I said, what I want you to do is find the publications and podcasts based on their previous content that are most likely to be interested in something like this, and then create a spreadsheet with the name of the outlet. And of course, divide it into these categories — right? AEC, podcasts, sustainability-focused publications, and the like. Mainstream media was the other category. But I also wanted the URL, I wanted the name of the appropriate person to pitch the article to. And then, based on what that person has written — that particular reporter or editor — I wanted a pitch that was personalized to that person.
And I came back in about half an hour, and there was a spreadsheet ready to go. And I had started acting on it. I don’t copy and paste the pitches; I go and take a look at that reporter’s writing and review the pitch and then make some tweaks to it. But my God, can you imagine how much time that would have taken for me to go out and do this on my own by way of research? That would have been hours and hours.
And instead the agents went out and did it, and then Cowork assembled all that information into a spreadsheet. I was doing other stuff while it was doing that. I wasn’t sitting and watching, because there frankly wasn’t that much to watch. I mean, you could watch the agent tell you, “now I’m going to go look at this.” But, you know, that’s kind of boring. Let it do its thing.
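To make the pattern concrete: the episode doesn't include Shel's actual prompts or any code, but the workflow he describes (parallel, category-specific research subagents whose output is merged into a single spreadsheet) might look something like this minimal Python sketch. Note that run_agent() is a hypothetical stand-in for whatever agent platform you use, since Claude Cowork's internals aren't exposed here, and the task wording is purely illustrative.

```python
# A minimal sketch of the fan-out/fan-in pattern described above: one research
# task per outlet category, run in parallel, merged into one spreadsheet.
import csv
from concurrent.futures import ThreadPoolExecutor

CATEGORIES = ["AEC publications", "sustainability publications",
              "mainstream media", "podcasts"]

TASK = (
    "Find {category} likely, based on their previous coverage, to be "
    "interested in two construction projects earning world-first zero-waste "
    "certifications. For each outlet return: name, URL, the right "
    "reporter/editor/host to pitch, and a pitch personalized to that "
    "person's recent work."
)

def run_agent(prompt: str) -> list[dict]:
    """Hypothetical call into your agent platform; returns rows of findings."""
    raise NotImplementedError("wire this to your agent framework of choice")

def build_media_list(outfile: str = "media_list.csv") -> None:
    # Fan out: one subagent per category, running concurrently.
    with ThreadPoolExecutor(max_workers=len(CATEGORIES)) as pool:
        results = pool.map(
            lambda c: (c, run_agent(TASK.format(category=c))), CATEGORIES)
        # Fan in: merge every category's rows into a single CSV.
        with open(outfile, "w", newline="") as f:
            writer = csv.DictWriter(
                f, fieldnames=["category", "outlet", "url", "contact", "pitch"])
            writer.writeheader()
            for category, rows in results:
                for row in rows:
                    writer.writerow({"category": category, **row})
```

As Shel notes, the human step still matters: the pitches come back as drafts to check against the reporter's actual work, not copy to paste.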
Neville: Yeah. So a question I have related to this, I suppose, putting it into one practical area: people might think of this in the context of the interaction you have with prompts, the old-fashioned way of doing things that is still prevalent. The agents went off and did their thing, you came back to what they produced, and it saved tons of time. So how did you gain confidence, let's say, that it was accurate, that there were no hallucinations, no errors? Or is that not the issue anymore with this kind of development?
Shel: I believe that hallucinations would still be an issue. It’s still a model at some level doing this work. I mean, it’s Claude with Claude Cowork. I did install Hermes over the weekend. We’ll talk about that in a bit, but it’s an agent platform, an agent framework, and you create the agents to do things.
For example, I created one over the weekend that I set up to be a weekly job, and it’s going to go out and look at construction industry news to find things based on our areas of expertise where I work, where we have subject matter experts and thought leaders, to find the top three articles that are ripe for newsjacking. If you remember David Meerman Scott’s newsjacking — things where we can get some stuff out there quickly.
Neville: Yeah.
Shel: And take advantage of the fact that this is something that people are looking at and gain some traction over it. So every Monday at eight, it’s going to run this job, and by 8:30, 8:45, it’s going to give me the results. And all of this is through Telegram, or WhatsApp, or whatever app you choose to use to interact with the bot. It still starts with a prompt. The difference is that you’re not prompting a question in order to get an answer; you are telling it what task to perform.
And in the case of the one that I set up on Hermes, it's now a weekly task. And the interesting thing about Hermes is that it learns as it goes. It continually self-improves as it learns more about you and the kinds of tasks you're asking it to perform. So I'm looking forward to seeing how that goes. So far I just have the one agent running there. But it's still a prompt at the end of the day.
And in fact, I used — I think it was Gemini — to help me craft the prompt to get the best results I could. I said, here's the list of requirements, turn it into the best prompt that Hermes will understand and act on most effectively. And it did that. It did a great job. I'm very satisfied with the results so far; I ran one test of it, and I liked it.
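For listeners curious what the scheduling side of this looks like, Hermes's internals aren't documented in the episode, but the underlying pattern (a prompt run as a recurring job whose results are pushed to a chat app) can be sketched with the open-source Python schedule library. Here run_agent() and send_message() are hypothetical stand-ins, not Hermes APIs.

```python
# A minimal sketch of a weekly agent task like the newsjacking scan described
# above, using the `schedule` library (pip install schedule).
import time
import schedule

PROMPT = (
    "Scan this week's construction-industry news and return the top three "
    "articles ripe for newsjacking, matched to our areas of expertise, with "
    "a one-line angle for each."
)

def run_agent(prompt: str) -> str:
    """Hypothetical call into your agent framework; returns its findings."""
    raise NotImplementedError

def send_message(text: str) -> None:
    """Hypothetical delivery hook, e.g. a Telegram or Slack bot message."""
    raise NotImplementedError

def weekly_newsjacking_scan() -> None:
    send_message(run_agent(PROMPT))

# Every Monday at 8:00 the scan runs; results land in chat soon after.
schedule.every().monday.at("08:00").do(weekly_newsjacking_scan)

while True:
    schedule.run_pending()
    time.sleep(60)
```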
Neville: Yeah. So Claude Cowork is kind of at the heart of this. I'm experimenting myself with Claude Cowork — with Claude generally, Cowork sort of. Nothing like what you're doing, I hasten to add. But one of the things I'm very impressed with about Claude is the way you can tell it things about you — who you are, what you're doing, your preferences for how it conducts what you're asking it to do. Unlike ChatGPT, for instance, where you have to include in a new prompt stuff you've already told it previously because it doesn't remember it in the same way, Claude retains that context.
So your setup — I mean, I guess what I’m asking basically is, when you set this up, did it require that level of preparation that is probably desirable to do that? Or was there anything special that you had to do that was outside of what you would normally do with Claude Cowork?
Shel: Well, for the byline piece I was looking to pitch, the one I set the subagents loose on in Cowork, I did explain in the prompt what my goal was and what the organization was. I had it look at our company website to get a good sense of who we are and what our areas of specialization are. I gave it some additional information.
But then something I do with all of these now — not every prompt, if I’m just in Claude or ChatGPT, but especially with the agents, with deep research projects and things like that — I’ll say, “ask me questions before you go out and do this.” And it usually asks some very salient questions. It’s very good at deducing what it doesn’t know. And the answers factor into the results you get, which is really interesting to me — that it can, if you ask it to, understand where there are gaps in the prompt that it could use this information in order to deliver really excellent and pertinent results.
Neville: Got it. So thinking about our listeners and how you've explained all of this — is it credible and within the reach of literally anyone wanting to do this? Or do you need some kind of mental preparedness or technical knowledge? Could anyone just dive in and start something?
Shel: Well, I don’t know about diving in. With Hermes, for example, I watched a couple of YouTube videos. I watched one that actually walked me step by step through the installation process and then had a whole section on use cases. I’ve watched more. There’s one on 99 use cases for Hermes that I watched, which was pretty good. So it helps you get in that mindset. But in terms of, can anybody do this?
In the world of communications, anybody better be able to do this, because you’re not going to be sent out to look for these sites and assemble a spreadsheet anymore. You need to be able to orchestrate these agents. And that means knowing how to prompt it to get the results that you want. And that’s different, again, from prompting ChatGPT for an answer to a question, right? You are giving it a task, and it could be a recurring task that somebody on your team does.
Now, in communications, I still don’t see this replacing a communicator, because every communicator is going to have the human-only or human-required elements of the job. I cannot see one of these conducting, say, an employee focus group. There’s so much that we do. I mean, you know, in public relations, the word “relations” always stands out to me, and maintaining those relations is not something a bot can do.
But in terms of what that Harvard Business Review article was talking about, you can swap marketing for communications. I think it's even more true in comms: comms workflows are more coordination-heavy than marketing's. We have legal, we have HR, we have the C-suite. We have to make sure everything's consistent with the brand and maybe get some brand representation approvals. They're the owners of the channels that we have to deal with.
If marketing needs a brand code — and this was a concept I really liked in that article — communications needs a narrative code. You know, a machine-readable positioning, machine-readable executive voice profiles, message hierarchies, sensitive-topic guardrails, rules for escalating things that emerge that need to be taken up a step in the hierarchy or maybe up to the C-suite or the CEO. I don’t know anybody who’s built a narrative code.
Whoever builds this first in your organization, by the way, is going to end up owning the agentic stack. If marketing builds it first, we in communications are going to inherit a system that wasn't designed for crisis communication, wasn't designed for controversy or reputation damage or stakeholder complexity — it was built for marketing. And that's the one we're going to end up having to work with. You probably remember, Neville, in the early days of social media, Richard Edelman was out there beating the drum that PR needed to own social media before marketing and advertising got their hands on it, because they would turn it into something inauthentic, right? It's the same thing here.
Neville: Yeah. Yeah.
Shel: I think we in comms are going to have to build out the narrative code and let marketing take advantage of the agentic stack that we’ve built. But we need to be in the room when those decisions are being made.
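Since, as Shel says, nobody has built a narrative code yet, any concrete example is necessarily speculative. But one way to picture a machine-readable narrative code is as structured data that both people and agents can load and enforce; every field name and value in this Python sketch is hypothetical.

```python
# A purely illustrative sketch of what a "narrative code" might contain:
# positioning, voice profiles, message hierarchies, sensitive-topic
# guardrails, and escalation rules, in a form agents can check before acting.
NARRATIVE_CODE = {
    "positioning": "Builder of choice for sustainable commercial construction",
    "voice_profiles": {
        "ceo": {"tone": "direct, plain-spoken", "avoid": ["jargon", "hedging"]},
    },
    "message_hierarchy": [  # ordered: lead with the first theme that applies
        "safety of people on site",
        "client and community impact",
        "sustainability leadership",
    ],
    "sensitive_topics": {  # guardrails: agents must not improvise here
        "active litigation": "no_agent_output",
        "layoffs": "human_review_required",
    },
    "escalation_rules": [
        {"trigger": "reputational risk detected", "route_to": "comms lead"},
        {"trigger": "crisis keyword match", "route_to": "CEO office"},
    ],
}

def may_respond(topic: str) -> bool:
    """An agent checks the guardrails before generating anything."""
    return NARRATIVE_CODE["sensitive_topics"].get(topic) != "no_agent_output"
```

The design point is that the guardrails and escalation rules, the parts built for crisis and controversy, are exactly what a marketing-first brand code would be unlikely to include.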
Neville: So that's another challenge for communicators, and I can see it. I think the overall focus of the Harvard piece, as I mentioned in the introduction, is on the organization as a whole. And there are examples where that's already at work — I quoted a couple, and then there's the BCG research, which I found quite interesting. But restructuring at the organizational level is a way off yet, I would say, for most companies. The individual actions, though, such as the experimentation you're doing, are definitely right in front of us, literally right now.
And it prompted a thought in my mind, looking at this overall picture, about some assumptions in the Harvard piece that I think are worth examining for a minute. The article assumes that strategic judgment remains human while execution becomes agentic. Okay, then — though history suggests automation rarely stops neatly where people would like it to, or where they expect it to.
So perhaps a question that's relevant to address in this context is: if AI systems, agentic ones among them, increasingly assist with strategy too, which is what they'll be doing, where exactly does human value migrate to? That's a broad question, but for communicators specifically, how would we address that one?
Shel: I think, first of all, if you’re going to look to the agentic system to assist with the development of the strategy, I would sit down and map out a game plan for that. I wouldn’t just say, “hey, you know the company I work for, come up with a strategy for us.” I would say, first of all, what is this strategy…
Neville: Ha ha ha.
Shel: …going to be designed to achieve? What do we know about the direction the company’s going and decisions that have been made? I would certainly use it to go out and say, research the marketplace and research our competitors and identify, to the extent that you can, what their strategies are. I would develop the strategy myself, but I would give it to the AI to stress-test.
And by the way, some of this is agentic and some of it is just querying a chatbot. I mean, let's just take crisis communication as an example. No CEO is going to accept, in a boardroom, an answer from an AI system telling a leader something they don't want to hear; that problem is amplified by the agentic stack. But if we go in as the crisis counselor and say, "Look, I know you're not going to like this. Here's my judgment. And I've got this information that came from the weekly analysis of sentiment in the marketplace," then the AI can bolster your argument. It can't replace your argument. You're going to walk into that boardroom as a human and make a case.
Same thing, maybe, with focus groups. When passive signals, in social media and message boards for example, get gamed, sitting in a room with 10 employees becomes the truth that the dashboards — the agents out there analyzing sentiment — get checked against. So when a dashboard says morale is great and the focus group says it isn't, I'm going to pay attention to the focus group. I'm going to pay attention to those 10 people in the room before I listen to an agent that says, "Well, we've been analyzing all the sentiment in Slack and email, and everything is just dandy."
So I think it’s the same with strategy. I think I would never abdicate strategy…
Neville: Mm.
Shel: …but I could certainly develop it faster and be more confident in its viability by using agents and chatbots.
Neville: Yeah, I agree. And it makes me think of what's coming — which is already here — in ways that lead to even greater integration, I suppose that's the right word. I'm thinking of what you said at the beginning of this segment: you don't hand the whole thing over to the AI and say, "hey, go and develop a strategy." You would do…
Shel: And you know there are people who are, right?
Neville: Yeah, they will. They will. But it seems to me that this is really, in a sense, the fulfillment of an expectation — a promise — from artificial intelligence tools like this, that you would have a conversation with it in the same way you would with a human being who might be an external consultant or a colleague who’s a subject matter expert or whoever it might be, that you would explore with that individual: we’re developing a strategy for next year, let’s look at how we’re going to do this.
You set the framework for how you might start that conversation with your AI assistant. And as you said, this is not specifically agentic; it's the whole spectrum of what the tools are. And you set it on course to go and research this, which is probably what an agentic tool will do. And that to me is the excitement of where this is going: that you can get to that stage. I think that would then address some of the skepticism, and indeed the alarm bells rung by some in organizations when they see unfettered technology going all over the place or being asked to do stuff. This, though, makes it credible and gives it some legs.
Which leads me, I guess, to possibly the final question here. As you've explained, this is light years ahead of the demo you gave a year ago, which gave a signal, a strong sense of what's possible and where this could go. We've seen that fulfilled. It is eminently possible. And you don't need to be a rocket scientist, as you might have expected a year ago. This is doable. And the more people experiment with it in simple ways, like the real-world example you've outlined, the more they'll want to do it.
So the question, then, is: okay, fine, a year on, you've explained something you're doing that delivers value quite readily every Monday morning, let's say. So what's next, do you see, in terms of the developing technology and the developing value people will get from it, that would probably accelerate its uptake? How do you see it?
Shel: I think that the next thing we’re going to see is an evaluation of every role and where an agent will fit. This is something we went through a couple of years ago. Ethan Mollick was talking about it in his book, Co-Intelligence, before we were even talking about agents — talking about inviting AI to the table and figuring out where you could work it into your workflows. But it was still the chatbot. It was still the, “I’m going to ask you a question and you’re going to deliver some kind of answer.”
I think we need to do that again and look at agents. What tasks are we performing, and which ones can we hand off to agents? And I think there are probably roles where this is going to be even easier to do, where you’re going to see more opportunities than in communications. I mean, you know, engineering, for example, I think is wide open for this sort of thing.
So I think that's what's next. We'll hand off certain (and I'm going to call them) mundane tasks, because this is not the high-level strategy and the human-touch stuff that is so important in so many jobs. And as we hand these off, and it now takes an hour instead of a week, what does that do to the rest of our workflows? What does that do to our organizational structure?
One of the things I was reading over the weekend was the expectation that middle managers are going to be a thing of the past, because what do they do? They handle the flow of information up and down between the people who report to them and the people they report to. They handle a lot of mundane tasks that might now be handed off to an agent. According to someone noteworthy — I don't remember who was saying this; it might've been Dario Amodei at Anthropic, but I honestly don't remember for sure — middle managers can, by and large, be replaced by agents.
So what does that do to organizational structure? It certainly flattens it. But then, for those executives who have a lot of people reporting to them, what part of that reporting structure can be handed off to an agent? So I think this is sort of a cascading situation, where everything we do leads to a reconsideration of something, which leads to, well, what else can we do with the agents, which leads to further reconfiguration.
I think that’s what we’re looking at. And I don’t think it’s going to happen overnight, because, as you alluded to, the technology may be moving fast, but organizations tend not to, particularly when it comes to issues of structure and governance.
Neville: I think this is so exciting, to be frank — the idea of the changes we can see coming, which will be painful for many. But isn't structural change a constant in our lives with all of this? It's something we should embrace emotionally and logically: we can control this. And I don't mean control the tech — we can't do that. But we can control the risk and the benefits of something like this by not merely reacting to something that's coming, but by, in a sense, embracing it and experimenting with it and learning it. And as you said, if we don't do this, the marketing guys will. And we can't have that. I think…
Shel: And then we’re stuck with theirs.
Neville: I think it’s something to really pay attention to. So this has been a useful, interesting discussion, Shel, getting your thoughts on this in particular. So yeah, I think we’ll come back to this conversation unquestionably at some point in the future.
Shel: No doubt, as we see developments. In fact, as I say, I just started working with Hermes over the weekend, and it was an eye-opener, and I expect, as I work with it more, I’ll have more thoughts about it and my thinking will evolve. I should point out that I did install this on a personal virtual server, not on a company computer. I’m not taking that kind of risk. And it’s my personal account.
One other thing I thought I’d mention — you talked about the idea of having a conversation with the AI, and I think that’s becoming more of a focus. And I’ll give you two quick examples. One I already mentioned is with Hermes: you don’t go to a terminal and engage with it or go to its website. You do this through WhatsApp or Slack or, in my case, I’m using Telegram — just like I’d be having a conversation with a person in that same app.
But on, I think it was Thursday, I did a half-day webinar offered by the Marketing AI Institute, Paul Roetzer's organization, on AI for writing. It was very interesting. Chris Penn was among the speakers; he did a great job, as always. But one of the folks there talked about having the conversation with AI for real: do it with your voice, not with your keyboard. And she talked about a tool called Wispr Flow, which I haven't used yet, though I have installed it on my personal computer, my laptop, and my phone. It's an AI dictation tool. Have you…? It's pretty cool. I mean, in any tool you're using, you just click it and talk. And it doesn't go directly into the chat box; it interprets it…
Neville: Yeah, I’ve been using it. Yeah. Yeah.
Shel: …and then puts the best prompt based on what you just said into the box. And that’s what you use to prompt the model. And I’m looking forward to giving that a try. And it’s called Wispr Flow, by the way, because if you’re in the office in an open-space format and you don’t want to disturb the people next to you, it understands what you’re saying when you whisper to it.
Neville: Yeah, it is interesting. I've got a hurdle to jump with it, though, which is getting accustomed to speaking what I want done, and how, rather than typing it. And I haven't got over that hurdle yet; that's limiting my use of it. So I'm reverting to the, well, I'm more comfortable typing, I can type fast and all that kind of stuff. But in reality, this is faster than that. And it is…
Shel: Yeah, same.
Neville: I recognize the benefits of it. I can see this. Not everyone will be used to it, though. This is not dissimilar to the argument we could have about voice notes. I know people who love voice notes; I don't. And I know more people who don't like them. It could be a generational thing, I think to myself. But it's part of the communication landscape, so you need to get accustomed to these developments.
Shel: Yeah. And I hear about voice notes being preferred by some reporters who are being pitched, because it’s evidence that it wasn’t AI slop that’s pitching them.
Neville: Yeah, yeah, yeah. Yep, yep, yep.
Shel: And that’ll be a 30 for this episode of For Immediate Release.
The post FIR #513: Why Communications Must Build the Narrative Code for the Agentic Age appeared first on FIR Podcast Network.
11 May 2026, 7:41 pm - 19 minutes, 6 seconds
ALP 304: Stop making sacrifices your agency doesn't need you to make
Most agency owners think they’re doing their team a favor when they quietly absorb the painful, tedious, or time-consuming work. They’re likely not. In this episode, Chip Griffin and Gini Dietrich look at the sacrifices owners make on behalf of their teams and why those sacrifices often create more problems than they solve.
This isn’t about the occasional tactical sacrifice, it’s about the systemic ones: the conscious decisions to absorb entire categories of work because you’ve decided your team would find them too difficult, too unpleasant, or too much of a burden. Gini admits she’s guilty of it herself, sharing that a new COO sat her down with a list of tasks she’d been handling and told her she shouldn’t be doing any of them. The jobs weren’t glamorous, but they weren’t the owner’s job either.
Chip extends this into two areas where owner sacrifice tends to do the most damage: new business development, where owners keep proposals and pitches entirely to themselves thinking they’re protecting team time, and org chart design, where flat structures are usually not a deliberate choice but the result of owners absorbing management responsibilities no one else wanted. Both patterns block team growth and overload the owner at the same time.
Gini describes a practice she returns to every quarter, sorting her task list into three buckets — things only she can do, things she enjoys but probably doesn’t need to do, and things she absolutely should not be doing. The third list gets delegated immediately. Chip puts it like this: for everything on your plate, ask yourself why you are the one doing it. If there isn’t a good answer, stop doing it. [read the transcript]
The post ALP 304: Stop making sacrifices your agency doesn’t need you to make appeared first on FIR Podcast Network.
11 May 2026, 1:00 pm
Collaboration, Cohesion, Community: Panel Will Explore Three Points of the New Communication Compass
In the second of our special Circle of Fellows discussions, Brad Whitworth will moderate a panel to discuss the remaining three chapters of the book, “The 7 Cs of the New Communication Compass.” The book’s author and editor, Dianne Chase, will join Brad, along with the IABC Fellows who authored the three chapters:
- Zora Artis, who wrote the “Cohesion” chapter
- Cindy Schmieg, author of the “Collaboration” chapter
- Shel Holtz, who penned the “Community” chapter
The panel will be live-streamed at 5 pm EDT on Thursday, May 28. Participants in the live stream can ask questions and share comments, observations, and experiences, and become part of the discussion. If you’re not able to join us, you can listen to the audio podcast later or watch the YouTube replay.
About the panel
Dianne Chase helps organizations and leaders harness the power of strategic communication to navigate crises, build trust, and drive positive change. With over two decades of experience in journalism and corporate communications, Dianne has developed a unique approach for training and consulting clients that combines crisis management expertise with the art and science of business storytelling. Dianne is an award-winning media, journalism, and strategic communication professional with profound expertise in communication disciplines, most notably crisis communication, issues and reputation management, media training, and executive communication. She is one of two people in the world accredited in the powerful GENIUS Business Storytelling methodology, created by international communications thought leader, Gabrielle Dolan. She is former chair of the International Association of Business Communicators, and author/editor of The 7 Cs of The New Communication Compass.
Although Zora Artis began her career outside the communications field, she has had an outsize impact on the profession since entering it more than 20 years ago as an account director and then strategic planner with branding and integrated marcomms agencies. Since then, she has led her own brand and communications consultancy and served as CEO of a 20-person creative, digital, and strategic communication firm. In 2019, she formed her current management consulting practice, bringing together strategic alignment, brand, and communication expertise. She has received five Gold Quill awards. Her significant contributions to the profession and the body of knowledge include her original research with IABC colleague Wayne Aspland on strategic alignment and the role of communications and leadership – the first substantial research effort for the reconfigured IABC Foundation – and co-authoring a subsequent white paper, “The Road to Alignment,” supported by 27 senior communicators from five continents. Zora has also researched the correlation between strategic alignment and experiences and the impact on stakeholder value and brand, which led her to develop her own proprietary Alignment Experience Framework. She has also examined gender equity, perceptions, and bias in organizations, and wrote a chapter on this topic for the Quadriga University e-reader, Women in PR. Since joining IABC a decade ago, she has made an impact as a volunteer, including roles as chair of the IABC Asia Pacific Region and IEB director; she currently serves as chair of the 2022 World Conference Program Advisory Committee. A certified company director, she introduced proper risk oversight to the board's processes as chair of the IABC Audit and Risk Committee. Zora has been honored with the 2021 and 2015 IABC Chair's Award for Leadership and was named IABC's 2020 Regional Leader of the Year. She is also a Strategic Communication Management Professional, a Fellow of the Australian Marketing Institute, and a Certified Practising Marketer.
Cindy Schmieg is an award-winning strategic communicator. In her 30+ years of corporate, agency, and consulting experience, she has focused on making the communications function strategic within an organization. Cindy now teaches online in the Communications Master's Degree program at Southern New Hampshire University. She has served in many IABC leadership roles and is today a member of the IABC Audit/Risk Committee and the Pacific Plains Region Silver Quill Award Committee, as well as assisting on the IABC Minnesota Annual Convergence Summit.
Shel Holtz, SCMP, ABC, is senior director of Communications at Webcor, a commercial general contractor and builder based in San Francisco. He is a member of the Global Communication Certification Council and will become vice chair of the Council in June 2026. Shel has written six communication-themed books, and his seventh, “On the Same Page,” a practical framework for implementing internal communication strategies, will be published later this year. He co-hosts the 21-year-old communication-focused podcast, “For Immediate Release.” Shel served for six years on IABC's executive board and has also been president of the IABC Los Angeles chapter, along with other IABC roles. He has led communications at two Fortune 400 companies and had his own consultancy for more than 21 years before joining Webcor in 2017.
Brad Whitworth, ABC, SCMP, IABC Fellow, is a pre-eminent thought leader, lecturer, and author in organizational communication. He has led global internal and executive communication programs at HP, Cisco, Hitachi, PeopleSoft, AAA, and Micro Focus. He holds an MBA from Santa Clara University and undergraduate degrees in journalism and speech from the University of Missouri. Brad lives in California wine country, where he grows Pinot Noir on his property. A former broadcaster, Brad has made more than 300 presentations to executives, communicators, and university classes worldwide. He is a past board chairman of the International Association of Business Communicators and a Fellow of the association. He is one of the authors of The IABC Handbook of Organizational Communication and the new IABC Guide for Practical Business Communication: A Global Standard Primer. He chaired the Global Communication Certification Council in 2021.
The post Collaboration, Cohesion, Community: Panel Will Explore Three Points of the New Communication Compass appeared first on FIR Podcast Network.
9 May 2026, 8:42 pm - 23 minutes, 35 seconds
CWC 113: How AI impacts PR agencies and solos (featuring Karen Swim and Michelle Kane)
In this episode, Chip is joined by Karen Swim and Michelle Kane of the That Solo Life Podcast for part one of a special crossover episode exploring the practical effects of AI on agencies, solos, and the communications industry.
Karen and Michelle share their view that AI is no longer optional. Practitioners who resist it risk falling behind, while those who embrace it can dramatically expand their capabilities. The conversation goes beyond basic content creation, exploring how AI can elevate strategy, reinvigorate professional skills, and free up time for deeper, more creative thinking.
Chip, Karen, and Michelle also discuss the importance of treating AI like a new employee — providing context, voice, and guidance to get the best results — and address common concerns around ethics, privacy, and copyright. They encourage communicators who haven’t revisited these tools recently to dive back in, as the technology has advanced rapidly and shows no signs of slowing down. [read the transcript]
The post CWC 113: How AI impacts PR agencies and solos (featuring Karen Swim and Michelle Kane) appeared first on FIR Podcast Network.
6 May 2026, 1:00 pm - 23 minutes, 5 seconds
ALP 303: Preparing for your agency's group presentations and pitches
In this episode, Chip and Gini open with the analogy of Canadian doubles, the tennis format where two players face one. If your team outnumbers the prospect, you don’t project strength, you project awkwardness. But the conversation goes well beyond headcount.
A little preparation goes a long way in making sure every seat on your side is justified. You’ll want to match expertise to whoever the prospect brought, which requires actually knowing who’s coming. Gini described a recent pitch where she reverse-engineered her attendee list based entirely on who was showing up from the prospect’s side. That’s not logistics, it’s strategy. And whoever is in the room during the pitch needs to be the person doing the work after the contract is signed — not a handoff to a team with no context and no ownership.
Both Chip and Gini are emphatic that the meeting itself should not feel rehearsed like a school play. Agency owners who show up prepared to have a real conversation before pitching solutions will stand out. Harder for many owners is knowing when to keep quiet. Interjecting while a team member gives an imperfect answer undermines their confidence, signals to the prospect they can’t be trusted, and makes them rely on you. The debrief after the meeting is where the coaching happens. [read the transcript]
The post ALP 303: Preparing for your agency’s group presentations and pitches appeared first on FIR Podcast Network.
4 May 2026, 1:00 pm - 31 minutes, 28 seconds
FIR #512: The AI Shift in Executive Decision-Making
While there’s no evidence that business leaders are outsourcing the most important decisions to AI, there are reports that many executives are relying on AI to make many — in fact, most — of their decisions. The implications for communications could be huge.
Links from this episode:
- AI Is Changing More Than Work, It’s Rewiring Executive Decision-Making
- Inside the C-suite: How AI is quietly reshaping executive decisions
- AI and the future of human decision making
- C-Suite Executives Dominate AI Decision-Making as Strategy Becomes Priority
- Decision-Making by Consensus Doesn’t Work in the AI Era
- How AI Is Transforming the Way Executives Lead
- Leadership at a Turning Point: How AI Is Shaping Executive Decision-Making
- Can AI Make Executive Decisions?
The next monthly, long-form episode of FIR will drop on Monday, May 25.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Neville: Hi everybody, and welcome to episode 512 of For Immediate Release. I’m Neville Hobson.
Shel: And I’m Shel Holtz. The inspiration for this week’s report came from a post Brian Solis wrote recently. In it, he argued that AI isn’t just changing work — it’s rewiring how executives make decisions. Once Brian put that in my head, the trend started standing out in other things I was seeing. I’ll summarize the numbers and what they mean for communicators right after this.
The numbers Brian pulled together are honestly alarming. A Confluent study of UK private sector leaders found that 62% of executives now use AI to make the majority of their decisions. That’s not some — it’s the majority. 70% say they second-guess themselves when AI disagrees with them, and 46% say they rely on AI more than their own colleagues.
On the U.S. side, SAP’s research found that 44% of C-suite executives would reverse a decision they had already planned to make based on AI input. 74% place more confidence in AI advice than in the advice they get from family and friends. Meanwhile, McKinsey reports that 92% of companies plan to increase their AI investment over the next three years, but only 1% — 1 percent — describe themselves as mature in deployment. The money to pay for AI and a sort of blind trust in its abilities are racing ahead of the internal competence to use it. Now, I want to be clear before I go on. I’m not anti-AI, Neville — you know this. Anyone who listens to the show knows I’ve been beating the drum for AI as a tool for communicators and for business in general for a long time.
AI as a thinking partner, a research assistant, a stress-tester for ideas — that’s enormously valuable. But there’s a meaningful difference between using AI to inform a decision and using AI to make the decision. And Brian puts this well: AI is becoming the new executive influencer. The problem is that it hasn’t earned that role, at least not yet.
So let's talk about what this means for those of us in communication, because the implications are everywhere. Start with employee trust. The implicit deal between an organization and its workforce is that the people at the top got there because they have judgment and experience and pattern recognition that the rest of us don't have — or at least they've been able to employ it really well and get noticed by the people who promote you into those leadership positions.
That’s the story leadership tells, and it’s the story employees buy into. Now imagine the all-hands where the CEO announces a major restructuring, and somewhere in the Q&A, or worse, on Blind or Reddit a week later, it comes out that the decision was essentially handed to a chatbot. What happens to confidence in leadership? What happens to engagement? What happens to the social contract that says, follow me because I know where we’re going?
You can’t credibly ask people to bring their full selves to work, as they say, while you’re outsourcing your own judgment to a language model. Now extend that to external stakeholders — investors, customers, regulators, the board. They’re paying, and in a lot of cases they’re paying a lot, for executive judgment. If a strategic call goes sideways — and you know that happens — the explanation that the AI suggested it isn’t going to land well.
It's going to sound like an abdication, because it is an abdication. And from a crisis communication standpoint, "we trusted the algorithm" is one of the worst defenses I can imagine. I don't expect anybody will actually say that, but that doesn't mean it won't come out. Just ask anyone who's worked an aviation incident, a financial services failure, or a healthcare AI misfire. Imagine the reaction when the afflicted stakeholder hears, either from the leader directly or through a third party, "Well, that's the decision the AI told me to make."
And there’s a third implication that I think communicators need to surface inside our organizations: the erosion of dissent. I find this particularly interesting and disturbing.
Confluent found that 65% of leaders say decision-making has become less collaborative since adopting AI. The Harvard Business Review just ran a piece arguing that consensus is dead in the AI era. That may be — but debate isn't consensus. Debate is the friction that exposes bad assumptions. It's what didn't happen at that auto manufacturer — I think it was Volkswagen with their emissions scandal. People there didn't have the psychological safety to dissent against the decisions being made. And in this case, we're not even getting that dissent at the leadership level in some cases. If AI is pushing aside the colleague who would have pushed back, whatever process your organization had for dissent just stops functioning. And when dissent dies, so does the early warning system communicators rely on to spot reputational risks before they get out of control.
So what do we do? A few things. We push for governance — and if you already have a governance model, push to revisit it. Your governance needs clear declarations of which decisions AI informs versus which ones it actually makes. We coach our executives to talk publicly about how they actually use AI, with appropriate humility, before the question gets asked for them.
We build the internal narrative that human accountability is non-negotiable, no matter how good the model gets. And we keep reminding leadership that machine confidence isn’t the same as strategic clarity. Brian’s right: AI is a test of leadership. It’s also, increasingly, a test of communication. Neville?
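One way to make Shel's governance point tangible: the declaration of which decisions AI informs versus which ones it makes can itself be an explicit, auditable artifact. This Python sketch is hypothetical; the decision classes and labels are illustrative, not drawn from any of the surveys discussed.

```python
# An illustrative decision-rights register: default to human accountability
# for any decision class not explicitly granted to AI.
AI_DECISION_RIGHTS = {
    "routing routine media inquiries": "ai_decides",
    "weekly sentiment summaries": "ai_decides",
    "campaign budget reallocation": "ai_informs_human_decides",
    "crisis response strategy": "human_only",
    "restructuring and layoffs": "human_only",
}

def requires_human(decision_class: str) -> bool:
    """Anything not explicitly delegated stays with a human decision-maker."""
    return AI_DECISION_RIGHTS.get(decision_class, "human_only") != "ai_decides"
```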
Neville: Well, just to set my position clear on this, too — I've been a drum-beater for AI as a research assistant, as a useful tool, since GPT first came out. The initial, almost hysterical enthusiasm was tempered over time, but I use the tool every single day in what I do, for work or for pleasure for that matter. So it's something I believe in strongly. But I've always got this thought in the back of my mind: I don't blindly accept anything the AI assistant tells me. If I'm researching something, for instance (say I'm going to make a recommendation, or I'm writing a report, or even something relatively simple like an article for the blog), and I felt I wanted to say this while it's telling me that, that's a simple decision: I'm either going to follow it or not. Typically when that happens, I'll ask it questions to explore that angle further.

But this is something else, what Brian writes about. And The Register — I've read their piece — has tempered it with a bit of hysteria, it seems. It's a very alarmist piece, or argument, you could say. The survey The Register reports on says 62% of leaders of private sector companies use AI to make the majority of their decisions; according to The Register, that's owners, founders, CEOs, managing directors, the C-level leaders of various types of companies (they didn't say sizes). That leads to some of the alarm bells you outlined: what if it gets out that the AI made a decision when something goes south?

You could flip that. What happens if it gets out that an amazing decision that led to the company being massively successful was actually made by an AI?
I think it's inevitable you'd have that sort of focus on it alongside more sane arguments, perhaps. You could argue, well, that CEO is pretty smart to have used an AI to help him do that, as opposed to the other side, which is, gee, we've got to fire this guy, he used an AI and it went wrong. So you've got to put some balance there.

Also, I think you mentioned this earlier, and I agree with you, that there are two angles to every question we might ask about this. One is internal, within an organization, and the other is external. So it is an interesting point. And a pragmatic question came to mind: if a leader changes a decision he or she has made because the AI assistant suggests something different, who actually owns that decision in the end? In fact, whether the leader changes course or not, if the AI said, "I recommend you should do this, and here are the 10 reasons to support that idea," reasons different from what the leader was going to do, and he or she made the changed decision based on that, who actually owns that decision? Or, as I asked myself, is that really the most important question to answer? But it's still a natural one to arise.

And yes, we could run through a long list of the implications in this scenario for the employees of the organization, other stakeholders, and the external audiences. But I have to say Brian's arguments are well made. He sets the scene — the executives are relying heavily on AI. From there, it goes more into the alarm function.
Judgment being reshaped — the implication is that the judgment exercised by a leader is so flaky that it can be reshaped by the AI assistant; in other words, that the individual is willing to let that happen. I wonder whether this is all part of, perhaps, the speed with which people are expecting decisions to be made.

Indeed, something I was doing this weekend (we're on a holiday weekend here, by the way, so I had time) had nothing to do with work. It was a personal thing I was involved with that required analyzing a document with a lot of financial information in it. I asked my AI assistant, in this case Claude, as part of my experiment with Claude, to summarize it and pinpoint the key aspects. It did that in about 20 seconds. And that was enough for me to know what questions I would need to ask next to develop it the way I wanted, rather than starting from scratch. So there's the benefit.

But treating AI like a trusted advisor makes a lot of sense to me. And I'm trying to balance that thought with the alarmist approach: you know, this is a bad thing, all these terrible things are going to happen, and it will all come out. How does that gel with treating AI like a trusted advisor? Although I agree with your point that it hasn't earned the trust in the context of this conversation. So does it mean leaders are willing to override their own decisions or instincts based on AI input? Well, according to The Register, 62% have said they are, I suppose. If that's true, I think we're in trouble already, before this gets any further.

So the real challenge, and I think you'll agree with this, Shel, is not the tech at all. It's the leadership aspect, the human behavioral aspect, as is so often the case. When people talk about the relationship between the human and the AI and talk only about the tech, they miss that it's a human issue. Cut through all the alarm bells and pluck out something that to me is extremely important, and that really doesn't get much airtime in Brian's report at least: isn't this really about the whole point of judgment? Someone in a leadership position in an organization is in that position partly because he or she is very good at exercising judgment in the work they do and the decisions they make. Are we saying that judgment is so fragile that an AI could just overturn all of it in an instant?

I guess my point is that I'm noting this. I listened to what you said. I haven't read all the surveys you mentioned, or the other reports, the Harvard Business Review piece for instance, but I will. I find this literally the worst-case scenario being pitched as, you know, this is upon us, based on The Register, which, by the way, has a (let's call it interesting) reputation over the years for some of its reporting. Though this piece is very factual; their report is actually quite well written.

So what do we make of this, then? Should we be worried? I don't think we should, if we see this as simply something to note and watch as communicators, say in the role you've got in ensuring the CEO isn't going to have his or her judgment completely overwhelmed by an AI. Frankly, I find that idea ridiculous, without implying or even saying that this is the norm. It's the result of surveys, and there's other research also supporting some of this, I think.
But we should put it in perspective: this is, I guess, an inevitable discussion point emerging at this stage in the development of AI in organizations. We've reported recently on this podcast how leaders are taking ownership of the AI deployments in their organizations. That doesn't mean every company is doing this, because they aren't. But we're seeing that, and then we're seeing other reporting we've commented on, that employees and other stakeholders are unhappy with what's happening with AI rollouts in their organizations. So you've got all these mixed messages coming left, right, and center, and now this. It doesn't mean we should — oh my goodness — stop doing this, or have a meeting with the CEO and say, "What are you doing?" No, I don't think so. But we need to note it nevertheless. I don't believe this is something we should all get terribly alarmed about, to be honest, as long as we apply our own common sense to observing what's going on and making sure, with the CEOs and leadership teams we support as communicators, that this isn't happening.
Shel: Well, I don’t think this is the most important issue we’re facing with AI, but I do think it’s a time to worry. Now, I will say I don’t imagine that the CEOs leading the world’s biggest companies — the Jamie Dimons, the Josh Domaros, the Tim Cooks of the world — are using AI to make important decisions. And you have to wonder, because I don’t think they asked, in the survey, what types of decisions these CEOs are making. Are they the game-changing decisions, the most important decisions they have to make, or are they lower-level decisions? We talk about AI taking all that drudge work off the table. Are they allowing the AI to make decisions associated with that kind of work? But I think, as people — and CEOs are people — get accustomed to letting AI make decisions, it might get easier and easier to turn bigger and bigger decisions over to AI as time goes by. With any luck, AI is going to get better and better and may earn that trust. But this would cause that decision-making instinct that leaders have, based on their experience and their judgment and the other things that got them to that level, to atrophy. I mean, atrophy is happening elsewhere as a result of AI among some groups of people: the ability to write your own thoughts down, to craft your own email, to conduct your own research.

As far as CEOs making good decisions with support from AI, I think support from AI is going to become table stakes. I think CEOs who don’t know how to use it are going to become dinosaurs in fairly short order — not necessarily the ones who have the job now, but I don’t think you’re going to see people getting promoted into that position, or hired into it, if they don’t know how to use AI for decision support and the other things we see AI being used for very effectively at leadership levels. And leaders are using AI, according to most of the research I see. I wonder, though, if they start turning more and more decisions over to AI, what is the board or the owner going to see as the value of the CEO? If most of this work — or much of this work, the majority according to that Confluent study — is being done by AI, does that mean the enormous salaries being paid to the people at the top of the organization are going to decline? Or does it mean the role changes altogether, or maybe even ceases to exist in favor of some other model? And by the way, I’d love to see the same question posed to people at other levels of the organization, because this turning of decisions over to AI is probably not confined to the C-suite. I wonder how much it’s happening in middle management. I wonder how much it’s happening among frontline workers. If it’s at the same level, then it’s a company-wide issue that needs to be addressed, because there are going to be some problems that emerge if we don’t — I mean, along the lines of Volkswagen with their emissions scandal.
Neville: That was Dieselgate, as it was dubbed. I mean, it’s a good point you make. I agree. And the point you made earlier, too, is actually a critical question: what kind of decisions are we talking about here? Is it on the scale of, let’s proceed with the merger with this company rather than that one? Or is it something like, should I fit in a stopover in this city on my way to that city to meet with these people and so forth and achieve these things? Is it that? Or is it even something more prosaic? You know, what do I get my wife for her birthday next week? I’ll have my secretary do it — but the AI could tell me. I mean, that’s ridiculous, actually. But it’s significant to know what kinds of decisions we’re talking about, because I’ve not seen it referenced. It’s implying — and people are jumping, obviously, on this — that these are the kind of organization-affecting major decisions that are suddenly at risk because an AI is doing it. I find that ridiculous, to be honest. So we need to know what kind of decisions.
Shel: Yeah. I mean, in my industry, there’s a go/no-go decision on pursuing a project. I cannot imagine, in my wildest imagination, anybody in my organization turning that decision over to an AI. But what if somewhere in the industry they do, and end up pursuing a project that turns out to be more trouble than it was worth? Somebody in the organization at that leadership level, who was involved in the previous discussions, would have known better for various reasons; the AI didn’t have the experience and the insight that that individual had. That could be a financial problem for the organization.
Neville: So the role of the communicator in all of this — and this is not to say that the communicator who works closely with the leadership teams, including the CEO and others in the C-suite, is involved in every single thing they’re doing. No, that’s not realistic, because they’re not. But the communicator’s role in preserving human judgment is the right question to ask. What is it in this context? Where do communicators fit in helping leaders balance AI insights with human insight and judgment and experience? Where do they fit in doing all of that? So the two angles I notice: internal comms — communicators act as sense-makers, ensuring context, ethics and human impact remain part of decision-making. Externally, they help articulate how AI is used responsibly in the organization, which is increasingly central to trust and reputation. That addresses the point you made about when it leaks and it gets out that AI did something. I think increasingly we’re going to see that point — articulating how AI is used responsibly in an organization — because the impact can be huge if rumor builds, which it would do: “the AI is making all the decisions in this company, and why do we need the CEO and all that?” So that’s a good role for a communicator to take on, and to be seen to be the person who is the “yes, but” person and the key advisor to leadership in these things, which strengthens the communicator’s role, in my view. So there are things we can do to address this. If this is as big a problem as these articles make out, I don’t believe it’s something we should lose any sleep over right now in the context of everything else that’s going on in the organization. But nevertheless, we’ve got respected sources — Harvard Business Review, we’ve got Deloitte talking about it, and others that we pay attention to because they’re credible publications talking about this.
Shel: Well, yeah.
Neville: Brian seeded an interesting discussion point, it seems to me.
Shel: Yeah. And let’s look at a very plausible scenario. Let’s say somebody sues the organization over a decision that the CEO made, or that leadership made, that affected them badly, and they feel they deserve compensation for that. In the U.S., anybody can sue anybody for anything. And we have seen some recent lawsuits. Look at the lawsuit that we’re seeing play out right now between OpenAI and Elon Musk.
Neville: Yeah.
Shel: And look at the records, the emails that have been surfaced in discovery. Look at the trials that have been held over lawsuits brought by the parents of children who killed themselves because they got encouragement or assistance from ChatGPT, and who sued OpenAI over that. What they got in discovery was access to the kids’ entire ChatGPT history. So you have a shareholder or a customer who sues the company, and in discovery, all of these things come to light — and that’s how it gets out. So I think even decision support has to be balanced with other input that you can demonstrate in a courtroom influenced the decision that was made, so it doesn’t look like the decision was completely outsourced to the AI. I think that’s an entirely plausible scenario in a lawsuit. So yeah, it’s something we need to consider. And as you say, and as I said, there are things communicators can do about this. One is making sure people are aware of the potential for this situation. And then, as I said, influencing the governance model so that it incorporates decision-making — if it doesn’t already have decision-making and decision support in the governance document, it needs to be added. And then making sure the leaders are talking about how they’re using it, so it never comes up that they’re using it to make a decision of importance in the organization — that it’s focused on using it in very effective ways.
Neville: Yeah. I mean, I think the picture you painted — lawsuits and stuff like that — is very possible, particularly in America, where, as you said, anyone can sue anyone for anything, usually for amazing sums of money, in the billions. So maybe what needs to happen in organizations to address this, among other things, is keeping records. So that, for instance, in an organization that has deployed or rolled out AI tools such as chatbots — let’s say maybe their own version of something based on ChatGPT, whatever it might be — it needs to be known that those tools record everything you do with the AI. Whatever level you are in the organization, there’s a record kept, along with everything else: emails, internal reports, you name it; they’re monitored and tracked in most organizations. And you could add to that picture some of these automated note-takers, like Otter and others, that are commonly used in intrusive ways in Zoom meetings — and you hear stories of private Zoom meetings —
Shel: AI transcripts of Zoom meetings in which the decisions were made.
Neville: — where the outcomes are disclosed or leak out publicly because someone used one of these tools that summarized things, including any recommendations or suggestions that were made. If that gets into a lawsuit via the plaintiffs, it’ll be shown out of context — you can be sure of it. So, right.
Shel: Yeah. And that’s why a lot of organizations are saying to their employees, you can’t record these kinds of meetings.
Neville: Right. But someone will, and it’ll happen. So you need to head that off at the pass, as it were, and have your own structure in place and your communication surrounding it. So, for instance, you have to have very clear narratives around decision ownership, for example, that would help you in crisis situations. That’s the internal focus. Externally, you’ve got to communicate the kind of structure you have for human accountability — not “the algorithm said we should do this.” We can laugh about it, as I am at the moment, but imagine the reality of something like that happening. So I think these are all things that are plausible, I do believe, particularly in the U.S., I have to say — but hey, it could be anywhere. It isn’t complicated to work out a plan for how you would prepare for things like this. But I’d rather look at it not solely as preparing for worst cases, although you need to. It’s just a switch — flip it over a bit and look at the benefits of all of this. And again, it’s not solely the communicator: the individual leader has to be willing to go along with this, has to be willing to share some of the thinking he or she is doing and the discussions with the assistant, whether it’s an AI or anyone else, and to realize that you can’t do this without full transparency, at least to your advisors, including the communicator.
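(A quick aside for readers who want Neville’s record-keeping and decision-ownership point made concrete: below is a minimal sketch, in Python, of what an audit record for an AI-assisted decision could look like. Every name in it, the DecisionRecord class and its fields, is hypothetical and invented for illustration; it is not drawn from any real governance tool.)

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    # What was decided, and who is accountable for it (a role, not a person,
    # so the record supports accountability without singling out individuals).
    decision: str
    owner_role: str
    # AI-produced inputs and human inputs are logged separately, so the
    # organization can later demonstrate, in discovery if it comes to that,
    # that the decision was not outsourced wholesale to the AI.
    ai_inputs: list
    human_inputs: list
    rationale: str
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    decision="Proceed with the Q3 vendor consolidation",
    owner_role="COO",
    ai_inputs=["Assistant summary of the vendor cost analysis"],
    human_inputs=["Finance review", "Legal risk assessment", "Board discussion"],
    rationale="Savings confirmed by finance; legal flagged no blockers.",
)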
Shel: Yeah, absolutely. And we will be back with a follow-up episode when the inevitable headline surfaces of a company that gets in trouble because it’s revealed that the CEO abdicated a decision to AI. Until then — actually, until next week — that’ll be a -30- for For Immediate Release.
The post FIR #512: The AI Shift in Executive Decision-Making appeared first on FIR Podcast Network.
4 May 2026, 7:01 am - 18 minutes 36 seconds
ALP 302: Rethink entry-level hiring to succeed in the AI era
The entry-level talent pipeline is being entirely restructured. If agency owners don’t figure out what role a young professional actually plays in an AI-assisted agency, they won’t just struggle to hire today. They’ll have no one to promote in five years.
In this episode, Chip and Gini dig into what’s happening with entry-level hiring right now, and why the answer can’t be to stop hiring junior staff altogether. The conversation covers why the old model of routine work is gone, what needs to replace it, and why agencies that don’t solve this problem soon are setting themselves up for failure.
The episode opens with an observation from Gini: every presentation she gives to college classes lately surfaces the same anxiety from students. Nobody’s hiring at the entry level because AI can handle the work those roles used to cover — news releases, media lists, social drafts, basic research. How can they find jobs today, and get the on-the-job training they need to move forward in their careers?
Chip frames the problem as a confluence of circumstances: the rise of AI, economic uncertainty, and a higher education system that hasn’t evolved with workforce reality. Colleges discouraging AI use while their graduates are about to enter workplaces built around it is, as he puts it, the same mistake as banning calculators in math class. The students coming in aren’t underprepared because they’re less capable; they’re underprepared because the institutions that trained them haven’t kept up with the times.
Chip and Gini agree that entry-level hires aren’t obsolete, but the role must change. Instead of being the lowest rung of the ladder, new professionals need to come in already functioning like managers — just managing AI tools and processes instead of people. That requires more on-the-job training, better-documented processes and SOPs, and a genuine commitment to learning and development that most agencies still don’t have. There’s more than one upside, though. Better documentation and SOPs don’t just help entry-level hires do their jobs — they make your agency more efficient, reduce owner dependency, and, for those who want to sell someday, significantly improve the value of the business.
Their closing argument: don’t avoid entry-level hiring because the old version of the role is antiquated. Rethink what the role is, invest in the systems that support it, and get comfortable giving junior people responsibilities that would have felt premature five years ago. The alternative is a mid-level talent shortage that will be very hard to fix. [read the transcript]
The post ALP 302: Rethink entry-level hiring to succeed in the AI era appeared first on FIR Podcast Network.
27 April 2026, 1:00 pm - 1 hour 33 minutes
FIR #511: Doing AI Governance Right and Still Getting It Wrong
The policies are clear and well communicated. The guardrails are firmly established. Every last employee has been trained. And someone in your organization still releases a public document riddled with AI-generated errors. What went wrong has nothing to do with technology and everything to do with internal culture and accountability. In this long-form April episode, Neville and Shel examine a company that seemingly took all the right steps yet still had to apologize publicly for a court filing riddled with hallucinated citations. Also in this episode:
- Gartner predicts that, by 2028, 75% of employees will rely on an internal chatbot to get the news that matters to them. How will internal communicators need to rethink their role to ensure everyone knows and understands what they should in order to achieve strategic alignment?
- One of the promises AI executives have made is a leveling of the playing field, giving lower-level employees the opportunity to excel and rise through the ranks. According to one new study, exactly the opposite has been happening.
- PR hacks have been accelerating the pace at which they churn out press releases and pitches. That has raised the bar for what it takes to earn a journalist’s trust (and journalists do still rely on press releases, according to a survey of reporters).
- Apple’s announcement of its CEO transition offers communicators a clinic on how to announce a new top executive.
- “Slopaganda” from Iran has proven remarkably effective, which means it is undoubtedly coming for your company or clients soon.
In his Tech Report, Dan York outlines big changes coming with WordPress’s next update.
Links from this episode:
- Elite law firm Sullivan & Cromwell admits to AI ‘hallucinations’
- Sullivan & Cromwell law firm apologizes for AI ‘hallucinations’ in court filing
- Letter re: In re Prince Global Holdings Limited, et al., No. 26-10769
- Sullivan & Cromwell Just Put Every Firm on Notice. And S&C Advises OpenAI on Safe AI Use.
- An AI Screw-Up By… Sullivan & Cromwell?
- LinkedIn search results for Sullivan & Cromwell AI
- AI, Trust, and the Reinvention of Corporate Communications: Inside Gartner’s 2026 Playbook
- Does your intranet still matter in an AI-first workplace?
- Chatbots in Internal Communications: Game-Changing Wins
- How AI Chatbots Are Redefining Internal Communications?
- The future of internal communication: How AI is changing the workplace
- High earners race ahead on AI as workplace divide widens
- Sarah O’Connor: One early view about AI was that it would share…
- How AI is forcing journalists and PR to work smarter, not louder
- What journalists want from AI-assisted PR pitches
- Journalists Trust Human-Written Pitches Over AI
- Journalists Reject AI-Generated Press Releases As Untrustworthy
- What communicators can learn from Apple’s CEO transition announcement
- Tim Cook to become Apple Executive Chairman; John Ternus to become Apple CEO
- Iran’s Meme War Against Trump Ushers In a Future of ‘Slopaganda’
- Iran’s ‘slopaganda’ team uses AI Legos to flood social media
- Slopaganda wars: how and why the US and Iran are flooding the zone with viral AI-generated noise
- Slopaganda Comes of Age
- Alberta separatist leader unconcerned about influence of YouTube ‘slopaganda’ videos
Links from Dan York’s Tech Report
- WordPress 7.0 Source of Truth – Gutenberg Times
- WordPress 7.0: Real-Time Collaboration Arrives in Core
- WordPress 7.0 Release Party Updated Schedule
The next monthly, long-form episode of FIR will drop on Monday, May 25.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel: Hi everybody and welcome to episode number 511 of For Immediate Release. This is our long-form episode for April 2026. I’m Shel Holtz in Concord, California.
Neville: And I’m Neville Hobson, in Somerset in England. We have six great stories to discuss and share with you this month, and to delight and entertain you, we hope. Topics range from the consequences of not following company guidance on AI use, to chatbots, employee AI use and the workplace divide, using AI to work smarter, what we learned from Apple’s CEO transition announcement, and the future of slopaganda. Lovely word, that one, Shel. Plus, Dan York’s Tech Report.
But first, let’s begin with a recap of the episodes we’ve published over the past month and some listener comments. In the long-form episode 506 for March, published on the 23rd of March, our lead story was on Anthropic’s view that AI will destroy the billable hour, a topic we’ve talked about before on FIR. We also explored digital monitoring of employee work, Gartner’s prediction that PR budgets will double next year, the escalating misinformation crisis, and Cloudflare’s prediction that bot traffic will exceed human traffic by 2027. That’s next year, by the way. On LinkedIn, you’ll find no shortage of posts stridently deriding the notion that anyone should ever use AI to write them. In FIR 507 on the 30th of March, we roundly rejected that idea and looked at the actual trends in using AI for writing. And that prompted some comments from listeners, right?
Shel: Yes, it did. Starting with Susan Gosselin, who actually was with a client of mine back in my consulting days. She writes: there are many types of writing that I think AI is great for — interpersonal communications, summaries, et cetera. But for marketing writing, that’s another thing. There are issues of copyright to consider, and what you’re feeding into the channel.
This article from Jane Friedman — she’s linked to it, and we’ll include that link in the show notes — is aimed at authors, but it has implications for marketing writers too. For instance, I work for an American IT MSP; that’s a managed service provider. Let’s say that an MSP in Spain that does our line of work sees our website and our authoritative blogs and e-books and likes it. They decide to run our whole English website into Spanish using an AI translator, then make a few tweaks and publish. There’s not a lot to stop them. There’s also the issue of being able to defend your copyright overall. The law is not yet fixed and the risks are real.

Then Steve Lubetkin writes: I find AI particularly helpful for rote tasks like organizing lists, transforming Excel spreadsheet columns, and summarizing interview transcripts. It’s also great for brainstorming ideas when it suggests perspectives I hadn’t thought of, but ultimately it comes down to using it as a tool for further human intervention, not less. Neville, you responded to that, saying: that’s a great way of putting it, Steve. Those rote tasks are exactly where AI seems to shine, the kind of work that takes time but doesn’t really benefit from deep human creativity. And I agree on brainstorming too. It can be surprisingly good at surfacing angles you might not have considered. I do this a lot.
Your last point really nails it, though. It’s not about removing human input; it’s about focusing it where it matters most. Used that way, AI doesn’t diminish the work. It can actually elevate it. And finally, we have a comment from Yorma Mananan, who writes: AI can help people escape from writer’s block, so why not use it to get started?

However, writers must own all content created with or without AI. If the content doesn’t sound like you, you shouldn’t publish it. The challenge is to learn to speak machine English with AI. Define clearly why you are writing, what you want to say, and what you want your readers to do after reading your content. Without your strategy, AI can’t produce quality content that sounds like you. Strategy first, AI second.
And Neville, you responded to Yorma. You said: I like how you framed this. Using AI to get past the blank page is a very practical use case. That starting friction is real for a lot of people, and AI can lower the barrier quite effectively. Your point about ownership is key too. If it doesn’t sound like you, it isn’t really yours, regardless of how it was produced. Where I’d add a layer is around your machine English idea. I see it slightly differently. Rather than learning to speak machine, I think the real shift is learning how to think with the machine: using it to clarify intent, test structure, and challenge assumptions. But I agree with your conclusion. Strategy first, AI second. Without that, you’re just generating words, not communicating. And Yorma responded to you saying: agree, machine thinking is a better way of describing the conversational relationship with AI.
Neville: Good comment!
Great. It’s excellent to have that. It’s interesting, Shel, that it illustrates something to me. It’s not a trend at all, but I’ve noticed recently, in other posts I see on LinkedIn that address this kind of topic, that increasingly people are leaving comments basically saying that you own it, not the AI, and that AI assists you in communicating rather than creating the final stuff, essentially, which is what some of these comments are alluding to.
Maybe people are waking up to that more than they have been in the past. It won’t silence the big critics; we’ve already seen that, because, you know, it’s going to be criticized no matter what. But the more people who talk up the reality of what we all talk about, the better: this is an assistant. It’s a tool to help you communicate more effectively. It enhances your ability in that context. And then you’ve got Steve talking about, you know, doing stuff with Excel and all this kind of thing.
I’m in the middle of an experiment myself. I’m still at the 10% start of experimenting with Claude Pro, which I know you’ve been using for a long time, but I’m taking this very much one step at a time, and my focus is very non-techie. One thing I have noticed, comparing as I have been doing, is what a prompt in its simplest form, a chat prompt, gets from Claude compared to ChatGPT. The differences are truly startling in many cases. Claude typically is richer and deeper in its content with the same prompts. Now, of course, there are variables at play here. ChatGPT knows a huge amount about me. Claude does too, because there’s a nifty tool that imports everything from ChatGPT to tell it, and I’ve also added stuff. So it’s done that. It’s missing on some levels, though, and that’s probably because it doesn’t yet know enough about me to do this. This is something you notice when you do this kind of thing with different tools. That’s not the main thing about Claude that wows me, I must admit; Cowork and some of the other tools are what I’ve been touching on, and Cowork I’ve spent quite a bit of time on. I’m sure we’ll have lots more conversations about this as we talk through topics. Let’s see what comes out of today’s menu of topics. So thanks for those comments, everyone.
Let’s see… no, that’s not the right one. This one: when workers lose their jobs, many turn to gig work to earn income while waiting for new opportunities. Increasingly, companies are hiring gig workers to create content and train AI systems. This raises various communication and ethical issues, and in FIR 508 on the 8th of April, we explained what’s happening and discussed the implications.
Then when bad actors use AI tools to clone a musician’s voice and upload synthetic versions of their songs, they can then file copyright claims against the original artist’s content. In FIR 509 on the 14th of April, we break down how this scam works, why it matters to communicators, and what you should be doing right now before an incident forces your hand. And you have some comments here.
Shel: We do, two of them. One from Eric Redicop, who identifies himself on LinkedIn as an entertainer and artist. He wrote: AI cannot use my work because it’s not posted online anywhere. I have to do it this way because YouTube allowed bogus copyright claims on my work and shut down my channel five times. And then Ray Baron-Wolford, who is a CEO at a charity organization, said: this is why it’s so important that every artist signs up to all copyright protection services.
Neville: Yeah, that’s a good point. I think the first commenter, though, talked about a genuine issue. And I’d wonder, you know, if he’s saying that this can’t happen to me because none of my content’s online… I wouldn’t rely on that 100%. Actually, I wouldn’t. No, I wouldn’t. And that second comment…
Shel: Mm-hmm.
No. I mean, you have to be producing the kind of content where you can have some success as an artist or an entertainer without having your content online.
Neville: Yeah.
Yeah, exactly. So the second commenter’s point, about signing up for every copyright protection you can find, is probably… well, not probably, it is a good idea, although I’m not sure that everyone would want to do that. And therein lies one of the issues about copyright: it depends on the jurisdiction. It’s a geographically based protection. Creative Commons is a good way to establish a reservation of your rights, or some of your rights, if you want to enable others to use your work, and that’s an international thing. So that’s peace of mind, I would say. There haven’t really been many legal tests; I’ve certainly not seen any court cases since Adam Curry back in 2009, when, I think it was in the Netherlands, he sued somebody who’d used a photo of his daughter and won the case. It was not a Pyrrhic victory, but he didn’t get any money out of it; what he got was the legal ruling that these people had infringed on his copyright. I’ve not seen any since. So nevertheless, it’s worth doing. So yeah.
Shel: Yeah, it goes back to Susan Gosselin’s comment, too, about any organization that does the same thing you do being able to take your content from your website, translate it into their language, and publish it. And what do you do if your content is not copyrighted? There’s nothing that you can do.
Neville: Yeah, that’s incredible.
That reminds me, Shel: back in the 2000s, website scraping was a huge deal. When blogs suddenly came to the fore, you found that people were stealing all your content. I remember being in a lengthy email exchange with someone, I think based in Romania or somewhere: not a hope in hell he was going to desist from doing that. Eventually it stopped, though. And that wasn’t from a copyright perspective; it was theft of content, which is related, isn’t it?
So yeah, lots to learn from that. And finally, in FIR 510 on the 20th of April, we revisited the topic of shadow AI, the situation where employees ignore company-approved AI tools, use their own preferred tools, and don’t tell anyone. We discussed how one company approaches the problem and how communicators might advocate for a version of this approach to aid AI adoption and speed up productivity gains. And now you’re up to date on FIR episodes.
Shel: Also want to let you know about Circle of Fellows. We had a fascinating discussion just this past Thursday on Circle of Fellows. It’s part one of a two-part discussion, and it’s all based on this — if you’re watching the video, you can see it — a new book by Diane Chase, former chair of IABC and a great communicator, called The Seven C’s of the New Communication Compass. What Diane did here was outline these seven points and find words that start with C to label them. She wrote one chapter and then basically had IABC fellows write the rest of the chapters. So Diane is the first non-fellow to appear on Circle of Fellows, but it’s her book, so it made sense. In the first installment, which we recorded on Thursday, Diane was joined on the panel by, of course, me — I moderated the session — Jane Mitchell, Ginger Holman, and Brad Whitworth. Next month, on the May episode of Circle of Fellows, Brad Whitworth will be the moderator, and I’ll be a panelist talking about the chapter I wrote about community. I’ll be joined by Zora Artis and Cindy Schmieg, IABC fellows who wrote the other chapters.
It’s a really good book; I recommend it for communicators. We talk about some of the issues around these chapters, and Diane explains why she chose these topics for the new communication environment. These are, you know, your North Stars, as it were. So definitely worth giving a listen to this month’s Circle of Fellows, which you will find on the FIR Podcast Network
at firpodcastnetwork.com. And we are going to take a short break now for a sponsor message. We will be back to dive into our six topics right after this.
Neville: Here’s a story that on the surface looks just like another example of AI going wrong. In mid April, one of the world’s most prestigious law firms, Sullivan and Cromwell, had to apologize to a US bankruptcy court after submitting a filing that contained multiple AI hallucinations, fabricated case citations, misquoted legal authorities, even references to cases that simply don’t exist. The errors weren’t minor.
They were significant enough that the firm had to send a formal letter to the judge, acknowledge what had happened, and submit a corrected version of the filing. And just to make it more uncomfortable, these mistakes weren’t caught internally. They were identified by the opposing legal team. Now, if you stop there, it’s easy to frame this as just another cautionary tale about AI, unreliable tools, hallucinations, the risk of automation in high stakes work. But that’s not the story here.
This firm didn’t lack guidance; quite the opposite, in fact. They have formal policies governing the use of AI. They require lawyers to complete training before they can even access these tools. Their internal guidance explicitly warns about hallucinations and tells lawyers to verify everything before it goes anywhere near a client or a court. In fact, their own language is very clear: trust nothing and verify everything. And yet, in this case, those policies were not followed.
A document that should have been scrutinized at multiple levels made its way into a courtroom with fundamental inaccuracies baked into it. The failure here wasn’t the technology. It was a failure of process, behavior, and accountability. Human in the Loop only works if there is an actual human who is clearly responsible for checking the work, not in theory, nor in a policy document, but in practice, at the point where a decision is made to send something out into the world.
And what this case suggests is that in many organizations, that loop is more notional than real. If AI is being used to accelerate work, where are the safeguards that ensure quality isn’t being compromised in the process? And are those safeguards actually being followed or just assumed? Having a policy is one thing, embedding it into how people actually behave, especially under time pressure, is something else entirely. And I think that’s where this story really matters beyond the legal profession.
Behavior is moving faster than governance. People are experimenting, they’re finding shortcuts, they’re integrating these tools into their daily workflows, often quietly and informally. The risk isn’t only that AI gets something wrong, it’s also that humans stop checking as rigorously as they should, or assume that someone else had, or trust output that feels authoritative, even when it hasn’t been properly verified. So when we talk about responsible AI or human-centered AI or governance frameworks,
this is what it comes down to in practice: not whether you have a policy, but whether, at the moment that matters, someone takes responsibility for asking a very simple question: is this actually correct? And if this case tells us anything, it’s that answering that question consistently is still much harder than many organizations seem to think.
Shel: Boy, isn’t that true. And the first thing I thought when I read this story, because it seems like the organization did everything right, is the question of what people are rewarded for in this organization. I wrote a post about this on LinkedIn probably a couple of months ago: that process speaks louder than any message you send through communication channels. And I included this story that I’ve probably relayed on this podcast 20 times, just because it is such a good analogy to really make this clear. A logistics company was experiencing a lot of breakage of its packages in one of its distribution centers, and the company just kept sending messages about how important it was to be careful and take the time when loading these packages, so that you’re not just throwing them around and breaking things that customers are expecting to receive unbroken. The breakage continued, and they brought a consultant in, a communications consultant I might add, who looked at all of this and found that the reason it was happening was that people were actually being rewarded for productivity and not for quality. They were getting paid more for doing this quickly, so as long as you were going to pay them more to get this stuff out quickly, the breakage was going to continue. You had to shift the rewards mechanism so it was rewarding quality. Then people slow down and make sure everything’s unbroken; of course, you lose some of the speed.
So when I hear the story about this law firm, the first thing I wonder is, yeah, we have all of these policies and we’ve been through all of this training and the governance is in place, but I’m being rewarded for getting this done quickly. And therefore I’m not going to take the time to review the citations that the AI cranked out. I don’t know that this is the case in this organization, but it was the first question I asked.
Neville: Well, that’s an interesting point, Shel, because that was in my mind too, though it didn’t make it into my notes: they are, I think, the second biggest law firm globally. They’ve been around 150 years, long and well established, highly credible, super reputation, all that. But they have some of their lawyers charging out at around two and a half thousand dollars an hour for their services. That’s serious money. And that kind of adds to your point that speed is the focus here, not quality. Now, to repeat what you said, we do not know if that’s the case in this law firm. But it could be. And your other point, that an individual might be saying, I don’t have time to check all this stuff because I’m being rewarded for getting stuff out fast: if that turns out to be true, they really have to address it, because it perpetuates this.
But it brings to my mind, I suppose, the reality of all these policies. There’s a lot of reporting on this around, if you look for it, talking about some of the training courses, the fact that no lawyer can even use one of their AI tools unless they are certified as having done this, this and this training program, or watched that video, all that. And yet this happened. So there’s something out of the loop here, something not working properly. Could it be as simple as the person who signs off on this, i.e., this piece of work is going to that client, or this filing is being submitted in this bankruptcy case in New York? This was a major bankruptcy case, not an individual’s; I believe it was a financial company in the Virgin Islands, the British Virgin Islands for that matter, in the Caribbean. It was high profile. But could it really be as simple as that? All this work going on at speed, and probably somebody thought, that’s fine, because we’re going to check it all. And yet nobody did.

So it signals something we’ve encountered before, and I’m reminded of a case we reported last year about Deloitte. The issue they had was something similar, but it was not a legal case in a court. It was a report they prepared for a client, which happened to be the government of Australia, and another one for the government of Canada, with six-figure fees, riddled with hallucinations and other things. Somebody didn’t check it in that case. I have no knowledge of what training they had in place; in this case, we do know what training is in place. So could it really be as simple as there being no known responsible individual, the partner in the law firm who is the authoritative voice on whether this is okay to send to that client or to that court or whatever? Even if 15 other people have been involved in checking stuff before, that one person has that responsibility. They obviously don’t have that, I suspect. Maybe that’s the solution to this kind of thing. I know you have some strong views on having a verifier in place in organizations. You want to talk a bit more about that?
Shel: Well, yeah,
I mean, I’ve said this before. I think that one of the jobs AI is actually going to lead to the creation of is a verification specialist: somebody who is accountable and knows they’re accountable. They are baked into the process. It gets passed to them, and they verify the entire document. I don’t care if it’s an 80-page filing. You know, there was another law firm that found itself in this trouble recently. It was in Oregon, and the court of appeals there sanctioned the lawyer involved for the AI errors that were in the law firm’s filing. And the court, in its finding, emphasized that AI isn’t a lawyer and it can’t replace professional judgment or accountability. And that principle travels pretty well. AI is not a communicator. It’s not a strategist. It’s not a lawyer. It’s not an HR expert. It’s not a subject matter expert. It’s a tool. The professional has to be accountable. So for communicators, that means we can’t outsource accuracy. We can’t outsource context. We can’t outsource tone or ethical judgment to a machine.
We can use AI as aggressively as we can find that it helps us do our job, but we still have to verify ruthlessly and we have to make sure that other people in the organization know that that’s part of their remit too.
Neville: Yeah, it’s a very tricky one, I think, Shel, given what we know currently about the developments happening with artificial intelligence, particularly in generative AI, particularly with tools like Claude and ChatGPT and Gemini. Steve Lubetkin alluded to this in his comment on one of the episodes from last month, when he talked about how great it is at, you know, deciphering columns in Excel spreadsheets. Here you’ve got a tool that can actually generate the spreadsheet and perform literally everything, analytics, pivot tables, and it works in 20 seconds. So you suddenly find you have a tool that is able to generate content that, with the traditional way of prompting, would have taken considerable to-and-fro: you know, changes here, editing there, telling the AI no, not this, come back with that, and then someone checking it. And here you’ve got a situation where this is accelerating. It can do these things, arguably, and again, depending what it is, it can do things that until now people would say AI can’t do. And I’m thinking about that when you say AI is not a communicator. No, it itself is not, at the moment.

So, I mean, this will take us down a rabbit hole if we get into it, which we’re not going to do. But it’s a point worth noting that sooner or later we’re going to have an AI tool of some type doing something that before only a human could do. And then where are we? So again, that’s all a bit in the future, and maybe sooner than we think. I don’t worry about that, in a sense, because there’s no point, Shel; it hasn’t happened yet. But I do worry about things like this, because this is an easy one to get right, it seems to me. You’ve got all these policies, et cetera, and you’ve got to, not so much enforce them, that’s not really the right word, ensure that people follow those policies. Therefore it’s a communication issue. It’s an educational issue: not a training issue, but education, awareness raising, and getting people to buy into why they should do this. In which case, you’re likely going to have to change your model of rewarding people. That’s a big deal. So this isn’t something you can do idly except at the surface, i.e., you do all this stuff, you’ve got one person who has the responsibility, and the consequences will fall on that person if it turns out no one followed the policies. That’s probably what would help here.
Shel: Yeah, and I think it’s also worth noting that it’s going to get easier to assume that AI got it right. I mentioned that AI currently isn’t a subject matter expert, but it’s becoming one. We have OpenAI creating one that’s just for doctors, and Anthropic just signed a deal with a law firm to create a legal-specific version of Claude. So, you know, I think when you look at what happened here with this law firm, we should look at it as sort of a dress rehearsal for AI-related crisis response. The law firm did the right thing, right? They acknowledged the problem, they apologized to the court, they filed a corrected version. But at that point, the reputational damage had already been done, because that narrative
had found its way into Reuters, The Guardian, Business Insider, Above the Law, LinkedIn, and all the legal newsletters. And that’s how AI failures will unfold for other organizations, whether it’s out of the legal department or elsewhere. You’re going to have the operational error, then the public narrative, and then people are going to pile on. Communicators should already have holding statements, internal FAQs, and escalation protocols for AI-generated errors, especially
in high-stakes content like a legal filing.
Neville: Yeah, plenty to think about on this, although the kind of advice I would give is: yes, you’ve got all your policies and so forth, as we discussed at the beginning, but have you got a human genuinely in the loop to take responsibility for what you’re giving to a client or to a court?
Shel: Well, let’s stick with the AI theme. Hey, that should be no surprise. Gartner is predicting that by 2028, 75% of employees will rely on chatbots to get relevant internal communications. That’s not the distant future, folks. It’s the year after next, and that should stop every internal communicator in their tracks. Not because chatbots are coming for the intranet or the newsletter or the manager cascade; that’s just too simplistic. The bigger shift is that employees are moving from browsing to asking. They’re not going to hunt through the intranet and a stack of emails to get an answer to a simple question. They’re just going to go to the chatbot and ask: What does this change for me? Do I need to do anything by Friday? Why is my department being reorganized? And they’ll expect an answer in seconds, and probably get one. The Gartner prediction is based on a very real problem:
information overload. According to Gartner’s report, employees who report high information overload are 52% less likely to report high intent to stay with their organization, so it’s a retention issue, and they’re 30% less likely to report high strategic alignment with the organization. Gartner also says chatbots will provide personalized, curated answers for pull communication and customized alerts for push communication. That’s a major shift in the employee communication model. Now, there are real benefits here. A well-designed internal chatbot can give employees faster answers, reduce HR and IT ticket volume, provide 24-7 support, support multiple languages, and cite authoritative sources so employees know where an answer came from. It can also deliver information within the flow of work rather than forcing people to go somewhere else to find it.
But here’s the part communicators are going to need to wrestle with. An AI answer is not the same thing as communication. An answer can tell an employee what changed. It may even summarize why it changed. But will it preserve the intent, the nuance, the context, and the emotional intelligence of the original communication? There’s no guarantee it will. Take change communication, for instance. We frequently write detailed articles explaining the rationale for a change because employees need more than the transaction. They need to understand the business context. They need to know what
options leaders considered and which options they discarded, and why. They need to hear what’s not changing. They need some sense that the decision was made thoughtfully and not arbitrarily. But what happens when no one reads the article? What happens when the employee asks the chatbot what’s changing in our benefits plan and gets a clean, accurate three-sentence answer that strips out the rationale completely?
This is where internal communicators have to evolve from being message producers to knowledge architects. The intranet still matters. It may be less of a destination and more of the trusted knowledge layer that feeds AI. Frank Wolf made this point really well in PR Daily: AI doesn’t eliminate the intranet’s jobs. It changes how pull, push, and people-centered communication work. The intranet becomes the foundation that makes chatbot answers reliable. If the knowledge layer is messy, if it’s outdated or written in a way that AI can’t interpret, or can’t interpret well, the chatbot is going to sound confident and still be wrong. This means we have to consider an expansion of the internal communicator’s job. Yeah, we still need to write, but now we also need to structure. We need clear source-of-truth pages and metadata.
We need FAQs that anticipate employee questions, and we need version control, expiration dates, and more. We need to decide which information can be answered directly by a bot and which question should trigger a human response. And we need to design for narrative preservation. That means writing source content with AI retrieval in mind. If the rationale for a change matters, don’t bury it in paragraph eight. Make it explicit. Label it.
Repeat it in a concise “why this matters” section. Smart brevity writing would be a great approach to adopt here. Create approved answer blocks that the chatbot can draw from, and test the bot by asking questions employees are likely to ask, then check whether the answers reflect not just the facts, but the intended meaning. This also has implications for measurement, by the way. Page views and open rates become less useful
if employees are getting answers without opening an article. We’ll need to measure the questions employees ask, the quality of the answers they receive, the content gaps the bot reveals, and whether employees understand the strategy, the change, or the policy after interacting with the system. It’s a lazy conclusion to say employees won’t read anymore, so let’s just give them chatbots. The better conclusion is that employees are changing how they access information
So we need to make sure the organization’s knowledge, context, and narrative survive that shift.
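(To make the “source of truth pages with metadata” and “approved answer blocks” ideas concrete, here is a minimal sketch in Python. The structure and every field name in it are invented for this example; they are not drawn from any particular intranet or chatbot platform.)

# One structured entry, written with AI retrieval in mind: the rationale
# travels with the facts, the entry has an owner, and it carries an
# expiration date so a chatbot drawing on it cannot silently serve stale
# answers. All keys here are hypothetical.
benefits_entry = {
    "id": "benefits-plan-2026-update",
    "owner": "internal-comms",
    "last_reviewed": "2026-04-20",
    "expires": "2026-12-31",
    "what_changed": "Dental coverage moves to a new provider on 1 July.",
    "why_this_matters": (
        "Leaders reviewed three providers and chose this one for wider "
        "network coverage. Premiums and vision coverage are not changing."
    ),
    "approved_answer": (
        "From 1 July your dental provider changes. Your premiums stay the "
        "same and nothing else in the plan changes; the switch widens the "
        "provider network."
    ),
}

def answer_is_current(entry, today):
    # ISO dates compare correctly as strings, so this refuses to serve
    # an approved answer past its expiration date.
    return today <= entry["expires"]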
Neville: Hmm. Yeah, this is a huge topic, Shel, because what struck me listening to you, in a sense a continuation of what we just talked about in the previous topic, is the verification of content that an AI produces for you. How are we going to deal with that? We talk about putting in place, you know, trusted sources for all this information. So let’s say I’m an employee, I’ve asked a question about something, and it’s given me an answer. I need to check that. So where do I check it? And how do I know that it’s accurate? Project that out to the kinds of stuff people deal with daily, and this is a huge undertaking, I would say, because that article has an interesting piece in it about the safeguards that CCOs specifically will need to put in place to mitigate the risks of hallucinations, misinformation, and the fragmented landscape that comes with AI, they say. CCOs will need a greater emphasis on information quality, as well as optimizing intranet content for AI searchability; you mentioned that point. They must also partner with IT, HR, and legal to establish robust governance to ensure that chatbot responses are accurate. That’s the bit: how are they going to do that? Because something internal surely isn’t going to produce answers based only on what it finds on your internal networks. It must be looking out onto the wider landscape. How do you verify and check all that? That’s a major debating point for taking this further, it seems to me. So it’s a huge undertaking.
Shel: Yeah, I think one of the things — and I sort of breezed through it pretty quickly — that we’re going to need to figure out is how we monitor and assess the questions employees ask that produce an answer drawn from internal communications content, whether that’s in an email that went out or something that was posted to the intranet. How do we monitor the questions being asked and the effectiveness of the responses, so that we can make adjustments?
So that we can report that, yeah, we can determine that there is alignment on why this change was made, or we can say, gee, people are just getting an answer that tells them what the change is, and they don’t have any understanding that we looked at alternatives and we tried to find a better solution. And this was the best we settled on. And here’s why it’s good for employees or here’s how to cope with this in your department or whatever it may be. And to do this without
necessarily surveilling employees, right? We don’t want to know who asked the question. I think it would be great if we could say, wow, look at this: 70% of the questions illuminating this particular point of confusion are coming from people in our operations division and not other divisions of the organization. That would be useful. But we don’t want to be able to say, John Doe asked this question — what an idiot.
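(A minimal sketch of the aggregate-only analysis described here, assuming a hypothetical log export that carries a division label and a topic tag but no user IDs. The field names and topic labels are invented for illustration.)

from collections import Counter

# Assumed shape of an anonymized chatbot question log: no names, no user IDs.
question_log = [
    {"division": "operations", "topic": "benefits-change-rationale"},
    {"division": "operations", "topic": "benefits-change-rationale"},
    {"division": "finance", "topic": "expense-policy"},
]

# Count question topics per division to surface points of confusion
# without ever identifying who asked.
by_division = {}
for q in question_log:
    by_division.setdefault(q["division"], Counter())[q["topic"]] += 1

for division, topics in by_division.items():
    topic, count = topics.most_common(1)[0]
    print(f"{division}: top point of confusion is '{topic}' ({count} questions)")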
It’s a serious issue. And I think the guidance is that we need to have the information in multiple places where the AI can see it, so that it realizes this is an important topic because it appears in several places, and that we have it in several formats: the FAQs, the answer blocks. This is repurposing the original content in ways that will help ensure that the AI inside your organization is delivering information with context, with those other elements that are so important for employees to understand, to create that alignment. And by the way, I mentioned The Seven C’s of the New Communication Compass. One of them is congruence, which we argue goes beyond alignment: that there is congruence in the organization. So if we want that (and it is important; it’s one of the reasons there is an internal communications function), we really need to start rethinking what we’re doing and how we’re doing it.
Neville: Yeah, agree. I think surveillance is a very, very slippery topic, and a slippery slope, because you’re going to have to have some kind of process in place, and surveillance probably is the correct label. Otherwise you’ll really struggle to find the answers you will need if you roll out something like this. You know, we’ve reported recently on keystroke logging and other monitoring that organizations now have in place to check whether employees are working or not. It’s still making headlines in the tabloids here: a recent case about someone who had this wheeze of having something touch his keyboard every now and again to show he was working. The trouble is, the employer’s software was savvy enough to tell which key it was, and it was the same key all the time. So with things like that, we’re probably going to have to rebalance this algorithm, let’s say, of privacy versus being able to see what people are doing. And that’s going to be difficult, given the history, I suppose, of some organizations not respecting employee privacy. Look at the China model, and that’s not what we want to have here: state surveillance of everyone’s daily lives, pervasive in urban areas if not necessarily throughout the whole country. So do we want that? We may not actually have the ability to say no to it, given what they need to do. So that’s part of the issue to include, I think.
Shel: Yeah, and I think one other thing we’re going to have to do is more asking. We’re going to have to survey after a change and ask employees if they understood the reason for the change. And there’s a problem with increasing the number of surveys. I’ve made this argument for years: people will take surveys all day long if they see the results of the surveys and they see that things are going to change.
If you’re asking people, did you understand this? Did you understand the rationale for it? Do you agree with it? It’s hard. I mean, you can report the results, but what’s going to change? You’re going to change maybe the way you’re producing content. That’s not going to be visible to employees. So it’s going to be a challenge to ask those questions frequently without producing that kind of survey fatigue that we hear so much about.
Neville: Big topic. OK. So, there’s a widely held idea about AI that’s been around almost since the beginning: that it would be a great leveler in the workplace. It’s kind of a continuation of what we’ve been talking about here. The thinking was that if you give everyone access to powerful tools that can write, analyze, summarize, code, and generate ideas, then people with less experience or fewer formal skills should be able to close the gap with those at the top. But what we’re starting to see in the real world looks quite different. In fact, it may be doing the opposite. The Financial Times has just published new research based on a survey of 4,000 workers in the US and the UK. And the findings are pretty stark. More than 60% — six-zero — of higher earners say they use AI every day in their work. Among low earners, that number drops to just 16% — one-six. That’s a pretty big gap.
So instead of leveling the playing field, AI adoption is heavily skewed towards the people who are already ahead: better paid, more experienced, often in more knowledge-intensive roles. I think it makes sense, because using AI effectively isn’t just about having access. Most people have access. It’s about knowing what to do with the tools. It’s about having the confidence to experiment, the context to apply them to real work, and the judgment to assess whether the output is actually useful. And those are things that tend to come with experience, with education, and with the kind of roles where you have a bit more autonomy over how you work. There’s a line in the research from one economist that really captures this shift: “The more intelligent the technology becomes, the more your own intelligence matters.” If you already have expertise, AI can make you faster, more productive, maybe even better at what you do. But if you don’t yet have that foundation, it’s much harder to extract real value from it.
There are other factors at play too. The research points to corporate training as one of the biggest drivers of AI use at work. So organizations that actively support and encourage adoption are seeing much higher uptake. And interestingly, the heaviest users of AI aren’t the youngest workers, as you might expect, but people in their 30s with more experience behind them. So again, this isn’t a generational story so much. It’s about how AI fits into the structure of work itself.
If AI is boosting the productivity of higher earners more than lower earners, then over time you’d expect the gap to widen in output, in value, and potentially in pay. And there’s a second order effect that’s a bit more subtle, but potentially more significant. If AI starts to take on some of the routine or entry level tasks that junior staff would traditionally do, then where do people build the skills? How do you develop expertise if the work that teaches you the fundamentals is increasingly being handled by a machine?
So instead of AI acting as a ladder, helping people climb, there’s a risk it starts to pull away some of the rungs. And this is where it connects directly to leadership and to communication. This isn’t just about who has access to AI tools. It’s about who feels able to use them, who is encouraged to use them, who is trained to use them well, and who is supported in making sense of what they produce. So this is about culture, not technology.
If organizations simply roll out AI and assume the benefits will spread evenly, they may find the opposite happens, that they’ve unintentionally widened the gap inside their own workforce. So perhaps the real question here isn’t whether AI will level the playing field. It’s whether leaders and communicators advising them are actively shaping how that playing field is changing or just watching it tilt.
Shel: Yeah, that training point is, I think, really critical. A project manager, an accountant, a field supervisor, an HR business partner, and a communication specialist don’t need the same training. They also don’t need the same examples delivered by communications. So the communicator’s contribution here is translation: here’s what this means for your role, for your task, for your team, and for your day.
That includes, you know, surfacing success stories from unexpected parts of the organization. I would love to find an example of a foreman on a construction project site using AI. I don’t want to just report on what the IT department and the other tech-forward departments are doing. The goal shouldn’t be that everyone becomes an AI expert. The goal should be that nobody’s quietly excluded from
the next operating model because they don’t see how AI fits in their work.
Neville: Yeah, we’ve talked about this multiple times, Shel, in various episodes: who feels able to use such tools. And that stems from leadership communication, in my opinion, which has to encourage people to do this, so they feel they’re being empowered, they feel they’ve been given permission, and they know they can count on help when they get stuck with something. That is hardly uniform in any organization, frankly. And this isn’t about creating a special department to do it; this needs to permeate across an organization. So you’ve got leadership at the very highest level, and that filters down: your local manager, your line manager, or whoever you report to has got to encourage you as well. I’m sure that happens in many organizations, but to make this really work, so that it doesn’t result in the gap widening between those who are naturally excited about this, who have the experience and the knowledge and the expertise to get value out of these tools, and everyone else, you’ve got to have something in place that helps the people who aren’t like that. And there’s a challenge for communicators, without any question.
Shel: Well, for the whole organization. I mean, as I talk to people in other companies, it seems that we’re still in that experimentation phase that I think most organizations should be beyond by now. The way it’s working right now in a lot of companies is that the curious employees try the tools, the cautious employees wait, and everyone else will eventually catch up. That’s not going to work. If this is becoming a material productivity and capability layer, we need to implement intentional adoption strategies. That means role-specific examples, approved tools, safe-use guidance, and peer demonstrations; what we’re trying to do where I work is get peers showing other employees what they’re doing. Psychological safety. Plain-language explanations of what employees are supposed to be doing with this. All of that needs to be put in place, and communications has a role to play here. But if we don’t, adoption is going to follow the path of least resistance, and that’s toward the people who have the power, the time, and the digital fluency, and then you’re going to end up with that gap.
Neville: Work to do here.
Shel: Yeah, and by the way, there was another part of the FT’s reporting that I found really interesting that you didn’t mention: men are more likely to use AI tools than women across a number of sectors. I think that should concern leaders, because AI fluency is becoming part of professional competence. And if men, along with higher earners and more experienced workers, are building fluency faster, what’s going to happen? Performance evaluations, promotion decisions, the visibility of the employees getting that kind of attention, and informal influence may start reflecting AI access rather than raw ability. Here again, there’s a role for communicators: pushing AI enablement into, say, manager toolkits, into your onboarding processes, into your training and team-level norms, as opposed to just letting it sit as an informal advantage for people who are already competent.
Neville: Yeah, like I said, work to do here.
Shel: Thanks, Dan. I am looking forward to seeing this WordPress release. I have to say, I really like the idea of collaborative editing. As you know, the FIR Podcast Network website is on WordPress, and Neville, you and I both use it; the ability for both of us to go in and work on something in more of a Google Docs setting, rather than logging in and just pulling up the post, makes sense to me. I definitely do see the issues with this as well, though, but it’ll be interesting to see this and the other changes. So thanks for the report, Dan. Really interesting. Well, we’re going to stick with the AI theme again, probably not surprising given the impact that it’s having. And by the way, I have to say that when I scroll through LinkedIn, it’s got to be 80% of the posts I see now that are AI-related, and that’s not hyperbole. Well, it’s a guess; I haven’t measured. But man, it is all AI all the time on LinkedIn. That’s what people are talking about. And it’s changing the relationship between PR professionals and journalists, just not the way a lot of people expected it would. The fear was that AI would automate the work: we’d have a lot of AI-written press releases and AI-written pitches and articles.
And yeah, there’s definitely a lot of that happening, and people are calling it out. But the more interesting shift is not that AI makes it easier to produce more content; it’s that AI makes bad media relations more obvious and more damaging. Pete Pachal, who was a guest on FIR Interviews, what was that, Neville, about a year ago? Yeah, he makes this point in an article in Fast Company: AI is becoming a new interface
Neville: A year ago,
Shel: For how information is found, prioritized, and interpreted. Journalists and PR people are both affected, because AI systems increasingly shape which stories surface, which ones get cited, and which narratives get visibility. Pete’s argument is that the advantage doesn’t go to the people who can generate the most material. It goes to the people who produce original reporting, useful expertise, clear narratives, and trusted relationships. That’s an important distinction for people who operate in the media relations world. AI can help you write faster, but speed was already part of the problem. Journalists were already drowning in irrelevant pitches before generative AI showed up. AI just gives every mediocre PR practitioner a way to send even more mediocre pitches even faster. The result isn’t greater efficiency; it’s more noise. And journalists are noticing.
PR Daily reported on a global Results Communications survey of nearly 1,700 reporters across print, digital, and broadcast. 81% said pitches and relationships with PR professionals are vital to their work. So journalists aren’t saying we don’t need PR. But 43% expressed negative views about AI-generated pitches, saying they read like a bot wrote them, that they lack perspective, and that they erode editorial trust.
So here’s the conflict. Journalists still need PR. They need access and sources and data and context and story ideas, but they’re getting a lot less tolerant of anything that feels mass produced, poorly targeted or synthetic. Medianet’s 2026 Media Landscape Report, based on feedback from 800 journalists, makes the same point more sharply. The report says three quarters of journalists have received pitches that appeared to be AI generated.
and about half said they could always detect machine-written copy. I would argue with that, but let’s not go down that rabbit hole. The same report says 86% of journalists now cite press releases as a key news source, which means the press release isn’t dead, but the stakes for credibility are higher. There’s also a widely circulated LinkedIn post citing the Medianet research, saying 78% of journalists report that receiving an AI-written pitch
decreases their trust in the PR person who sent it. That’s consistent with the other findings. Journalists aren’t rejecting AI assistance, they’re rejecting lazy use of AI. So what should PR practitioners be doing differently? I’ve got five things. First, stop using AI as a pitch factory. This is the most obvious trap. If the output is a generic email with a personalized opening line,
and a weak story angle, AI hasn’t made you better, it’s made you faster at getting ignored. Second, use AI before the pitch, not as a replacement for your judgment. Use it to analyze everything the journalist has written recently, summarize themes, identify gaps, pressure test whether the angle is timely, and prepare sharper source material.
PR Daily’s piece makes this point well. AI can help with research, angle testing, drafting, editing, personalization, and follow-up prep, but the human edit is where you add the credibility. Third, bring journalists something they can’t get from a model. That means original data, direct access to informed sources, a useful, articulate expert, a local angle, a contrarian but defensible point of view, or a story that fits the reporter’s audience.
Fourth, be transparent internally about what AI can and can’t do. PR leaders should have rules. AI can help research, structure, brainstorm, and edit, but it should not invent relevance, fake familiarity, fabricate personalization, or send anything without human review. And fifth, think beyond the pitch. In an AI-mediated media environment, you’re not just trying to get a reporter to open an email.
you should be trying to build a public record of expertise and credibility. That includes owned content, executive visibility, contributed thinking, data assets, analyst material, podcasts, newsletters, earned media, anything that reinforces a coherent narrative that AI systems will recognize and retrieve. So the future of media relations isn’t more automated pitching.
The future is more precise, more evidence-based, more relationship-driven, and more strategic. AI will handle more of the mechanics, but judgment, relevance, trust, and access become more valuable, not less. In other words, AI doesn’t eliminate the relationship between PR and journalism. It raises the penalty for abusing it.
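To make the second of Shel’s five points concrete, here is what “use AI before the pitch” might look like as a script. This is a minimal sketch using the OpenAI Python SDK; the model name, the sample headlines, and the summarize_coverage helper are illustrative assumptions rather than a toolchain either host recommends, and any comparable LLM API would work the same way.

```python
# A minimal sketch of "use AI before the pitch": have a model analyze a
# reporter's recent coverage before you decide whether and how to pitch.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable. The model name and sample headlines are illustrative.
from openai import OpenAI

client = OpenAI()

def summarize_coverage(reporter: str, headlines: list[str], angle: str) -> str:
    """Surface themes, gaps, and angle fit for one reporter's recent work."""
    prompt = (
        f"Recent headlines by {reporter}:\n"
        + "\n".join(f"- {h}" for h in headlines)
        + f"\n\nSummarize the themes this reporter covers, note gaps they "
        f"have not covered, and assess whether this angle fits their beat: "
        f"{angle}. Be concise and specific."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute whatever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample_headlines = [
        "How newsrooms are coping with AI-generated pitches",
        "The economics of local news in 2026",
        "Why the press release refuses to die",
    ]
    print(summarize_coverage("Jane Reporter", sample_headlines,
                             "AI is raising the penalty for lazy PR"))
```

The point of the sketch is the order of operations: the model does the background reading, and the human decides whether the pitch is worth sending at all.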
Neville: Yeah, it’s an interesting topic without doubt. I was actually pretty impressed with the five points mentioned by Courtney Blackan in the PR Daily report. It mirrors, frankly, almost everything we’ve talked about in this episode so far, and indeed in recent episodes. We have to keep repeating this, really, Shel, and you’ve done a good job, I think, of outlining what you’ve got to do, and how it relates to these other 10 things we’ve talked about. A couple of things struck me here that really do resonate. Research smarter: that makes complete sense; that’s got to be your starting point. But I particularly like “draft faster and edit harder,” I must admit. You use an AI tool to organize your ideas into a structured draft, or simply to improve the overall language of what you’ve written and rewrite some of it. To anticipate criticism from those who don’t think AI should be involved in any of this, I liken it to what you’d be asking a colleague to do, or that freelancer you’ve hired to help you; you’d be giving them the same brief as you would the AI tool. So what’s the difference? One’s not a human; that’s probably the biggest difference. But I don’t get swayed by any of those arguments that you can’t use AI to do this. Of course you’ve got to use it. The caveat is, for God’s sake, don’t just copy and paste the output into your document and send it. This is your assistant, not your creator. You’re the creator, and this helps you create very well, typically, all other things being equal. It’s kind of like A-B testing, or A-B-C testing possibly, with the AI assistant helping you do it quite quickly. “Personalize with precision” is another one she mentions. Don’t blast out the same email to 50 journalists, which is what many people do, it seems to me. You’ve got the ability to personalize those emails, and again, your AI can help you with drafting them. It will need to know quite a bit about the journalists and your relationship with them if you’re the PR person, so there’s quite a lot of prep work you’d need to do first. But the output from the AI will be pretty good if you do it right. These are things that take the method you might currently be using, simply prompting the AI, to a totally different level. And that’s what you’ve got to be thinking about now, because this is where it’s all going. This is way beyond simply a chatbot. So it’s a really good topic, and these reports that you’ve highlighted, Shel, are great. Pete Pachal’s post is excellent. We’ve got to have him back for another interview, I think, because we interviewed him when he was just starting his business, and he’s gone places with that business now. So it’s worth reading.
Shel: Yeah, I think so.
Neville: That and the PR Daily report. I do like those five points.
Shel: Yeah, remember we interviewed Aaron Kwittken from PRophet? That’s PRophet with a capital PR. And one of the things, yeah, it was a while ago. One of the things that system did was identify reporters who have written about a given topic: it reviews the content they’ve written over the recent past and crafts a personalized pitch for each of them, which you can then go in and edit.
Neville: Yeah, I do. Yeah.
Quite a while ago.
Shel: I don’t think you need this anymore, though. Sorry, Aaron, if you’re listening; I think you provide a great service, but now you can create an agent that does that: identify the reporters who have written about this topic, review their most recent articles, and craft a pitch for this press release. That can be done internally with an agent that would probably take about an hour to create. I mean, agents can go out and do amazing things now. Chris Penn just wrote a post: he found somebody’s wallet on the street, and it had enough in it that you could spend a few hours tracking down who owned it. There wasn’t a driver’s license with an address; there was some cash and a couple of debit cards. But he was able to give an agent all of that information and go do his work on something else, and after a couple of hours it said, I’ve narrowed it down to these three people. Chris was able to look at those three, figure out which one it was, the one who lived really nearby, and get the guy’s wallet back. We can do this kind of thing now in pursuit of PR objectives. The other thing I want to say is that I’ve gotten in the habit of recording interviews, giving the transcript to AI, and saying, organize this into a first draft of a press release, an article, a change notice, whatever it might be. I don’t copy and paste that in; that’s a first draft. It’s absolutely a case of draft quickly and edit hard. I hadn’t heard that framing before, but it’s absolutely what I do these days, because it saves a lot of time and gets me into the nuts and bolts of making the piece relevant, without having to spend half that time just reviewing the transcript and organizing it into a logical flow.
I think it’s a great use of AI and it’s one that I’ve been using for, geez, a couple of years now.
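Shel’s transcript-to-first-draft habit is straightforward to script. The sketch below is one hedged interpretation, again using the OpenAI Python SDK; the model name, the system prompt, and the first_draft helper are assumptions for illustration, not a description of his actual setup.

```python
# A hedged sketch of the transcript-to-first-draft workflow described above:
# hand a raw interview transcript to a model and get a first draft back for
# hard human editing. The model name, system prompt, and [CHECK] convention
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

def first_draft(transcript: str, output_type: str = "press release") -> str:
    """Organize a raw transcript into a first draft of the named format."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: use whatever approved model you have
        messages=[
            {
                "role": "system",
                "content": (
                    "You organize raw interview transcripts into first drafts. "
                    "Use only facts stated in the transcript. Do not invent "
                    "quotes or details; mark anything uncertain with [CHECK]."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Organize this transcript into a first draft of a "
                    f"{output_type}:\n\n{transcript}"
                ),
            },
        ],
    )
    return response.choices[0].message.content

# Example usage (hypothetical file name):
#   draft = first_draft(open("interview_transcript.txt").read(), "change notice")
# The output is a starting point only: draft quickly, then edit hard.
```

The [CHECK] marker in the system prompt is one way to keep the “edit hard” half of the workflow honest: the draft flags its own gaps instead of papering over them.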
Neville: It is. I agree. There are things you’re accustomed to that work for you, but pay attention to this kind of thing, because this is taking it to another level that will benefit you. You need to clearly understand what this is, though, and Pete’s article and the PR Daily piece are two sources that will help you do that. It’s definitely worth a look.
Our next story is a very different one. It’s not about something going wrong, and it’s not about AI; it’s about something going exactly to plan. Apple announced that Tim Cook will step down as CEO later this year and become executive chairman, with John Ternus, currently head of hardware engineering, taking over the role. On the face of it, this is a major moment: a CEO transition at a company of this scale, with, what, trillions in value? It’s currently the second most valuable company in the world; it was number one not long ago, so it might regain that. A transition like this often creates uncertainty internally, in the markets, and across the wider ecosystem. What’s striking here is how little disruption there seems to be. There were no leaks. The announcement landed cleanly. The market reaction was muted, and the tone throughout is calm, controlled, and focused on continuity.
And that’s the real story, I think, because this isn’t just a leadership transition. It’s a masterclass in how to communicate one. If you look at the messaging, everything reinforces stability. Cook isn’t disappearing; he’s staying involved as executive chairman. Ternus isn’t positioned as a bold new direction; he’s presented as a long-standing insider, deeply embedded in Apple’s culture and products. There’s no sense of rupture, just a steady handoff.
The most important part of this story, though, isn’t the announcement itself. It’s what happened before the announcement. Ternus didn’t appear overnight. He’s been gradually made visible over several years: fronting product launches, appearing in keynotes, becoming a familiar presence. So by the time this announcement arrives, it doesn’t feel like a surprise. It feels like confirmation. And that’s the key insight. This transition didn’t start with a press release. It started years ago.
What Apple has done is build familiarity, credibility, and trust in the successor long before the moment of change. So when the change comes, the narrative is already understood. And that changes everything, because most organizations treat moments like this as announcements, whereas Apple treats them as outcomes: the result of a story that has been deliberately shaped over time. That has practical implications, because when transitions feel chaotic or disruptive, it’s often not because the change itself is unexpected. It’s because the story hasn’t been prepared: the successor isn’t known, the narrative isn’t clear, the organization is reacting in real time. Apple avoids that entirely, not by communicating more in the moment, but by communicating earlier, by building trust before it’s needed. And that’s where this becomes relevant for leaders and for the communicators advising them. The real question isn’t how you announce a change; it’s how early you start preparing people to understand it.
Shel: Yeah, they didn’t treat this as a sudden disclosure. This was continuity without pretending that nothing was changing, right? It does a lot of reassuring work, not just about Cook staying in the current role through the summer and then remaining as executive chairman, working closely with Ternus during the transition. They also talked about
Ternus’s ties to Steve Jobs and Apple’s mission and its values. And that language isn’t an accident. I think the lesson for communicators is that a leadership transition needs facts and emotional reassurance, right? Employees don’t just want to know who reports to whom. They want to know whether the company they believe in is still the company they believe in. I do like, in PR Daily’s report, the discussion of different audiences.
They didn’t send one announcement everywhere. They had public messaging and employee facing messaging and they both serve different purposes, right? The public version celebrated the legacy and confidence they had in this transition. The employee version was warmer. It was more grounded. And I mean, this is communication 101 in a lot of senses, but still something that we should emphasize. Consistency doesn’t mean identical language. Employees…
deserve a message written for employees, not a copy of the press release with Dear Team pasted on top.
Neville: I agree with you, Shel. This is an excellent example of how to do that. And yes, there wasn’t a single message; that’s very true. It was tailored messaging that showed a clear understanding of those different audiences, internally and externally. So there’s a lot you can learn from that. And indeed, Ragan’s article by Allison Carter has some good insight you can learn from too. It’s worth reading that article as well.
So I call it a masterclass, and it probably is one of the best examples I’ve seen; not so much the press release, but what led up to it, all the other communication that occurred, the buildup. And I realize too, of course, that some organizations won’t know until nearer the time of the announcement that there’s going to be a change, so this isn’t necessarily a blueprint you can apply to everything. But in the case of Apple, news about what the company is doing has a big effect on people. Steve Jobs was a magnetic, mercurial personality, famously associated with that great phrase, the reality distortion field (one I’d apply to Trump too); it was his trademark, and he was without doubt mercurial in how he led. One thing that is notable, although it’s certainly not emphasized in any way, is that Ternus is a hardware guy, whereas Cook is a management guy. Cook took over from Steve Jobs and, over more than a decade, transitioned Apple to where we are now. With the changes going on in the world generally, and in the tech industry in particular, it probably does require more of a hard-nosed technological approach to leadership than purely business management. And of course, if Cook’s going to be executive chairman, he’ll be there to assist here and there. It’s an interesting time to be watching a company like Apple and seeing this happen.
Shel: Yeah, and the press release sends some messages without explicitly saying anything. First of all, the fact that they picked a hardware guy says a lot about where Apple is heading. They’ve faced a lot of criticism for their failures around artificial intelligence, which isn’t even mentioned in the press release. And that in itself sends a message.
Neville: Yeah, it does. Not mentioned.
Shel: What have been Apple’s wins under Cook’s leadership? I mean, the Apple Watch was a big one, and a lot of people thought it wasn’t going to be; they kind of laughed when it was introduced, but there are a lot of people wearing Apple Watches out there now. Big success. But mainly, he consolidated manufacturing in China, which may not end up being a great thing, though it’s made them a ton of money. What did he do, triple their revenues? And as you said, they’re the number two most valuable company in the world. Now they’re going to refocus on hardware, on product, the stuff that has made Apple what it is from the get-go. Software? I mean, you can talk about iOS and the computer software platforms they produce, but you never hear a lot of discussion of those at their big events. It’s, you know, we’re coming out with a watch; we’re coming out with a Vision Pro, which has been something of a failure. So this is a re-emphasis on hardware; they’ve made that point. Casey Newton came right out on Hard Fork and said that Ternus’s first act should be doing the deal with Google to integrate Gemini into Siri and being done with that whole thing, because Siri was supposed to get that AI update.
It’s been a couple of years now and it just hasn’t happened. So they have said a lot in this press release and not all of it was necessarily explicit.
Neville: No, you’re absolutely right there. So, an interesting time in the tech industry generally, and we’ll see what happens with Apple in the coming year or so.
Shel: Well, my favorite new word is slopaganda, and it refers to AI-generated propaganda: cheap, fast, emotionally loaded, and designed less to strategically persuade anybody of anything than to just flood the zone with images, memes, fake scenes, and shareable outrage. The most visible example of slopaganda right now is Iran’s use of AI-generated, Lego-style videos aimed at Donald Trump, Israel, and the U.S. They’re far from subtle. They show caricatured Lego versions of Trump, Benjamin Netanyahu, missiles, burning ships, and collapsing American power, and they use rap tracks, absurdist humor, conspiracy references, and the visual grammar of social media, not the language of state diplomacy. The New Yorker reported on Explosive Media, an Iranian digital media enterprise, which got started posting pretty routine anti-Western content that didn’t get a lot of uptake. Then it discovered that AI-generated, Lego-style propaganda cartoons were its breakout format. The clips accumulated millions of views. They were reshared by Iranian government accounts, promoted by Russian state media, and even picked up by anti-Trump protesters because the imagery was so flamboyantly anti-Trump.
The group told the New Yorker that it could produce a two-minute video in about 24 hours. Le Monde adds an interesting scale point. According to Cyabra, a company that analyzes content to distinguish authentic activity from coordinated manipulation (that’s right off their website), pro-regime videos received more than 145 million views across X, Facebook, Instagram, and TikTok during the second half of March.
Explosive Media eventually acknowledged to the BBC that the Iranian state was one of its clients; it had initially claimed to be fully independent. And this has captured a lot of attention, first because it’s visually disarming. Lego is familiar, playful, global. It turns geopolitical violence into something that looks like entertainment. Analysts say the Lego format serves as a kind of Trojan horse,
reaching people who wouldn’t otherwise engage with war-related content. It also works because it’s emotionally true to people who’ve always wanted to believe the underlying message. Viewers may not literally believe Iran is winning the war in the way videos depict, but they can choose to believe the emotional premise that the U.S. is weak, Trump is ridiculous, and Iran is standing up to a global oppressor.
And it works because it speaks the language of the target audience. This isn’t old school propaganda. It’s fast, caustic, meme-literate, and platform-native. In information warfare terms, this gives Iran something it used to lack, cultural reach into Western audiences. It lets Iran fight asymmetrically using ridicule and narrative disruption where it can’t match the U.S. militarily. But this is not only a geopolitical story.
The same tactics are going to show up in business. Maybe not tomorrow in Lego form, but the pattern’s just too useful to stay confined to politics. An activist shareholder can use an AI-generated video to ridicule a CEO, to dramatize a company’s alleged mismanagement, or turn a dry governance dispute into a viral morality play.
A disgruntled customer could generate convincing scenes of product failure, employee misconduct, or customer mistreatment. A labor dispute could be amplified with synthetic stories that blur the line between real worker grievances and invented incidents. An unscrupulous competitor could seed “just asking questions” content that implies safety failures, financial instability, executive hypocrisy, or environmental misconduct. An example from Canada matters here. The Canadian Digital Media Research Network identified a coordinated network of 20 inauthentic YouTube channels targeting Albertans with nearly 40 million views. The channels exploited real grievances and pushed narratives normalizing a move for secession, and even U.S. annexation of the province. The report says the accounts pushed an Albertan perspective, yet researchers found absolutely no evidence that the account owners were actually Albertan. That’s the bridge to business. Slopaganda doesn’t have to invent grievances. It can exploit real ones. A company with a safety incident, a layoff, a product recall, a labor dispute, or an unpopular executive decision is already vulnerable. AI just makes it easier for hostile actors to package that grievance
into emotionally potent, shareable content. So what should communicators do about this? Well, first, obviously, build monitoring capability for synthetic narratives, not just mentions. The risk isn’t one fake video. The risk is a pattern. Repeated themes, recycled scripts, coordinated accounts, sudden spikes, and emotionally consistent attacks. Second, prepare your verification protocols now.
If a video appears showing something damaging, who determines whether it’s real? Legal? Security? Comms? IT? Outside forensic consultants? That first hour is really important, so knowing who to go to to find out whether something is real is critical. Next, strengthen your owned record. If AI systems and social audiences are going to interpret your organization through fragments, make sure there’s a clear, accessible, credible body of truth: your policies, your timelines, FAQs, source documents, leader statements, and plain-English explanations. And finally, scenario-plan for synthetic outrage, not just misinformation but ridicule. Memes move differently than allegations. A dry correction rarely defeats a funny attack. Communicators need response options that are fast, human, factual, and proportionate.
And, you know, one last question to address here: should communicators use slopaganda themselves? No, they shouldn’t. Not if we’re talking about deceptive, synthetic, emotionally manipulative content designed to obscure truth. That’s not communication, man. That’s reputational arson. But communicators absolutely should learn from the format. AI-generated creative can be used ethically if it’s clearly labeled, truthful, brand-safe, and grounded in real information. But understand that attention has moved toward visual, fast, emotionally resonant storytelling, and we should move along with it.
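On Shel’s first recommendation, monitoring for patterns rather than single mentions, one way to picture it is a spike detector over themed mention counts. The toy sketch below, in Python using only the standard library, flags a theme when its latest daily volume jumps well past its trailing average. The themes, thresholds, and sample data are invented for illustration; a real pipeline would sit on top of an existing social listening feed that classifies mentions into themes upstream.

```python
# Toy sketch of "monitor for patterns, not mentions": flag a theme when its
# latest daily mention count spikes well above its trailing average. Themes,
# thresholds, and sample data are illustrative assumptions only.
from collections import defaultdict
from datetime import date

def spike_report(mentions, window=7, multiplier=3.0, floor=10):
    """mentions: iterable of (date, theme) pairs. Returns themes whose latest
    daily count is at least `multiplier` x the trailing average (min `floor`)."""
    daily = defaultdict(lambda: defaultdict(int))
    for day, theme in mentions:
        daily[theme][day] += 1
    flagged = []
    for theme, counts in daily.items():
        days = sorted(counts)
        if len(days) < 2:
            continue  # need some history before a spike means anything
        latest = days[-1]
        history = [counts[d] for d in days[-(window + 1):-1]]
        baseline = sum(history) / len(history)
        if counts[latest] >= max(floor, multiplier * baseline):
            flagged.append((theme, latest, counts[latest], round(baseline, 1)))
    return flagged

if __name__ == "__main__":
    sample = (
        [(date(2026, 3, d), "ceo-ridicule") for d in range(1, 8)]   # ~1/day
        + [(date(2026, 3, 8), "ceo-ridicule")] * 40                 # spike day
        + [(date(2026, 3, d), "recall-rumor") for d in range(1, 9)] # steady
    )
    for theme, day, count, base in spike_report(sample):
        print(f"SPIKE: {theme} on {day}: {count} mentions vs ~{base}/day")
```

The design choice worth noting is that the unit of alarm is the theme, not the individual post; one fake video is a verification problem, while a coordinated pattern is a monitoring problem.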
Neville: Yeah, it’s an interesting topic, isn’t it, Shell? I think you’re kind of no communicator should not do this. That’s a message clearly the US government’s ignoring, judging by what they have been doing, or the White House, should say, that’s reflected back in what the Iranians are doing and their proxies and indeed in individuals by the thousands doing the same. So misinformation, disinformation, fakery, it’s everywhere.
I read a post about this at the end of March that looked deeply into AI-generated content and how it’s being used by both sides. And there are a number of reports; notably, Deutsche Welle, the English-language news service from the German broadcaster, and France 24 as well had some really good, well-researched articles with examples of what’s happening in this area. There’s a great one someone posted showing a Lego box; you can visualize it from the description, the kind of scene we see on the TV news all the time: residential buildings, apartment blocks in ruins, blown to bits, all made out of Lego bricks, complete with the Lego logo, looking exactly like a Lego product. So a brand is being brought into this unwittingly. The reality is that communicators are between the devil and the deep blue sea here, I think, because you may be in a business that’s not in the defense industry, not involved in anything to do with a war, and yet some of your clients are on the fringes of all this by the nature of their business. So they’re dragged in; Lego is a good example. What do you do about it? Do you respond in kind with some kind of jokey thing about, you know, whatever it might be in this Iran example? It’s a judgment call, I would say. There seems to be a movement, if you like, toward treating this kind of thing as a matter of normality. And I think it’s very dangerous. Philippe Borremans had a really good piece in the middle of March on what this war is teaching us about communications generally, not specifically crisis communication, although that’s mentioned in there. The BBC had a report in early March about AI-generated Iran war videos, a surge of them as people get the tools to create these things. So that is part of our landscape.
So it’s a question with no easy answer for communicators. The question you asked doesn’t have an easy answer, but it may be one we have to find an answer to. That’s easy to say; I don’t know what that answer is. The most striking thing that occurred to me is that it’s not the sophistication of these tools; these videos are not slickly produced. They are produced, you could say, by those who are savvy with social media and social networks and what works in terms of spreading: what is spreadable, what is memeable. And we are not part of that. If you’re not, people are talking about your brand and you’re not there. So it’s a big question, but it’s something where we have to try to understand what’s happening and somehow come up with an answer.
Shel: Yeah, you raise an interesting point about brand safety. Is Lego going to issue a takedown notice to Explosive Media, a digital media company in Iran? Probably not, knowing they’re not going to respect a takedown notice, and there’s no court you can necessarily go to. So basically, you kind of have to live with it if you’re Lego, and I suspect that’s what they’re doing. But from a communication standpoint, it’s really important to understand that the Iranian-produced stuff is getting far more traction than the US-produced stuff. And the reason is that it leverages grievances the Iranians already had, and that other people around the world already have, with the US. The US stuff just shows the attacks on Iran. And if you think about the average American, or perhaps even the average Brit, what grievances do they have against Iran?
I mean, the grievances here are within the government, not within the broad population. And that’s why these are so effective, is that the Iranians and other populations around the world do have grievances, justified or not. So as you look at this stuff moving into the business world, consider what kind of grievances people might have with your organization. That’s where they’re going to attack you. That’s where you have to build up your defenses now before they do.
Neville: Yeah. So it’ll be interesting to see, well, where are we, nearly halfway through the year? Not quite, actually; quite a bit less than halfway. Yeah, it is. But it makes me think: I wonder about the big picture of trust and the reporting we see on that, notably the Edelman Trust Barometer. What changes are we going to see as this year plays out, as it were? We have a war in the Middle East.
Shel: It’s still going pretty fast.
Neville: Anyone who has even a fleeting interest in what’s going on in the Middle East knows this is a situation that has existed for millennia, frankly. In modern times, since 1948 and the creation of the State of Israel, this has been happening in the Middle East: a war, one way or another, between tribal factions, with states then getting involved. Iran, from what I understand, has long been a thorn in the side of US governments under different presidents over the decades. It doesn’t resonate the same way here in the UK, notwithstanding some things that happened decades ago. But there’s the notion that the US really was the only country that could do what it did: bomb Iran and start an undeclared war, not asking anyone to help, and then complaining when no one came to its aid. So it’s a dreadful situation, the war itself, obviously, but also the murkiness of what it has created in the context of what we’re talking about. We’ve talked about this element before: you do not control the message anymore, even if it’s about you. That has never been more true than in what we’re seeing right now. The Iranian government doesn’t control any of the messages, not really. It’s anyone who’s got an internet connection and a tool to create an AI-generated video, or whatever it might be, and then share it online. That’s who’s got control, but only in a limited way, because it then goes out there and anyone can do anything with it. It’s making it onto some traditional media, not just social. So who knows where it’s all going to go, Shel. And as this war continues without any sign that it’s going to suddenly stop, this is the new normal.
Shel: Yeah, and keep in mind, we’re not talking about a single piece of content. We’re talking about flooding the zone with multiple pieces of content that reflect the same grievance, make the same points, and have the same punchline, so that people watch it and it appears wherever you might be getting your content. So you’ve got to look at it that way and take steps to deal with it, because, well, I don’t have a good business example yet, but I’ll happily make a bet with somebody that within two years we’re going to see this kind of content aimed at business, from that disgruntled investor or unhappy customer or whoever it is. It is just so easy to do. And that’ll wrap up this episode of For Immediate Release. Our next monthly long-form episode is scheduled to drop on Monday, May 25th.
Neville: Great fun.
Shel: So we’ll be recording on Saturday, May 23rd. In the meantime, we hope you’ll comment. As always these days, all of the comments we shared in this episode came from LinkedIn, and you’re welcome to look for our announcements of new episodes on LinkedIn, Facebook, Threads, or Bluesky. We’ll check for comments there, but you can also send them to [email protected]. I’m going to come up with a contest, and probably announce it in the May episode, for an audio comment: anybody who submits one, we’ll put your name in a hat and draw a winner, and you’ll get something. I’ll have to figure out what. We don’t have FIR merch anymore. Maybe we should start that again; we’ll come up with something. But you can leave an audio comment by attaching an MP3 file to an email,
Neville: Ha ha ha ha.
No, we don’t. Maybe we should. We’ll come up with something.
Shel: Or clicking the record voicemail tab on the right-hand side of the FIR Podcast Network website. You can comment on the show notes that we leave on the FIR Podcast Network. So many ways to leave a comment. And we also have a community on Facebook and an FIR page on Facebook. Any of those places will do. And we also hope that you will leave your ratings and reviews of FIR.
wherever you get your podcasts. And we will be resuming our short midweek episodes next week, so look out for those; the best way to get them is to subscribe to For Immediate Release. And that will be a 30 for this episode of For Immediate Release.
The post FIR #513: Why Communications Must Build the Narrative Code for the Agentic Age appeared first on FIR Podcast Network.
27 April 2026, 7:01 am - 1 hour 10 seconds
Circle of Fellows #127: The 7 Cs of The New Communication Compass, Part I
The next two “Circle of Fellows” episodes will offer something different from our panels of the last several years. We welcome Dianne Chase, a veteran communicator and former IABC chair, to the discussion. While Dianne is not a Fellow, she did recruit six Fellows to write all but one of the chapters for her new book, The 7Cs of the New Communication Compass. (Dianne wrote the seventh chapter.)
The book, which has five stars on Amazon, “offers both a guiding framework and a practical roadmap for mastering strategic communication in complex environments,” according to its description. “If you are a leader, manager, educator, public official, influencer, or anyone striving to make an impact, this book is an essential and thought-provoking read. It distills communication excellence to foster collaborative results and organizational effectiveness.” The book’s Cs include Collaboration, Connection, Compassion, Cohesion, Community, Congruency, and Calibration.
For the first of these conversations, Dianne will join Shel Holtz, Ginger Homan, Jane Mitchell, and Brad Whitworth to discuss Connection (Brad’s chapter), Compassion (Dianne’s chapter), Congruency (Jane’s chapter), and Calibration (Ginger’s chapter).
Join us for this very different “Circle” at noon EDT on Thursday, April 23. Participants in the live stream can ask questions and share comments, observations, and experiences, and become part of the discussion. If you’re not able to join us, you can listen to the audio podcast later or watch the YouTube replay.
About the panel:
Dianne Chase helps organizations and leaders harness the power of strategic communication to navigate crises, build trust, and drive positive change. With over two decades of experience in journalism and corporate communications, Dianne has developed a unique approach for training and consulting clients that combines crisis management expertise with the art and science of business storytelling. Dianne is an award-winning media, journalism, and strategic communication professional with profound expertise in communication disciplines, most notably crisis communication, issues and reputation management, media training, and executive communication. She is one of two people in the world accredited in the powerful GENIUS Business Storytelling methodology, created by international communications thought leader, Gabrielle Dolan. She is former chair of the International Association of Business Communicators, and author/editor of The 7 Cs of The New Communication Compass.
Ginger Homan, ABC, SCMP, IABC Fellow, counsels senior leaders seeking to bring out the best in their people and brands. Her award-winning communication model for driving transformation has been used to change behaviors, align cultures, and build thriving communities worldwide. Her work with senior communication professionals has enabled them to align their department goals with business goals, achieve measurable results, and expand their influence. Founder of Zia Communication, she is a seasoned speaker, coach, and workshop facilitator. Her clients include Walmart, the Walmart Foundation, the Walton Family Foundation, T.D. Williamson, CITGO Petroleum, Phillips Seminary, and MOSAIC. IABC, PRSA, and SMPS have honored Ginger’s work on the local, regional, and international levels. A past chair of IABC, her volunteer work has been honored with three IABC International Chair’s Awards for leadership, and she is a recipient of the Leadership Tulsa Paragon Award for work in her local community.
Jane Mitchell’s career began at the BBC in London on live TV programs. She moved on to producing award-winning films and videos for public- and private-sector organizations and to developing groundbreaking employee engagement programs. Since 2006, when she formed her own consultancy, she has guided organizations (some of which have experienced cultural trauma) in embedding values and ethics by understanding culture and leadership, and their link to high-performing, sustainable organizations. She has worked with Top 100 companies worldwide and is a regular conference speaker. Jane has been a member of IABC since 2008 and has served on local, regional, and International IABC Boards. In 2021, she was Chair of the (virtual) World Conference and became an IABC fellow in 2022. She is based in the UK and now spends the majority of her professional time as a Non-Exec on company boards and Employee-Owned Trusts.
Brad Whitworth, ABC, SCMP, IABC Fellow, is a pre-eminent thought leader, lecturer, and author in organizational communication. He has led global internal and executive communication programs at HP, Cisco, Hitachi, PeopleSoft, AAA, and MicroFocus. He holds an MBA from Santa Clara University and undergraduate degrees in journalism and speech from the University of Missouri. Brad lives in California wine country, where he grows Pinot Noir on his property. A former broadcaster, Brad has made more than 300 presentations to executives, communicators, and university classes worldwide. Brad is a past board chairman of the International Association of Business Communicators and a Fellow of the association. He is one of the authors of The IABC Handbook of Organizational Communication and the new IABC Guide for Practical Business Communication: A Global Standard Primer. He chaired the Global Communication Certification Council in 2021.
The post Circle of Fellows #127: The 7 Cs of The New Communication Compass, Part I appeared first on FIR Podcast Network.
26 April 2026, 9:16 pm - 25 minutes 54 seconds
FIR #510: Should Companies Embrace Shadow AI?
Employees have long found ways to use software tools to get the job done, even when those tools are not approved. It’s called Shadow IT, but ever since generative artificial intelligence hit the scene in 2022, employees have adopted a new version: Shadow AI. The company approves Microsoft Copilot, but employees opt to use their smartphones or personal laptops, along with their personal accounts with ChatGPT, Gemini, Claude, Midjourney, or whatever best suits their needs.
For most companies, this is a problem that needs to be addressed through repeated policy announcements and vigorous crackdowns. One company, though, took a different approach. In this short, midweek FIR episode, Neville and Shel outline what the company did and how communicators might advocate for a version of this approach to aiding in AI adoption and speeding up productivity gains.
Links from this episode:
- The Hidden Demand for AI Inside Your Company
- Shadow AI Threat Grows Inside Enterprises as BlackFog Research Finds 60% of Employees Would Take Risks to Meet Deadlines
- FIR #419: Is Shadow AI an Evil Lurking in the Heart of Your Company?
- The Rise of Shadow AI is a Double-Edged Sword for Corporate Innovation
The next monthly, long-form episode of FIR will drop on Monday, April 27.
We host a Communicators Zoom Chat most Thursdays at 1 p.m. ET. To obtain the credentials needed to participate, contact Shel or Neville directly, request them in our Facebook group, or email [email protected].
Special thanks to Jay Moonah for the opening and closing music.
You can find the stories from which Shel’s FIR content is selected at Shel’s Link Blog. You can catch up with both co-hosts on Neville’s blog and Shel’s blog.
Disclaimer: The opinions expressed in this podcast are Shel’s and Neville’s and do not reflect the views of their employers and/or clients.
Raw Transcript
Shel Holtz: Hi everybody, and welcome to episode number 510 of For Immediate Release. I’m Shel Holtz.
Neville Hobson: And I’m Neville Hobson. There’s a quiet tension playing out inside many organizations right now. On one side you have leadership teams, IT, legal, and compliance, all trying to put structure, governance, and control around how artificial intelligence is used at work. On the other side you have employees who’ve already moved on. They’re not waiting for official tools. They’re not sitting through pilot programs. They’re not asking permission. They’re opening ChatGPT on their phones. They’re using Claude in a browser tab. They’re experimenting quietly, often invisibly, finding ways to make their work faster, easier, and sometimes better. And in many organizations, this shadow AI behavior is still being treated as a problem — something to restrict, monitor, or shut down. It’s a topic Shel and I discussed on this very podcast in episode 419 nearly two years ago, and it hasn’t gone away.
Neville Hobson: In fact, recent data suggests it’s accelerating. A study last November by BlackFog and Sapio Research found that nearly half of employees surveyed in the UK and US are using unsanctioned AI tools. Even more striking, 60% said they would take security risks with those tools if it meant meeting a deadline. So this isn’t fringe behavior — it’s become normal. An article in the Harvard Business Review this month argues that instead of treating unauthorized AI use as a compliance issue, organizations should see it as a signal — a sign that people are already finding value in these tools, even if the organization hasn’t caught up. We’ll explore that idea in just a moment.
Neville Hobson: The article calls this the hidden demand for AI inside your company. And when you look at it through that lens, the picture changes quite dramatically. Because instead of asking, “How do we stop this?” you start asking, “What are we missing?” The piece goes further than theory. It looks at what one organization actually did when it recognized this dynamic: BBVA, a Spanish multinational financial services company with more than 125,000 employees. Rather than clamping down on shadow AI use, they moved quickly to provide a secure enterprise environment. But more importantly, they didn’t try to control everything from the center. They took a different approach. They identified and empowered what they call “champions” and “wizards” — the people already experimenting, already curious, already building things. They created a network, a community of practice, a way for ideas, use cases, and practical solutions to spread peer to peer across the organization.
Neville Hobson: And the results, at least as reported, are striking: thousands of employees actively using AI tools, thousands of internally created applications, and measurable time savings of hours per person every week. But perhaps the most interesting part isn’t the numbers — it’s the philosophy behind it. The idea that successful AI adoption doesn’t start with a perfectly designed top-down strategy. It starts by recognizing that innovation is already happening, just not where leadership expects it. So the question becomes: do you try to control that energy, or do you find a way to harness it? And that opens up a much broader conversation, one that goes well beyond technology. It touches on leadership, trust, and culture — on how change actually happens inside organizations. And, importantly for communicators, on how you surface, legitimize, and guide behavior that may already be happening under the radar.
Neville Hobson: Because if employees are already using these tools — and most evidence suggests they are — then silence or restriction alone isn’t really a strategy; it’s a gap. So in this conversation, we want to explore that gap. What shadow AI really tells us about organizations today, whether the BBVA approach is something others can realistically replicate, and where the risks still sit, because they have not disappeared. And we should be clear: BBVA may be an outlier. It’s a highly data-mature organization with strong leadership alignment. Many organizations don’t have that foundation. So the question isn’t just whether this works — it’s whether it can work anywhere else. And what that means for the future of work, and for the role communicators play in shaping that future. Shel?
Shel Holtz: Well, a few thoughts, starting with the fact that BBVA has the financial resources to provide a secure environment for those tools that employees are using. There are many organizations whose IT budgets are razor thin and don’t have those resources, so they would need to figure something else out. But I think there’s a caution here worth raising. The numbers from BlackFog are real, even if the framing from the Harvard Business Review is optimistic: 34% of employees using free versions of tools when paid, approved versions exist; 58% of unsanctioned users on free tiers with no enterprise protections. The reframing from threat to signal doesn’t eliminate the exfiltration risk — it reframes how we need to respond to it.
Shel Holtz: Communicators should be careful not to let the BBVA-style narrative become an excuse to ignore governance. The right frame is: harness the demand, don’t suppress it, and build the governance at the same time. Employees using unsanctioned tools and putting secure data and company information into them — that’s a governance risk, and I don’t think we can ignore it. I mean, I think what BBVA did is great, and I think they baked it into some governance while looking at a new approach they could afford to take. But for many organizations, governance is still a requirement.
Neville Hobson: Well, I agree. It’s important and it’s not to ignore by any means. I think, Shel, you fleshed out a little bit the survey that I mentioned, which is actually useful to have that level of detail. But the big question for me is: if this is the picture in many organizations, according to that survey — compared to data previously — this is getting worse, or rather, it’s happening more frequently. People are just going ahead and using what works for them as opposed to what’s the official thing. What is that a symptom of? Maybe a lack of trust? It’s probably a mix of things. And to me, the communicator’s role here seems to be to try and help people on the one hand understand what the tools can do for them, and on the other hand to help the organization understand that we need to address this issue. People aren’t using the approved ones. They’re doing stuff on their own, and that isn’t good.
Neville Hobson: You mentioned security risks. The Harvard article goes into some detail about those, as do the people who conducted the survey. You can picture how severe the risk is. We've seen examples in recent months of organizations that have suffered from unauthorized use of unapproved software tools — not necessarily generative AI tools, but software certainly. And it's a big deal. As for the question of whether you try to control all this and look for ways to stop it — we asked that very question two years ago, and we could probably just insert the recording from then and replay the answer. But let's talk about it. Personally, I don't think organizations should try to stop it. That's a fail. There's no win in it at all, certainly not for the organization. So how should communicators approach this, do you think?
Shel Holtz: Well, I’m not suggesting that organizations crack down on this and become Big Brother, looking at the tools that people are using, especially when they’re using them on their personal phones or personal laptops. But there are definitely things communicators can do. The first is to surface and amplify the internal use cases — not just the fact that people are using these tools, but what they’re using them for. When the security people and the legal people find out that this is actually driving effective work product from these employees, I think there might be more appetite for figuring out a way to bake this into the governance documents and policies the organization has established.
Shel Holtz: And I think giving employees permission narratives matters — telling them it's safe to experiment, letting them know how to do it, and suggesting where the guardrails are. So if you are using shadow AI, here are the things to be careful about. Let them know what the risks are and how to avoid falling into those traps. Communicators can also translate the IT and legal guardrails into plain language that doesn't read as prohibition, because prohibition just breeds resentment toward the organization, and employees will simply keep using what they're using. And then there's collecting and routing the demand signal back to leadership. Why are employees using these tools when approved ones exist? What are the advantages? That way, leadership can make investment decisions that match how employees are actually working. There's a lot of work here for communicators that goes beyond simply saying, "Don't do this."
Neville Hobson: Agreed. And in fact, you can learn from much of what BBVA did, even if you’re not an organization with that established foundation and 125,000 employees. They did things most companies aren’t going to be able to do. For instance, they reached an agreement with OpenAI and deployed a customized version of ChatGPT Enterprise in a secured, exclusive cloud just for the company. The reasoning is interesting. What the Harvard Business Review report says is that the strategic decision was clear: it was more dangerous to have unmanaged, hidden AI usage than to rapidly deploy a managed, secure solution that aligns with existing needs. Most companies aren’t going to be able to do that. So it comes back to perhaps what you’ve just proposed — explaining it to people, the pros and cons, the risks, and so forth.
Neville Hobson: But I think you need more than that, too. Otherwise, you’re going to have significant numbers of people who will ignore it and just go ahead anyway with what they’ve been doing. So maybe elements of what BBVA did — for instance, the network of internal champions and expert wizards to spread knowledge, rather than the formal top-down communication you might expect. You’d have people within the organization who are knowledgeable, who have a history of responsible use themselves, who can help explain to others and help them replicate that. You end up, I think, with steps toward broad compliance that everyone can buy into. That would be helpful, because I can see that the idea of anyone in an organization of whatever size just doing their own thing with whatever tool they like is not a good idea at all.
Neville Hobson: And that isn’t unique to this. We’ve had that kind of conversation in decades past about software. I remember when Hotmail first came out, and when Microsoft Network first came out, the arguments in organizations — and indeed the one I worked for at the time — was, “You’re not allowed to use this on your company laptop, so use it on your own,” stuff like that. That’s definitely not a good thing. So you need to act to address issues like that so that people trust you and respect you and are willing to follow a restriction — or a behavior change, if you like — that would help. It’s interesting, the learning you can get from BBVA’s example, even though you’re not an organization that size with a budget to match. It’s a lot about education. It’s trusting employees, absolutely, as you pointed out, Shel. But I think that’s a two-way street. You need to have a quid pro quo: if you have these freedoms to use whatever you want, you need to do it responsibly. Share your learnings with others in the organization. Things like that. To me, that seems like a really good place to communicate.
Shel Holtz: Yeah, there’s communication happening at BBVA. They have 11,000 active users and 4,800 custom tools being used by those folks. That didn’t happen because the communications department posted an article about them. This was peers talking to their peers about what was working. It validates something you and I have been talking about for years, which is that authentic, lateral, employee-to-employee storytelling beats top-down cascades every time.
Neville Hobson: Precisely.
Shel Holtz: But it is communication. And why wouldn't the internal communications department jump on that and help facilitate it — providing the channels for it, rather than the sneakernet that's probably happening now? And because they're engaged and trying to keep this from happening below the surface, they're in a position to identify the use cases worth taking to leadership. The BlackFog survey you referenced found that almost 70% of C-suite executives believe speed is more important than privacy or security. So if people are getting things done faster — if you can demonstrate that there actually is productivity improvement, and it's because of the unapproved tools employees are using — I think that's motivation for leadership to look at either approving those tools or finding ways to let people use their own accounts while protecting the integrity of the data.
Neville Hobson: Yeah. The results the Harvard Business Review reports from BBVA are worth noting, even though the scale isn’t what many companies would experience. They talk about 80% of usage of the system they set up coming through direct chat prompting, and the remaining 20% through employee-created GPTs. Now, this is not shadow AI — it was part of the rollout of what they did. But these numbers are quite impressive. Over 83% of employees now use the system every week, averaging 50 prompts per week. That’s above comparable enterprise deployments, says the review, quoting OpenAI. Users report average time savings of two to five hours per week — a number worth noting. More than 4,800 custom GPTs have been created internally, and they’re used three times more frequently than the enterprise average. So they’re ahead of the game in that regard. The article goes into more detail about which departments are more active than others, and so forth.
Neville Hobson: It also prompted a thought: other surveys I've seen, and other reporting on resistance from leadership in organizations — that isn't minor. It happens, unfortunately, far too frequently. I'm thinking of keystroke logging of employee activity, covertly auditing computers without telling people, watching which apps they've installed — and, probably more common, company laptops that refuse to install anything not on an approved list, or that report to IT when you try. This is a dreadful situation, and it's common, but I think we're going to see more of it, because that seems to be the way of the world these days: distrust. This is a diminished-trust environment we're talking about. So in all of that, where do we sit in terms of enabling something like this? We can see the advantages of letting employees use tools like this. I think the better way is to do it within the framework of the organization — not, "Oh sure, go ahead and use ChatGPT whenever you want on any device, no big deal." I wouldn't be keen on that. I wouldn't stop it, but I would look at ways of weaning people off that approach. We have to help and encourage them. And that, I suspect, is a hard task for communicators — persuading leadership to do it when the climate in the organization is already resistant.
Shel Holtz: Well, I think it is a hard sell to leadership, but we have data. We're supposed to be engaging in and facilitating two-way communication, and one of the roles of internal comms is listening. It doesn't have to come from direct input through focus groups or surveys — it could be this BlackFog survey. When 49% of employees are using unsanctioned tools, and 63% think that's fine as long as there's no approved option for what they want to do, you may look at that as rogue behavior, but you can also look at it as market research. And communicators are the people best positioned to translate that data into something actionable for leadership. You take it to leadership and say, "Look, this is what's happening." We're the ones who can interpret what the behavior means and pass that along.
Shel Holtz: I think part of our role is that listening through the data that’s already out there — and maybe what we can determine is going on in our own organizations — and taking that to leadership and saying, “Look, this isn’t going to go away if you crack down on it. It’s not going to go away if you block installation on company laptops. People have their own phones. People have their own laptops and tablets. This is going to continue.” And this isn’t new. I mean, this goes back to the earliest days of computers. I think I’ve mentioned this once or twice on the show, but I needed to produce charts and graphs in the mid-‘80s, and I wanted to use Harvard Graphics because somebody had shown it to me and it was what worked, and the company had a different program that was terrible. So I just used Harvard Graphics. I bought my own copy and installed it. There were no blocks back then — you put the floppy disks in the drives and it installed. People are going to do what they need to do to get the job done. Maybe some will pay attention to what the official rules are, but I think the governance needs to be flexible enough to adapt to this. I applaud BBVA for what they did. Again, I don’t think every organization is in a position to replicate it, but I think you can take lessons from what they did.
Neville Hobson: You can. Not everyone can roll out what they rolled out — enterprise licenses and so forth — but some of the things they did, and how they went about them, definitely. One thing the review article points out quite strongly — and it's a very good thing — is that, toward the end of its conclusions, it says that whatever you do, there must be a hard human-in-the-loop rule. Human employees should always own the work. There should be no direct writes to core systems. Internal GPTs need quality scores and guardrails: they should specify scope and context, include samples, and so on. This is simple, scalable, and non-bureaucratic.
Neville Hobson: So that ties back into this emerging phrase — if it even is emerging — of human-centered AI. Let's look carefully at this. It's about people first, technology second, and the human needs to be in the loop. The "hard" human-in-the-loop rule, as the review calls it — I interpret that as meaning someone who's actually cognizant, aware, and able to act on the things that matter: keeping humans in the loop, owning the work rather than handing it to the technology. You've got to think about things like that. And for communicators, that's an important aspect of what they do — keeping that people-first element in mind. So when you're trying to persuade leaders to take a course of action you're recommending, this needs to be in your mind too: humans need to be in control.
Neville Hobson: I have to say, this is great. I love stories and examples like this — more than the ones about disasters, although those are useful to know about as well. Yet I feel that, as communicators, we have a constant task on our hands to explain this to people in organizations, to help others understand. I think shadow AI is a good example. If I were the communications person in an organization, I'd be looking at: how do I persuade people not to do that? How do I persuade people to use the approved tools? And at the same time, how do I persuade leaders to offer employees tools that actually work and are in line with their expectations? There's a bit of a job on our hands. And if budgets get in the way, it's an even harder job. But hey, that's what we're here for. That's part of what we have to do.
Neville Hobson: These are good examples you can learn from. There are elements you could start on. And I think, like most things, Shel, you need to say, “OK, fine — this idea has a dozen constituent elements, and let’s just start with two.” So you don’t try to think, “Oh my god, this is a massive project. How on earth can we do this?” You look at just a couple of things. I like another point the Harvard Business Review makes: ensure that managers know what they’re doing. You can’t expect managers to be persuasive in encouraging others to use AI if they’re not good at it themselves. So there’s another element — you need to train them well, says the Harvard Review. At a minimum, they should learn how to write staffing notes, sensitive communications, and KPI reviews with AI help. So there are some things you could do straight away as a communicator in an organization. I’d say: good luck and godspeed, and it’ll all work out in the end.
Shel Holtz: Yeah, a manager's role in all of this is probably an episode in its own right. I would just reiterate the point you made about the human in the loop. This is a governance element that should be overarching — applying not just to shadow AI but to all use of AI in the organization. It should be a primary consideration in governance not to turn things over entirely to AI. Otherwise, you end up with fake citations going out to clients that paid a million dollars for your work — another little slap on Deloitte's wrist. And that will be a 30 for this episode of For Immediate Release.